The Journal of Psychology, 2005, 139(2), 176–192
Address correspondence to Michela Balconi, Department of Psychology, Catholic Uni-
versity, Largo Gemelli, 1 20123 Milan, Italy; michela.balconi@unicatt.it (e-mail).
Event-Related Potentials Related to
Normal and Morphed Emotional Faces
Department of Psychology
Catholic University of Milan, Italy
Neurological National Hospital C. Besta, Milan, Italy
ABSTRACT. S. Bentin and L. Y. Deouell (2000) have suggested that face recognition is
achieved through a special-purpose neural mechanism, and its existence can be identified
by a specific event-related potential (ERP) correlate, the N170 effect. In the present study,
the authors explored the structural significance of N170 by comparing normal vs. morphed
stimuli. They used a morphing procedure that allows a fine modification of some percep-
tual details (first-order relations). The authors also aimed to verify the independence of face
identification from other cognitive mechanisms, such as comprehension of emotional facial
expressions, by applying an emotion-by-emotion analysis to examine the emotional effect
on N170 ERP variation. They analyzed the peak amplitude and latency variables in the
temporal window of 120–180 ms. The ERP correlate showed a classic N170 ERP effect,
more negative and more posteriorly distributed for morphed faces compared with normal
faces. In addition, they found a lateralization effect, with a greater right-side distribution
of the N170, but not directly correlated to the morphed or normal conditions. Two cogni-
tive codes, structural and expression, are discussed, and the results are compared with the
multilevel model proposed by V. Bruce and A. W. Young (1986, 1998).
Key words: emotion, ERP correlates, facial expression, lateralization, structural encoding
FACE RECOGNITION IS ACHIEVED by a special-purpose mechanism that
probably uses processing strategies that differ from those used for the visual iden-
tification of most other objects (Darwin, 1872; Moscovitch, Winocur, &
Behrmann, 1997; Posamentier & Abdi, 2003). The results of a number of studies
have shown that specific visual objects, such as faces, elicit brain responses that
are different from those elicited by other kinds of visual objects (Gauthier &
Logothetis, 2000; Kanwisher & Moscovitch, 2000). These results are in line with
Balconi & Lucchiari 177
the model of face processing proposed by Bruce and Young (1986, 1998), who
suggest there are functional components in the human face processing system.
The model outlines seven distinct types of information that can be derived from
the face—pictorial, structural, semantic, identity, name, expression, and facial
speech. These types of information are called codes, and they differ in terms of
the cognitive and functional subprocesses that are triggered. Researchers can gen-
erally account for several aspects of face processing in terms of the aforemen-
tioned codes and view recognition memory as the formation and recovery of dif-
ferent kinds of codes.
For example, not only is one able to derive information about persons’ likely
ages or sex, but one can also interpret the meaning of their facial expressions.
Researchers refer to this as the formation of an expression code. Some study results
have shown a functional dissociation between a specific visual mechanism respon-
sible for the structural encoding and a higher level mechanism responsible for
associating the structural representations of a face with semantic information, such
as expression or identity (Bentin & Deouell, 2000; Lane, Chua, & Dolan, 1998).
The ability to do this shows that people can derive different information from the
face. In particular, the structural characteristics allow people to capture the essen-
tial aspects of the face so that they can distinguish the face from other objects.
Structural encoding produces a set of descriptions of the face that can include
view-centered descriptions as well as more abstract descriptions of the global configuration and features. Maurer, Le Grand, and Mondloch (2002) elaborated on Bruce and Young's (1998) model by focusing on three types of configural processing, a term that refers to any phenomenon involving the perception of relationships among the features of a stimulus, such as a face. The three
processes are (a) first-order relations (seeing a face because the features are
arranged with two eyes above the nose, etc.), (b) holistic processing (putting
together the features in a Gestalt), and (c) second-order relations (perceiving the
distance between features).
The authors of the aforementioned cognitive models of face recognition suggest that the structural and semantic features of the face are processed independently and that, from a neuropsychological point of view, the brain regions
involved in the distinct aspects of face processing should be topographically sep-
arated (Holmes, Vuilleumier, & Eimer, 2003). The results of psychophysiologi-
cal studies on the event-related potentials (ERPs) of long latencies to determine
the time course and localization of face processing provide evidence that the
functional specificity of brain mechanisms is responsible for face processing
(Allison et al., 1994; Caldara et al., 2003; Carretié & Iglesias, 1995; Eimer,
2000c; Olivares, Iglesias, & Bobes, 1998).
Electrophysiological recordings have shown a consistent pattern of results.
In particular, the findings of studies in which ERP measures and magnetic resonance imaging (MRI) were used show that the neural correlates for detecting a face (N170 ERP variation) were larger than those for many other stimuli, including houses,
cars, or eyes (Bentin & Deouell, 2000; Bentin & McCarthy, 1999; Eimer, 2000b;
Rossion et al., 2000). This electrophysiological component is not affected by face
familiarity (Bentin & Deouell; Jemel, George, Olivares, Fiori, & Renault, 1999),
facial expressions (Holmes et al., 2003; Puce, Allison, & McCarthy, 1999), or
other identity factors, such as race (Caldara et al., 2003). The structural encoding
process is probably the final stage of the visual analysis, and its product is an
abstract sensory representation of the face, a representation that is independent
of context or viewpoint.
If ERP components are to be used as markers for the successive stages
involved in face perception and recognition, then a face-specific ERP effect
should be sensitive to manipulations that affect the efficiency of face processing.
For example, the orientation of the stimulus may influence N170 amplitude, as
has been observed in previous studies (Rossion et al., 1999). This effect may
occur because some orientations of the face are more difficult to process than a
normal, upright, frontal view of the face in a recognition task. In a similar way,
a morphing procedure that disrupts face patterns in terms of their perceptual
details should influence the processing of facial stimuli, with a corresponding
increase in N170 peak amplitude (Jemel et al., 1999).
Objectives and Hypotheses
The first aim of the present study was to validate the functional model proposed by Bruce and Young (1998). In particular, we compared the structural effect of face recognition for normal and morphed stimuli. A specific variation of the structural index (N170) was expected for morphed faces compared with normal emotional faces. We hypothesized that because N170 was unaffected by some
semantic factors, such as the familiarity of faces or the novelty of the stimulus, it
was possible that the effect of face morphing on N170 would be directly linked
to the disruption of face recognition caused by a structural morphing procedure
and not to its semantic value.
We used faces that showed a particular configurational pattern in terms of emo-
tional expression to show the independence of the structural information decoding
of the stimulus from the comprehension of the facial expression (Streit, Wölwer,
Brinkmeyer, Ihl, & Gaebel, 2000). The results of ERP studies of emotional expres-
sions have shown a specific ERP correlate for emotional expressions, such as a neg-
ative variation during a temporal window of 230–400 ms (Balconi & Pozzoli,
2003b; Davidson, 2001; Herrmann et al., 2002; Lane et al., 1998; Marinkovic &
Halgren, 1998; Sato, Takanori, Sakiko, & Michikazu, 2001; Vanderploeg, Brown,
& Marsh, 1987). No other researchers have explored the dissociation between struc-
tural manipulation and expression variability by using a morphing procedure on
emotional facial expression. Therefore, our second aim in this study was to distin-
guish ERP profiles that reflected the dynamic of neural patterns involved in facial
perception (structural level) from emotional expression analysis (expression level).
Our third aim, a related experimental goal, was to exclude any effect of emotional expressions on N170 ERP variation in the normal (not morphed) condition. Therefore, we conducted an emotion-by-emotion analysis to
examine possible differences between emotional patterns (i.e., fear vs. happi-
ness). In fact, we expected that the N170 would be unaffected by expression vari-
ations (emotional patterns) because the early negative deflection is a specific
marker of structural features (and of their modifications) of the face.
Our fourth aim in the present study concerned the localization effect. Some
researchers have argued that the right hemisphere is preferentially involved in
configurational analysis, whereas the left hemisphere is more specialized for
analytical processing (Federmeier & Kutas, 2002). Therefore, the right hemi-
sphere may play a specific role in face processing. Bentin and Deouell (2000)
observed a relatively circumscribed region of N170 distribution that included the
posterior–inferior side of the temporal lobe. In addition, fMRI activation in
regions of the ventral temporo-occipital cortex and fusiform face area (FFA) is
larger for faces than it is for a variety of nonface objects. Therefore, we predicted
that the temporo-occipital evoked negativities would also preferentially involve
the right side of the scalp.
Method

Participants

Twenty students (11 women, 9 men; mean age = 23.6 years), who were enrolled in the psychology faculty of the Catholic University of Milan, took part in the experiment. All the participants were right-handed, had normal or corrected-to-normal vision, and were normal in terms of their neurological profile. They gave informed written consent for participating in the study. They were not paid for their participation.

Materials

The stimuli were gray-scale photographs of male faces depicting different
emotional expressions (Ekman & Friesen, 1976). The photographs depicted fear-
ful, happy, and sad faces. Three of the faces were produced ex novo (i.e., obtained
by a morphing procedure; Calder, Young, Perrett, Etcoff, & Rowland, 1996). The
computerized morphing allowed for finer control over facial patterns. (If one were
to interpolate between two different expressions of prototypical emotions, then
one would create a distorted configuration that preserved the emotional value of
the expression but that showed incongruity in some of its components [first-order relations] without disturbing symmetry [second-order relations]).
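At the pixel level, the interpolation underlying such a blend can be sketched as a weighted average of two expression photographs. This is a simplified stand-in for the landmark-based morphing of Calder et al. (1996), which also warps facial geometry; the function and array names here are illustrative:

```python
import numpy as np

def blend_expressions(face_a, face_b, alpha=0.5):
    """Pixel-wise linear interpolation between two gray-scale face
    images of equal size; alpha = 0 returns face_a, alpha = 1 face_b.
    A true morph (Calder et al., 1996) also warps facial landmarks;
    this sketch blends intensities only."""
    face_a = np.asarray(face_a, dtype=float)
    face_b = np.asarray(face_b, dtype=float)
    if face_a.shape != face_b.shape:
        raise ValueError("faces must share the same pixel dimensions")
    return (1.0 - alpha) * face_a + alpha * face_b

# Example: a 50/50 blend of two toy 2 x 2 "images"
happy = np.array([[200.0, 180.0], [160.0, 140.0]])
fearful = np.array([[100.0, 120.0], [140.0, 160.0]])
morphed = blend_expressions(happy, fearful, alpha=0.5)
```

Intermediate alpha values yield the distorted-but-recognizable configurations the authors describe.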
Finally, we used a neutral face (emotionally neutral expression) as the control stimulus. All the stimuli were tested for their emotional validity in a preexperimental phase (9 participants) in which we used a 5-point Likert-type scale.
We applied a univariate analysis of variance (ANOVA) for repeated measures to
the stimulus type (2, normal vs. morphed). We found no differences in the seman-
tic value of the stimuli between normal and morphed faces, F(1, 8) = 0.80, p = .410, η² = .10. In fact, morphed faces were recognized as semantically valid stimuli that had a clear prevalence of one or another emotional value. Moreover, the participants were able to categorize each stimulus in one specific emotional category (i.e., fear vs. happiness) with a likelihood that was greater than chance.
Procedure

After we placed the electrodes on the participants, they sat in a comfortable
chair in a quiet, darkened room that had been tested for electromagnetic interfer-
ence. They sat 80 cm from the screen. The faces were presented on a 17-in. CRT
monitor, and they measured 7° vertical and 11° horizontal visual angles. A fixa-
tion point was projected at the center of the screen (a white point). Each face was
presented 10 times in a pseudorandom order for a total of 70 presentations, and
it was presented for 500 ms with an interstimulus interval (ISI) of 1,500 ms. The
ISI was constant, with the same dark background. The participants were naive
about the aims of the study, they were not informed about the emotional content
of the stimuli, and they were instructed only to pay attention to each face. There
was no overt task to minimize sensorimotor interference, so that the participant
would not be more attentive to the emotional stimuli than to the neutral stimuli.
The experiment consisted of a training block and an experimental block. In the
training block, before the ERPs were recorded, the participants were familiarized
with the overall procedure (a block of 21 trials).
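A pseudorandom presentation order of this kind (each of the 7 faces shown 10 times, 70 trials in all) can be sketched as follows. The no-three-in-a-row constraint is an assumption, since the article states only that the order was pseudorandom, and the stimulus labels are illustrative:

```python
import random

def build_trial_sequence(faces, repetitions=10, seed=7):
    """Pseudorandom presentation order: each face appears
    `repetitions` times, with no face shown three times in a row
    (this constraint is an assumption, not stated in the article)."""
    rng = random.Random(seed)
    pool = [face for face in faces for _ in range(repetitions)]
    while True:
        rng.shuffle(pool)
        if all(not (pool[i] == pool[i - 1] == pool[i - 2])
               for i in range(2, len(pool))):
            return list(pool)

# Seven stimuli: three normal emotions, three morphed, one neutral.
faces = ["fear", "happy", "sad",
         "fear_morph", "happy_morph", "sad_morph", "neutral"]
sequence = build_trial_sequence(faces)  # 7 faces x 10 = 70 trials
```

Rejection sampling keeps the code simple; with 70 trials over 7 stimuli, an acceptable shuffle is found after only a few attempts.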
Electroencephalograph (EEG) and ERP Recording Techniques
The participants wore an electrocap, which measured a continuous EEG
while they performed the tasks. We recorded the EEG from 14 Ag/AgCl elec-
trodes that were all referenced to the earlobes. We recorded the vertical electro-
oculogram (EOG) from bipolar electrodes above and at the outer canthus of the
right eye. The 14 scalp sites used according to the international 10–20 system
(Jasper, 1958) were: (a) four midline, Fz, Cz, Pz, and Oz; (b) right or left frontal,
F4 and F3; (c) central, C4 and C3; (d) temporal, T4 and T3; (e) occipital, O2 and
O1; and (f) parietal, P4 and P3. A ground electrode was placed on the forehead.
Electrode impedance was kept below 5 kΩ. The EEG data were sampled by an
amplifier (NeuroScan SYNAMP 4.2) for 1,000 ms (100-ms baseline) at 500 Hz.
We used an artifact rejection procedure (each epoch was filtered with a digital 1–30 Hz band-pass filter). Among the remaining trials, we also used a visual detection procedure to discard possibly artifact-contaminated EEG segments (i.e., trials invalidated by interference such as blinks). We rejected 8% of the epochs for EOG or
muscular artifacts. We computed the averaged evoked responses (offline) for
each participant. Therefore, for each participant, we computed three wave forms
elicited by (a) normal stimuli, (b) morphed stimuli, and (c) neutral stimuli. We
then obtained a grand mean average across the participants and for one tempo-
ral window, 120–180 ms. Peak amplitude measurement was quantified relative
to 100 ms prestimulus.
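The averaging pipeline described above (1–30 Hz band-pass, 100-ms prestimulus baseline correction, trial averaging, and peak extraction in the 120–180-ms window) can be sketched with NumPy/SciPy. The function names, the Butterworth filter design, and its order are assumptions; the sampling rate, baseline, and peak window follow the article:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500           # sampling rate in Hz, as in the article
BASELINE_MS = 100  # prestimulus baseline, as in the article

def bandpass(signal, lo=1.0, hi=30.0, fs=FS, order=4):
    """Zero-phase 1-30 Hz digital band-pass filter, as described in
    the article (the Butterworth design and order are assumptions)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def average_erp(epochs):
    """Baseline-correct each epoch (rows = trials) against the
    100-ms prestimulus interval, then average across trials."""
    n_base = int(BASELINE_MS * FS / 1000)
    corrected = epochs - epochs[:, :n_base].mean(axis=1, keepdims=True)
    return corrected.mean(axis=0)

def n170_peak(erp, window_ms=(120, 180)):
    """Most negative point inside the 120-180-ms poststimulus window;
    returns (amplitude, latency_ms), with time zero at stimulus onset."""
    n_base = int(BASELINE_MS * FS / 1000)
    i0 = n_base + int(window_ms[0] * FS / 1000)
    i1 = n_base + int(window_ms[1] * FS / 1000)
    peak = i0 + int(np.argmin(erp[i0:i1]))
    return float(erp[peak]), (peak - n_base) * 1000.0 / FS
```

On synthetic epochs with a negative deflection 150 ms after stimulus onset, `n170_peak` recovers the expected amplitude and latency.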
Results

Behavioral Data
We asked the participant to identify the emotion on a 5-point Likert-type scale
so that we could explore the correct attribution of the semantic value to the emo-
tional expressions for both normal and morphed categories after the experimental
session was completed. We applied a univariate ANOVA for repeated measures to
the stimulus type (2, normal vs. morphed). We applied the Greenhouse–Geisser
(1959) correction to protect against Type I error when we evaluated effects with
more than one degree of freedom. The evaluation of normal stimuli did not differ
from morphed stimuli, F(1, 19) = 0.44, p = .531, η² = .13, with an attribution of congruence and pertinence to normal stimuli (M = 4.30, SD = 0.45) and an adequate emotional value to morphed stimuli (M = 4.02, SD = 0.30). We applied two successive ANOVAs for repeated measures to the three normal emotions and to the three morphed emotions. The two ANOVAs showed no significant effects, F(2, 19) = 1.22, p = .129, η² = .17, and F(2, 19) = 0.78, p = .511, η² = .10, for normal and morphed emotions, respectively. Therefore, the within-category stimuli were fairly homogeneous in their representation.
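The within-subject comparison applied to these ratings can be sketched as a one-way repeated-measures ANOVA on a subjects × conditions array. This is a minimal illustration of the analysis, with the Greenhouse–Geisser sphericity correction omitted and the data purely illustrative:

```python
import numpy as np

def rm_anova_1way(data):
    """One-way repeated-measures ANOVA on a (subjects x conditions)
    array. Returns (F, df_effect, df_error). A minimal sketch of the
    within-subject test applied to the ratings; the Greenhouse-Geisser
    sphericity correction is omitted here."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()   # conditions
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_total = ((data - grand) ** 2).sum()
    ss_error = ss_total - ss_cond - ss_subj                  # residual
    df_cond, df_error = k - 1, (n - 1) * (k - 1)
    f_value = (ss_cond / df_cond) / (ss_error / df_error)
    return f_value, df_cond, df_error

# Toy ratings: 3 subjects x 2 stimulus types (normal vs. morphed)
ratings = [[1.0, 2.0], [2.0, 4.0], [3.0, 3.0]]
f_value, df1, df2 = rm_anova_1way(ratings)
```

Partitioning out the between-subjects sum of squares is what distinguishes this from an independent-groups ANOVA: each participant serves as his or her own control.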
ERP Data
Peak amplitude (120–180-ms temporal window). We entered the peak amplitude
measurements into three separate two-way repeated-measures ANOVAs with the
stimulus type (normal, morphed, and neutral) and electrode sites as repeated mea-
sure factors. The mean and standard deviation values as a function of the condi-
tion are shown in Table 1.
We applied the Greenhouse–Geisser (1959) correction when we evaluated
effects with more than one degree of freedom. For the peak measurement, the
ANOVA showed no significant main effect for condition or for electrode sites
(see Table 2).
In contrast, the Condition × Site interaction was significant. The normal, morphed, and neutral stimuli produced different ERP variations, with a more negative peak at about 150 ms poststimulus for the morphed condition.
TABLE 1. Mean Amplitude of the Peak Measure in Each Site Corresponding to the 120–180-ms Temporal Window

Peak amplitude (µV)
Electrode site    Normal    Morphed    Neutral
M                 –4.81     –5.89      –4.13
SD                 0.80      0.39       0.33
M                 –3.82     –8.39      –5.81
SD                 0.49      0.49       0.28
M                 –3.42     –6.73      –4.44
SD                 0.79      0.49       0.25
M                 –5.96     –6.69      –5.86
SD                 0.55      0.65       0.25
M                 –6.11     –5.96      –5.88
SD                 0.61      0.54       0.42
M                 –5.91     –5.68      –4.63
SD                 0.38      0.43       0.58
M                 –4.21     –5.01      –4.12
SD                 0.40      0.61       0.39
M                 –7.90     –6.96      –6.80
SD                 0.51      0.30       0.50
M                 –6.29     –8.01      –7.03
SD                 0.49      0.66       0.56
M                 –7.89     –9.26      –6.85
SD                 0.52      0.50       0.54
M                 –6.09     –7.96      –5.96
SD                 0.38      0.49       0.62
M                 –5.96     –7.75      –6.13
SD                 0.41      0.27       0.47
M                 –7.98     –8.33      –6.80
SD                 0.35      0.68       0.22
M                 –9.54     –6.79      –7.02
SD                 0.83      0.49       0.89
Total M           –6.13     –7.10      –5.81

Note. Amplitude is expressed in µV.

We applied two successive analyses to the peak measurement so that we could analyze more specifically the contribution of the position (frontal, Fz; central, Cz; posterior, Pz and Oz; and lateralization, right and left) on ERP variation. We conducted a 3 (condition: normal, morphed, neutral) × 4 (position: frontal, central, posterior, lateralization) ANOVA for repeated measures. The results of that ANOVA are shown in Table 2. We found no differences for the two main effects (also see Table 2). In contrast, the two-way interaction was significant. Therefore, not only did the peak amplitude seem to differentiate the normal from the morphed expressions, but the localization of the peak also differed as a function of the frontal and posterior positions.
TABLE 2. Repeated-Measures ANOVAs Applied to the Peak Amplitude Measure

Effect                        F       p      η²
A. ANOVA (3 × 14)
Condition                    0.58     ns     .09
Electrodes                   2.13     ns     .20
Condition × Electrodes       2.20    .04     .31
B. ANOVA (3 × 4)
Condition                    0.65     ns     .11
Position                     0.41     ns     .07
Condition × Position         3.20    .01     .39
C. ANOVA (3 × 2)
Condition                    0.38     ns
Side                         2.60    .01     .34
Condition × Side             0.63     ns     .12
We applied Dunnett’s (1955) test to the data to explain the two simple effects.
First, we compared the three stimulus types at each electrode, and we found that in the posterior site (Oz), the morphed stimuli produced a more negative N170 peak variation than did normal stimuli, F(1, 19) = 3.12, p < .003, η² = .41, and neutral facial expressions, F(1, 19) = 2.26, p = .19, η² = .28 (see Table 2). The Fz, Cz, and Pz sites did not show different ERP profiles for the three experimental conditions. Figure 1 shows the wave profiles (peak amplitudes) for the Oz site in each experimental condition.
For the second simple effect (paired comparison of the four electrodes within each type), Dunnett's (1955) test showed a significant effect for the morphed type in the Fz–Oz comparison, F(1, 19) = 3.09, p < .001, η² = .50, and the Cz–Oz comparison, F(1, 19) = 3.15, p < .001, η² = .51. No other comparison was statistically significant. Finally, we applied a 3 (condition: normal, morphed, neutral) × 2 (side: right, left) ANOVA for repeated measures to explore the lateralization effect. The analysis showed that of the main effects, only side was significant (see Table 2). In addition, the interaction effect was not significant. The mean values of peak amplitude as a function of the lateralization (right and left) are shown in Figure 2.
Latency. For the peak variable, we entered the latency measurements into three separate two-way repeated-measures ANOVAs, with the stimulus type (normal, morphed, and neutral) and electrode sites as repeated-measure factors.
FIGURE 1. Mean value of the peak amplitude in the posterior (Oz) site for normal, morphed, and neutral conditions (negative up).

The mean and standard deviation values as a function of the condition are shown in Table 3. For the latency measurement, the ANOVA showed no significant main effect for condition or electrode site, nor for the Condition × Site interaction.
We applied two successive analyses to the latency measurement to analyze more specifically the contribution of the lateralization (right and left) and position (frontal, Fz; central, Cz; and posterior, Pz and Oz) on ERP variation. We conducted a 3 (condition: normal, morphed, neutral) × 4 (position: Fz, Cz, Pz, Oz) ANOVA for repeated measures. No differences were found for the two main
effects or for their interaction. Therefore, the peak latency did not show any sig-
nificant variation as a function of the scalp localization. Finally, a 3 (condition:
normal, morphed, neutral) × 2 (side: right, left) ANOVA for repeated measures did not reveal a significant difference for the two main factors or their interaction.
Emotion-by-Emotion Analysis
In the previous analysis, we observed a significant difference between the nor-
mal emotional patterns and the morphed patterns. Nevertheless, we could not exclude the possibility that the N170 effect had been influenced by the different emotional values of the facial expressions. We applied an emotion-by-emotion analysis to the peak data to specify the real significance of the N170 (structural vs. semantic). In particular, we analyzed the ERP profiles of the three normal faces (i.e., those of happiness, fear, and sadness). The morphological analysis of the peak profile and the
latency measures showed a similar wave profile for the three emotions inside the
temporal window of 120–180 ms, and it emerged at about 150 ms poststimulus.
Therefore, the results of the analysis underline the absence of an effect of the emotional value of the expressions on the N170 structural index (see Figure 3).
FIGURE 2. Peak amplitude as a function of the lateralization, right and left.

TABLE 3. Mean Latency of the Peak Measure in Each Site Corresponding to the 120–180-ms Temporal Window

Peak latency (ms)
Electrode site    Normal    Morphed    Neutral
M                  146       158        152
SD                  12        14         16
M                  151       155        167
SD                  13        17         13
M                  142       152        154
SD                  14        17         19
M                  163       152        138
SD                  48        38         40
M                  159       169        139
SD                  27        21         18
M                  139       142        152
SD                  15        13         11
M                  147       136        138
SD                  16        20         10
M                  140       152        150
SD                  14        15         26
M                  155       154        147
SD                  27        38         30
M                  140       142        137
SD                  14        15         29
M                  144       148        150
SD                  24        22         16
M                  142       158        143
SD                  16        26         37
M                  163       160        152
SD                  28        37         26
M                  136       142        137
SD                  25        22         15
Total M            147       140        146

Note. Latency is expressed in ms.

FIGURE 3. Peak amplitude of N150 for each emotional expression.

Discussion

In the present study, we investigated the effect of face morphing on the face-specific N170 ERP marker and the independence of the structural analysis from the semantic evaluation of the facial expression. In line with the results of previous studies (Bentin & Deouell, 2000), our data showed a systematic negative deflection at about 150 ms poststimulus, with an increasing negativity for the morphed stimuli compared with the normal facial stimuli. In contrast, the disruption of facial details (manipulation of eyes, nose, or mouth) did not have an effect on the later ERP correlates, such as the N400-like semantic effect. We also investigated whether the emotional value of the stimulus affected the structural N170 index.
First, with regard to the effect of face morphing on the face-specific N170
ERP marker, we observed the N170 negative deflection for both normal emo-
tional and morphed faces, but the peak variation was higher for morphed faces.
The same difference was revealed between morphed and normal neutral stimuli,
whereas neutral and emotional normal stimuli were not dissimilar. Second, dif-
ferent ERP correlates could be associated with distinct decoding processes, and
we found that different face-specific ERP components are likely to reflect suc-
cessive stages in the processing of faces, from the perceptual analysis and struc-
tural encoding of face components to the semantic analysis. The functional-level
model proposed by Bruce and Young (1998) underscored a functional dissocia-
tion between the different codes of the face recognition process because they
operate as distinct subprocesses that are able to interpret the distinctive features
or components of a facial stimulus. From this perspective, the structural encoding of the face is seen as a domain-specific process that focuses on the structural information of the configuration, and the N170 may reflect an early expression-independent stage of
face processing. Similarly, as Maurer et al. (2002) suggested in their three-level
model, the manipulation and disruption of first-order relations (seeing a stimulus
as a face with some specific details) is a more sensitive mechanism in adults’ and
infants’ perception than is the mechanism responsible for the Gestalt (i.e., the
holistic processing of putting together the features in a Gestalt) or second-order
relations decoding (relative distance between the element of the configuration).
The results of the present study have provided evidence for the independence of
the two aspects of configural processing.
Moreover, despite the early differentiation between the two ERP profiles, we
observed no other differences between the normal and the morphed stimuli in the
long latency ERP components. Therefore, although we did not report the statis-
tical analysis of a second temporal window (350–450 ms), the morphological
comparison between the two waves suggests an overlapping of profiles between
the two conditions. This overlapping is particularly interesting if one considers
other studies in which results revealed a classic N400 effect as a result of a seman-
tic incongruence (Kutas & Federmeier, 2000) for both words and facial stimuli
(Balconi & Pozzoli, 2003a; Schendan, Ganis, & Kutas, 1998). Therefore, we can
conclude that the morphing procedure does not introduce a significant variation
in the emotional value of the morphed stimuli, as the behavioral responses of the
participants suggested, and that, in contrast, the morphing procedure has an effect
only on the structural elaboration.
The third finding of the present study was that the face-sensitive N170 component is not modulated by emotional expression, irrespective of which emotion the faces expressed. The structural encoding of faces is certainly less sensitive to information derived from emotional facial expression than are the later recognition stages of face processing. This result is comparable with
previous studies that focused on the analysis of the familiarity effect. For exam-
ple, Eimer (2000c) found that structural encoding is independent from the level
of familiarity of the stimuli. In contrast, other cognitive factors may have an effect
on the ERP component, for example, spatial attention, which is supposed to mod-
ulate the structural encoding of faces (Holmes et al., 2003). Some researchers
have focused on the sensitivity of the N170 to some perceptual and cognitive fac-
tors, such as attentional focus (Eimer, 2000a) or the effect of the recognition
process (Holmes et al.).
The semantic value of the emotional stimuli seems to be processed only after
the structural features have been encoded. In fact, the long latency effect that is
revealed for emotional faces (probably the N200 effect) could be a marker of the
emotional involvement of the participants (degree of arousal), with an increase in
peak amplitude as a function of an increase in arousal (Balconi, 2003). Never-
theless, a wider analysis of the effect of the stimuli's expressions on the ERP correlates should be carried out (Balconi & Pozzoli, 2003a). In fact, in the present
research, although the morphological analysis of each emotion was specific and
accurate, it did not allow for a more systematic quantitative comparison between
the three emotional expressions. For this reason, we should conduct a follow-up
analysis in the future.
Another main effect revealed in the present study was the absence of a sig-
nificant differentiation between the three experimental conditions as a function
of the latency parameter. In fact, in contrast to other studies’ results (Eimer,
2000c), the abnormal condition of the morphed stimuli did not show a delay of the negative peak; rather, the latency of the peak was about 150 ms poststimulus. As shown by previous study results, the participants are generally slower and
less accurate in recognizing a face presented in a composite or disrupted manner
because manipulation disrupts perceptual processing (Leder & Bruce, 2000;
Maurer et al., 2002).
Nevertheless, our result may suggest that, despite the more complex condi-
tion that participants have to process in the case of an abnormal stimulus, they
are able to realize the structural encoding in a brief time after the stimulus pre-
sentation. The processing of the face’s structural features must be considered as
an earlier perceptual operation that has to be concluded in a brief time and, there-
fore, before the processing of the first-order relation.
The localization and lateralization effects in the present study are also inter-
esting. First, the negative variation is produced in a specific cortical site. Our data
showed a posterior prevalence of the N170 and, to be more specific, a more negative peak variation for morphed figures. If one compares these results with those of previous studies, one notices a substantial overlap in the distribution of the structural marker (Bentin & Deouell, 2000). The temporo-occipital
distribution found in previous studies (Rossion et al., 2000) may confirm the pres-
ence of a cortical generator for the structural elaboration of the facial patterns.
In addition, if the N170 is an indicator of the underlying representation used
for normal face and morphed face encoding, then our results support the involve-
ment of both hemispheres, but with a greater contribution of the sites on the right
side of the scalp. Our results are compatible with those of many other studies (e.g.,
Davidson, 2001), which suggest that face processing is preferentially assumed by
the right hemisphere of the brain. In fact, the right hemisphere is specialized for
the processing of global and configural aspects of a face. Our results are also consistent with studies demonstrating that hemispheric differences in visual processing reflect a right-side superiority owing to the perceptual, bottom-up direction of face processing rather than a top-down process. On this view, one would expect no hemispheric difference between the two conditions (normal and morphed); indeed, we hypothesize that the right hemisphere is specialized for both normal and morphed emotional expressions, the latter requiring more processing and reprocessing of the perceptual details of the stimulus.
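The right-side predominance discussed here is often summarized with a laterality index computed from homologous left and right electrodes. The sketch below uses a common (R - L)/(R + L) formulation on absolute amplitudes; the electrode pairing and the amplitude values are hypothetical, offered only to show the computation.

```python
def laterality_index(left_amp, right_amp):
    """(R - L) / (R + L) on absolute amplitudes; positive values
    indicate right-side predominance of the (negative) N170 peak."""
    l, r = abs(left_amp), abs(right_amp)
    return (r - l) / (r + l)

# hypothetical mean N170 peak amplitudes (uV) at a left (e.g., T5)
# and right (e.g., T6) temporo-occipital site; values illustrative
li = laterality_index(left_amp=-4.0, right_amp=-6.0)
```

A positive index (here 0.2) would indicate the greater right-side contribution reported in the text; the index could be computed per condition to test whether lateralization interacts with the normal versus morphed manipulation.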
In summary, we found that the face-specific N170 reflects a cognitive module for the structural decoding of facial stimuli. We showed that the N170 responds more strongly (i.e., with greater amplitude) to faces than to other stimulus categories (Eimer, 2000b) and that it is not affected by semantic factors, such as facial expression or the emotional content of the stimulus. As previous researchers have reported, other semantic factors, such as the familiarity of the face (Bentin & Deouell, 2000) or its race (Caldara et al., 2003), likewise do not influence the structural index. In addition, the N170 is evoked by both explicit and implicit tasks, including tasks that do not require the explicit identification of faces. Finally, our results underlined that a specific population of neurons responds in a largely invariant manner over a wide range of conditions, all of which require structural information processing. The topographic profile of the ERP shows that the N170 is mainly distributed over a specific region of the cortex, the temporo-occipital area; one could regard this region of the scalp as the cortical area for the structural encoding of the face.
These features of the negative ERP variation suggest that the N170 effect can be
the ERP marker of a cortical module, the structural encoding module proposed
by Bruce and Young (1998).
REFERENCES

Allison, T., Ginter, H., McCarthy, G., Nobre, A. C., Puce, A., Luby, M., et al. (1994). Face
recognition in human extrastriate cortex. Journal of Neurophysiology, 71, 821–825.
Balconi, M. (2003). ERPs, semantic attribution, and facial expressions of emotions. Con-
sciousness and Emotion, 4, 63–81.
Balconi, M., & Pozzoli, U. (2003a). Conscious and unconscious in emotional face pro-
cessing. Journal of the International Neuropsychological Society, 9, 304–305.
Balconi, M., & Pozzoli, U. (2003b). Face-selective processing and the effect of pleasant
and unpleasant emotional expressions on ERP correlates. International Journal of Psy-
chophysiology, 49, 67–74.
Bentin, S., & Deouell, L. Y. (2000). Structural encoding and identification in face process-
ing: ERP evidence for separate mechanisms. Cognitive Neuropsychology, 17, 35–54.
Bentin, S., & McCarthy, M. (1999). Faces and face-components: ERP evidence for a dual-
mechanism for encoding physiognomic information. NeuroReport, 10, 823–827.
Bruce, V., & Young, A. W. (1986). Understanding face recognition. British Journal of Psy-
chology, 77, 305–327.
Bruce, V., & Young, A. W. (1998). A theoretical perspective for understanding face recog-
nition. In A. W. Young (Ed.), Face and mind. Oxford, England: Oxford University Press.
Caldara, R., Thut, G., Servoir, P., Michel, C. M., Bovet P., & Renault, B. (2003). Face ver-
sus non-face object perception and the “other-race” effect: A spatio-temporal event-
related potential study. Clinical Neurophysiology, 114, 515–528.
Calder, A. J., Young, A. W., Perrett, D. I., Etcoff, N. L., & Rowland, D. (1996). Categori-
cal perception of morphed facial expressions. Visual Cognition, 3, 81–117.
Carretié, L., & Iglesias, J. (1995). An ERP study on the specificity of facial expression
processing. International Journal of Psychophysiology, 19, 183–192.
Darwin, C. (1872). The expression of the emotions in man and animals. London: John Murray.
Davidson, R. J. (2001). The neural circuitry of emotion and affective style: Prefrontal cor-
tex and amygdala contribution. Social Science Information Sur Les Sciences Sociales,
40, 11–39.
Dunnett, C. W. (1955). A multiple comparison procedure for comparing several treatments
with a control. Journal of the American Statistical Association, 50, 1096–1121.
Eimer, M. (2000a). Attentional modulations of event-related brain potentials sensitive to
faces. Cognitive Neuropsychology, 17, 103–117.
Eimer, M. (2000b). Effects of face inversion on the structural encoding and recognition of
faces: Evidence from event-related brain potentials. Cognitive Brain Research, 10,
Eimer, M. (2000c). Event-related brain potentials distinguish processing stages involved
in face perception and recognition. Clinical Neurophysiology, 111, 694–705.
Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto, CA: Consulting
Psychologists Press.
Federmeier, K. D., & Kutas, M. (2002). Picture the difference: Electrophysiological inves-
tigations of picture processing in the two cerebral hemispheres. Neuropsychologia, 40,
Gauthier, I., & Logothetis, N. K. (2000). Is face recognition not so unique after all? Cog-
nitive Neuropsychology, 17, 125–144.
Greenhouse, S. W., & Geisser, S. (1959). On methods in the analysis of profile data. Psy-
chometrika, 24, 95–112.
Herrmann, M. J., Aranda, D., Ellgring, H., Mueller, T. J., Strik, W. K., Heidrich, A., et al.
(2002). Face-specific event-related potential in humans is independent from facial
expression. International Journal of Psychophysiology, 45, 241–244.
Holmes, A., Vuilleumier, P., & Eimer, M. (2003). The processing of emotional facial
expressions is gated by spatial attention: Evidence from event-related brain potentials.
Cognitive Brain Research, 16, 174–184.
Jasper, H. H. (1958). The ten-twenty electrode system of the International Federation.
Electroencephalography and Clinical Neurophysiology, 10, 371–375.
Jemel, B., George, N., Olivares, E., Fiori, N., & Renault, B. (1999). Event-related poten-
tials to structural familiar face incongruity processing. Psychophysiology, 36, 437–452.
Kanwisher, N., & Moscovitch, M. (2000). The cognitive neuroscience of face processing:
An introduction. Cognitive Neuropsychology, 17, 1–13.
Kutas, M., & Federmeier, K. D. (2000). Electrophysiology reveals semantic memory use
in language comprehension. Trends in Cognitive Science, 4, 463–470.
Lane, R. D., Chua, P. M. L., & Dolan, R. J. (1998). Common effects of emotional valence,
arousal and attention on neutral activation during visual processing of pictures. Neu-
ropsychologia, 37, 989–997.
Leder, H., & Bruce, V. (2000). When inverted faces are recognized: The role of configural
information in face recognition. The Quarterly Journal of Experimental Psychology, 53,
Marinkovic, K., & Halgren, E. (1998). Human brain potentials related to the emotional
expression, repetition, and gender of faces. Psychobiology, 26, 348–356.
Maurer, D., Le Grand, R., & Mondloch, C. J. (2002). The many faces of configural pro-
cessing. Trends in Cognitive Sciences, 6, 255–260.
Moscovitch, M., Winocur, G., & Behrmann, M. (1997). What is special about face recog-
nition? Nineteen experiments on a person with visual object agnosia and dyslexia but
normal face recognition. Journal of Cognitive Neuroscience, 9, 555–604.
Olivares, E. I., Iglesias, J., & Bobes, M. A. (1998). Searching for face-specific long latency
ERPs: A topographic study of effects associated with mismatching features. Cognitive
Brain Research, 7, 343–356.
Posamentier, M. T., & Abdi, H. (2003). Processing faces and facial expressions. Neu-
ropsychology Review, 13, 113–143.
Puce, A., Allison, T., & McCarthy, G. (1999). Electrophysiological studies of human
face perception: III. Effects of top-down processing on face-specific potentials. Cere-
bral Cortex, 9, 445–458.
Rossion, B., Delvenne, J. F., Debatisse, D., Goffaux, V., Bruyer, R., Crommelinck, M., et
al. (1999). Spatio-temporal localization of the face inversion effect: An event-related
potentials study. Biological Psychology, 50, 173–189.
Rossion, B., Gauthier, I., Tarr, M. J., Despland, P., Bruyer, R., Linotte, S., et al. (2000).
The N170 occipito-temporal component is delayed and enhanced to inverted faces but
not to inverted objects: An electrophysiological account of face-specific processes in the
human brain. Neuroreport, 11, 69–74.
Sato, W., Kochiyama, T., Yoshikawa, S., & Matsumura, M. (2001). Emotional expression boosts
early visual processing of the face: ERP recording and its decomposition by indepen-
dent component analysis. Neuroreport, 12, 709–714.
Schendan, H. E., Ganis, G., & Kutas, M. (1998). Neurophysiological evidence for visual
perceptual categorization of words and faces within 150 ms. Psychophysiology, 35,
Streit, M., Wölwer, W., Brinkmeyer, J., Ihl, R., & Gaebel, W. (2000). Electrophysiologi-
cal correlates of emotional and structural face processing in humans. Neuroscience Let-
ters, 278, 13–16.
Vanderploeg, R. D., Brown, W. S., & Marsh, J. T. (1987). Judgments of emotion in words
and faces: ERP correlates. International Journal of Psychophysiology, 5, 193–205.
Original manuscript received January 15, 2004
Final revision accepted May 19, 2004
... First, how do ERP correlates that are sensitive to valence for audiovisual stimuli relate to those for auditory and visual stimuli presented alone? Previous research has documented early ERP effects for both visual and auditory stimuli, although distinct early components were identified in different studies depending on specific experimental procedures and stimuli (Balconi & Lucchiari, 2005;Balconi & Lucchiari, 2007;Bayer & Schacht, 2014;Bublatzky & Schupp, 2011;Carretié & Iglesias, 1995;Carretié, Hinojosa, Martín-Loeches, Mercado, & Tapia, 2004;Conroy & Polich, 2007;Olofsson & Polich, 2007;Paulmann & Kotz, 2008;Schirmer & Cunter, 2017; for a review, see Olofsson, Nordin, Sequeira, & Polich, 2008). A late component, designated the late positive potential (LPP), has been consistently linked to affective processing using visual stimuli such as pictures, words, and faces (Bayer & Schacht, 2014; for review, see Hajcak et al., 2012). ...
... Given the differences in waveforms observed for auditory and visual affective stimuli in prior research, we do not anticipate a close correspondence will be the case (Balconi & Lucchiari, 2005;Balconi & Lucchiari, 2007;Bayer & Schacht, 2014;Bublatzky & Schupp, 2011;Carretié & Iglesias, 1995;Carretié, Hinojosa, Martín-Loeches, Mercado, & Tapia, 2004;Conroy & Polich, 2007;Olofsson & Polich, 2007;Paulmann & Kotz, 2008;Schirmer & Cunter, 2017; for a review, see Olofsson, Nordin, Sequeira, & Polich, 2008). Instead, in the current study, we will focus on audiovisual conditions and compare the amplitudes for conditions where both modalities are positive or negative with conditions in which one of these extremes in one modality is paired with neutral valence in the other modality. ...
This study used event-related potentials (ERPs) to investigate the time course of auditory, visual, and audiovisual affective processing. Stimuli consisted of naturalistic silent videos, instrumental music clips, or combination of the two, with valence varied at three levels for each modality and arousal matched across valence conditions. Affective ratings of the unimodal and multimodal stimuli showed evidence of visual dominance, congruency, and negativity dominance effects. ERP results for unimodal presentations revealed valence effects in early components for both modalities, but only for the visual condition in a late positive potential. The ERP results for multimodal presentations showed effects for both visual valence and auditory valence in three components, early N200, P300 and LPP. A modeling analysis of the N200 component suggested its role in the visual dominance effect, which was further supported by a correlation between behavioral visual dominance scores and the early ERP components. Significant congruency comparisons were also found for N200 amplitudes, suggesting that congruency effects may occur early. Consistent differences between negative and positive valence were found for both visual and auditory modalities in the P300 at anterior electrode clusters, suggesting a potential source for the negativity dominance effect observed behaviorally. The separation between negative and positive valence also occurred at LPP for the visual modality. Significant auditory valence modulation was found for the LPP, implying an integration effect in which valence sensitivity of the LPP emerged for the audiovisual condition. These results provide a basis for mapping out the temporal dynamics of audiovisual affective processing.
... The P2 is an early-stage component reflecting activity in response to emotional stimuli with relatively greater salience, especially negatively-valenced stimuli (Bar-Haim et al. 2005;Carretié et al. 2004). The N170 is an early-stage component specifically related to processing of facial structures or formations (Balconi and Lucchiari 2005;Eimer 2000). The P3 is a relatively later-stage component (still within early attentional processing) related to strategic regulation of attention (e.g., Bruin et al. 2000), response selection (Falkenstein et al. 1999) and response inhibition (Huster et al. 2013). ...
... The current findings for N170 extend previous adult and child anxiety work that showed enhanced N170 components for threat stimuli (e.g., Balconi and Lucchiari 2005;Bentin et al. 1996;Eimer 2000;Kolassa and Miltner 2006;Mueller et al. 2009;O'Toole et al. 2013). Compared to Control youth, we observed that youth with anxiety disorders exhibited significantly larger (more negative) N170 mean amplitudes when viewing both threatening and neutral stimulus trials. ...
Late-stage attentional processing of threatening stimuli, quantified through event-related potentials (ERPs), differentiates youth with and without anxiety disorders. It is unknown whether early-stage attentional processing of threatening stimuli differentiates these groups. Examining both early and late stage attentional processes in youth may advance knowledge and enhance efforts to identify biomarkers for translational prevention and treatment research. Twenty-one youth with primary DSM-IV-TR anxiety disorders (10 males, ages 8–15 years) and 21 typically developing Controls (15 males, ages 8–16 years) completed a dot probe task while electroencephalography (EEG) was recorded, and ERPs were examined. Youth with anxiety disorders showed significantly larger (more positive) P1 amplitudes for threatening stimuli than for neutral stimuli, and Controls showed the opposite pattern. Youth with anxiety showed larger (more negative) N170 amplitudes compared with Controls. Controls showed significantly larger (more positive) P2 and P3 amplitudes, regardless of stimuli valence, compared with youth with anxiety disorders. ERPs observed during the dot probe task indicate youth with anxiety disorders display distinct neural processing during early stage attentional orienting and processing of faces; this was not the case for Controls. Such results suggest these ERP components may have potential as biomarkers of anxiety disorders in youth.
... In accordo con la letteratura si rilevano N400 più marcate per le zone Centro-Parietali mediane soprattutto in interazione con variabili come tipo di bene, condizione e scelta d'acquisto. La N440 è una negatività indicante il riconoscimento, l'elaborazione semantica e la negazione dell'aspettativa dovuta a pattern incongruenti di stimolazioni (Balconi e Lucchiari, 2005) legati alla presentazione di beni associati a stimoli distrattori a differente contenuto emotivo e presentati in condizione sovraliminale o subliminale (Kutas e Hillyard, 1980). ...
... In particolare sia i beni di consumo che quelli di lusso fanno rilevare N400 più marcate nell'area Centrale nelle condizioni sovraliminali e subliminali sia per le scelte d'acquisto positive che per quelle negative. La N400, componente tardiva degli ERP, rappresenta infatti processi di riconoscimento, di elaborazione semantica e di negazione dell'aspettativa creata da pattern incongruenti di stimolazione (Balconi e Lucchiari, 2005). ...
Full-text available
Le prospettive del neuromarketing e della psicologia cognitiva sono alla base della seguente ricerca che indaga le scelte e le preferenze dei consumatori usando misure comportamentali, psicofisiologiche e neuropsicologiche. Ricerche recenti hanno trovato che componenti cognitiva ed emotive di stimoli pubblicitari elicitano specifiche risposte in differenti aree corticali e hanno mostrato che entrambi i sistemi, deliberativo e intuitivo, sono coinvolti nella presa di decisione. Nella seguente ricerca abbiamo analizzato se i beni di consumo e di lusso, associati a stimoli emotivi (neutri, negativi o positivi) mostrati in condizione sovraliminale o subliminale, producano variazioni nei tempi di risposta, nelle preferenze soggettive, nelle misure autonomiche (attività elettrodermica, pulsazioni, pressione sanguigna) e negli indici ERP. Gli stimoli emotivi (ad alto arousal, con valenza positiva o negativa) derivano dal database IAPS. I risultati rivelano che SCL (conduttanza cutanea) è significativamente più alta per stimoli emotivi (positivi o negativi) rispetto a quelli neutri e per le femmine nella condizione subliminale; l’effetto ERN (negatività relata all’errore) è più alto in condizione subliminale all’interno dell’area temporale sinistra; l’N200 (indice di attenzione) è più marcato nella condizione subliminale e nell’area parietale; l’indice P300 (risposta di allerta) è più elevato nell’area parietale destra per la condizione sovraliminale. Quindi la ricerca ha confermato l’ipotesi che i consumatori non osservano e elaborano le informazioni in modo neutro. Al contrario, differenti condizioni di stimolazione e il tipo di stimolo emotivo influenzano le scelte dei consumatori, rispetto al modo in cui rispondono in modo automatico o deliberato.
... The research protocol, fusing a bottom-up conversational approach to EEG analysis seems to show promising potential, correctly addressing the set aims and highlighting interesting results. Both cognitive and affective processes were investigated, revealing their altered modulation due to the condition factor (Balconi & Lucchiari, 2005) Naturally, this research comes with strengths and weaknesses. Methodologically, the research design can be considered highly innovative. ...
The digitalization of learning in the organization represents both a necessity and an opportunity. Little to no research explored how distance training affects cognitive and affective processes in individuals and workgroups. For this reason, in this work, we propose an hyperscanning research design where conversational analysis is used to compare neurophysiological measures (frequency band analysis: delta, theta, alpha, and beta) between an equivalent training session carried out in two conditions (face-to-face and remote), by collecting electroencephalographic data (EEG) on a trainer and three groups of trainees. We theoretically describe the protocol, and we further report initial explorative results. Data showed a significant effect of the condition on both theta and beta waves, with higher synchronization for the face-to-face setting. Also, trainees seem more impacted by the delivery modality compared to the trainer. This work highlights the relevance of neurophysiological measures to test e-learning efficacy.
... Three commercials were selected that refer to the COVID-19 theme: "Play for the World, " "You Cannot Stop LA, " and "You Cannot Stop Us. " The typical communication of this brand, full of emotional and motivational elements that aim at empowering and inspiring the audience, is intertwined with the narrative of today's difficult historical period, characterized by the COVID-19 pandemic. These stimuli have been selected because they generally reflect the characteristics that have been adopted for the choice of stimuli in the work of Ohme et al. (2010); in fact, they can be distinguished in two frames: emotional frames, characterized by sequences of images with a high emotional impact; information frames, characterized by sequences of scenes in which there are mainly captions, emotional phrases, referring to the pandemic situation from COVID-19, superimposed images, and sequences where the brand logo is shown (Balconi and Lucchiari, 2005). ...
Full-text available
The COVID-19 pandemic has prompted the production of a vast amount of COVID-19-themed brand commercials, in an attempt to exploit the salience of the topic to reach more effectively the consumers. However, the literature has produced conflicting findings of the effectiveness of negative emotional contents in advertisings. The present study aims at exploring the effect of COVID-19-related contents on the hemodynamic brain correlates of the consumer approach or avoidance motivation. Twenty Italian participants were randomly assigned to two different groups that watched COVID-19-related or non-COVID-19-related commercials. The hemodynamic response [oxygenated (O2Hb) and deoxygenated hemoglobin modulations] within the left and right prefrontal cortices (PFC) was monitored with Functional Near-Infrared Spectroscopy (fNIRS) while brand commercials were presented, as the prefrontal lateralization was shown to be indicative of the attitude toward the brand and of the approach-avoidance motivation. First, the findings showed that the COVID-19-related contents were able to prompt emotional processing within the PFC to a higher extent compared to contents non-related to COVID-19. Moreover, the single-channel analysis revealed increased O2Hb activity of the left dorsolateral PFC compared to the left pars triangularis Broca’s area in the group of participants that watched the COVID-19-related commercials, suggesting that the commercials may have driven participants to dedicate more attention toward the processing of the emotional components compared to the semantic meaning conveyed by the ad. To conclude, despite expressing unpleasant emotions, commercials referring to the highly emotional pandemic experience may benefit the advertising efficacy, increasing the capability to reach customers.
... In human relationships, the face is a significant social stimulus [50][51][52][53][54]. Face processing may be separated into a first perceptive phase, in which the person completes the "structural codes" of face and a second phase in which the subject completes the "expression code" implicated in the decoding of emotional facial expressions [55]. The first is thought to be processed separately from complex facial information such as emotional meaning [56][57][58][59][60][61]. Here we argue that the simple presentation of a face receiving painful or non-painful stimulation in the context of pain observation in others activates frontal brain regions connected to emotional regulation of the empathic response, more than somatosensory areas. ...
This research explored how the manipulation of interoceptive attentiveness (IA) can influence the frontal (dorsolateral prefrontal cortex (DLPFC) and somatosensory cortices) activity associated with the emotional regulation and sensory response of observing pain in others. 20 individuals were asked to observe face versus hand, painful/non-painful stimuli in an individual versus social condition while brain hemodynamic response (oxygenated (O2Hb) and deoxygenated hemoglobin (HHb) components) was measured via functional Near-Infrared Spectroscopy (fNIRS). Images represented either a single person (individual condition) or two persons in social interaction (social condition) both for the pain and body part set of stimuli. The participants were split into experimental (EXP) and control (CNT) groups, with the EXP explicitly required to concentrate on its interoceptive correlates while observing the stimuli. Quantitative statistical analyses were applied to both oxy- and deoxy-Hb data. Firstly, significantly higher brain responsiveness was detected for pain in comparison to no-pain stimuli in the individual condition. Secondly, a left/right hemispheric lateralization was found for the individual and social condition, respectively, in both groups. Besides, both groups showed higher DLPFC activation for face stimuli presented in the individual condition compared to hand stimuli in the social condition. However, face stimuli activation prevailed for the EXP group, suggesting the IA phenomenon has certain features, namely it manifests itself in the individual condition and for pain stimuli. We can conclude that IA promoted the recruitment of internal adaptive regulatory strategies by engaging both DLPFC and somatosensory regions towards emotionally relevant stimuli.
... Ratings on valence, intensity and authenticity were collected only from the static faces. These ratings would have been likely different if collected from faces presented in the looming and receding conditions, which in turn impact ERP responses to emotions (Utama, Takemoto, Koike, & Nakamura 2009) or between displays of static and morphed facial expressions (Bentin & Deouell, 2000;Balconi & Lucchiari, 2005). Finally, we prioritized passive observation to minimize cognitive load and mood induction. ...
Objectives: A fundamental disturbance in behavioural and social adaptation is a most distinctive feature in individuals with psychopathy. Individuals with psychopathy are characterized by a specific pattern of difficulties to successfully adapt their behaviour to respect social norms related to inter-individual interaction. Regulation of peripersonal space and emotional processing are heralded as the core features of social dysfunction in psychopathy. These cognitive processes have been largely explored independently in the perspective of the psychopathy disorder, but the dynamics underpinning emotional processing toward peripersonal space and the relation to psychopathy personality traits are rather unexplored. Methods: We measured event-related potentials during presentation of visual stimuli representing human faces within different localization toward peripersonal space. Additionally, the level of psychopathy traits and the behavioural responses of 20 participants were assessed during passive viewing task using emotional faces with Looming, Receding and Static presentation. Results: Results suggest that event-related potentials are differently modulated across emotional and localization related to peripersonal space. A specific association was reported between late positive potential component and psychopathic traits. Discussion: These results suggest that context related to movement in peripersonal space can modulate the subjective relevance of specific emotions. They also suggest that individuals with psychopathic traits display specific modulation of LPP component regarding emotional and peripersonal space processing.
... Apesar de não existir uma teoria específica sobre o funcionamento do processamento de estímulos emocionais em humanos, sabe-se que este processo envolve mecanismos como a ativação fisiológica, a avaliação do estímulo, a experiencia subjetiva, a expressão e o comportamento direcionado para um objetivo (Clore & Ortony, 2008;Niedenthal & Brauer, 2012;Phillips, Drevets, Rauch & Lane, 2003a A perceção de uma emoção é um processo complexo, sendo considerado um processo cognitivo de elevada ordem (Balconi & Lucchiari, 2005), que implica muito mais do que a simples seleção e associação de um comportamento individual a um termo lexical que define uma determinada emoção. O simples reconhecimento de uma face contempla diversos processos, desde a identificação de um estímulo como uma face (tarefa que é realizada pelo sistema visual), á discriminação da identidade do individuo (feita através do processamento de pistas visuais presentes na face). ...
Full-text available
O objectivo deste artigo assenta numa revisão de literatura sobre défices de reconhecimento emocional na esquizofrenia, autismo, depressão e indivíduos com lesão cerebral traumática, colocando em evidência a forma como esta capacidade se encontra limitada nestes indivíduos, bem como os défices presentes em diversas estruturas e redes neuronais que levam à génese e manutenção destas limitações; é ainda salientado a importância do aprimoramento desta capacidade em profissionais de saúde. Procuramos também evidenciar de que forma a expressão e leitura de emoções é essencial para uma melhor adaptação e funcionamento social, dando destaque às principais teorias que explicam o processamento emocional a nível fisiológico e neuropsicológico. Realizamos igualmente uma revisão sobre os principais softwares computacionais utilizados para reabilitação e intervenção em défices emocionais e uma breve nota sobre o processo de construção destes softwares. Defendemos assim o investimento científico nesta área em Portugal, dada a pertinência do reconhecimento emocional no funcionamento social de populações clínicas e no desempenho de profissionais de saúde, uma vez que não existe ainda, segundo a revisão efectuada, nenhum treino de reconhecimento emocional desta índole em língua portuguesa.
Aim: This study explores interoceptive attentiveness (IA) influence on autonomic reactivity related to pain and self-regulation during situations evoking physiological mirroring for pain. Methods: 20 participants observed face/hand, painful/non-painful stimuli in an individual versus social condition while the autonomic response was measured [Electrodermal activity, Pulse Volume Amplitude (PVA), and Heart Rate (HR)] was measured. The sample was divided into experimental (EXP) subjects, required to focus on their interoceptive correlates while observing the stimuli, and the control (CNT) group. HR inter-beat interval (IBI), and HR Variability (HRV) were calculated. Results: Results showed high accuracy to painful and non-painful stimuli recognition. Regarding autonomic indices, higher PVA values were detected for hand painful versus non-painful stimuli, whereas for the EXP group a significant activation of IBI was found for face painful vs non-painful stimuli. Conclusion: In the context of observation of pain in others, PVA and IBI could be respectively markers of mirroring mechanisms and autonomic self-regulation mediated by IA.
Facial emotion recognition and theory of mind abilities are important aspects of social cognition. Genes within the X chromosome may influence these abilities as males show increased vulnerability to impaired social cognition compared to females. An influence of a single nucleotide polymorphism (SNP), rs7055196 (found within the X-linked EFHC2 gene), on facial fear recognition abilities has recently been reported in Turner Syndrome. This thesis explores the influence of SNP rs7055196 on aspects of social cognition in healthy males. Males possessing the G allele showed poorer facial fear recognition accuracy compared to males possessing the A allele. This group difference in fear recognition accuracy was not due to a difference in gaze fixations made to the eye or mouth regions. Males possessing the G allele also showed smaller N170 amplitudes in response to faces compared to males possessing the A allele. These results suggest males possessing the A allele may use a more holistic / configural face processing mechanism compared to males possessing the G allele, and this difference may account for the difference in fear recognition accuracy between the groups. Males possessing the G allele were also less accurate at inferring others’ mental states during the Reading the Mind in the Eyes task, and showed reduced activity in the right superior temporal gyrus, left inferior parietal lobule and left cingulate gyrus during this task compared to males possessing the A allele. SNP rs7055196 may therefore also influence theory of mind abilities, with males possessing the A allele showing better theory of mind than those possessing the G allele. This result may reflect higher empathising abilities in the males possessing the A allele. These results suggest an influence of SNP rs7055196 on social cognitive abilities in males. This may help to explain the sex difference in vulnerability to impaired social cognition.
Behavioral studies have shown that picture-plane inversion impacts face and object recognition differently, thereby suggesting face-specific processing mechanisms in the human brain. Here we used event-related potentials (ERPs) to investigate the time course of this behavioral inversion effect in both faces and novel objects. ERPs were recorded for 14 subjects presented with upright and inverted visual categories, including human faces and novel objects (Greebles). An N170 was obtained for all categories of stimuli, including Greebles. However, only inverted faces delayed and enhanced the N170 (bilaterally). These observations indicate that the N170 is not specific to faces, as has been previously claimed. In addition, the amplitude difference between faces and objects does not reflect face-specific mechanisms, since it can be smaller than the difference between non-face object categories. There do exist some early differences in the time course of categorization for faces and non-faces across inversion. This may be attributed either to stimulus category per se (e.g., face-specific mechanisms) or to differences in the level of expertise between these categories.
In order to study face recognition in relative isolation from visual processes that may also contribute to object recognition and reading, we investigated CK, a man with normal face recognition but with object agnosia and dyslexia caused by a closed-head injury. We administered recognition tests of upright faces, family resemblance, age-transformed faces, caricatures, cartoons, inverted faces, face features, disguised faces, perceptually degraded faces, fractured faces, face parts, and faces whose parts were made of objects. We compared CK's performance with that of at least 12 control participants. We found that CK performed as well as controls as long as the face was upright and retained the configurational integrity among the internal facial features: the eyes, nose, and mouth. This held regardless of whether the face was disguised or degraded and whether the face was represented as a photo, a caricature, a cartoon, or a face composed of objects. In the last case, CK perceived the face but, unlike controls, was rarely aware that it was composed of objects. When the face, or just the internal features, were inverted, or when the configurational gestalt was broken by fracturing the face or misaligning the top and bottom halves, CK's performance suffered far more than that of controls. We conclude that face recognition normally depends on two systems: (1) a holistic, face-specific system that is dependent on orientation-specific coding of internal second-order relational features, which is intact in CK, and (2) a part-based object-recognition system, which is damaged in CK and which contributes to face recognition when the face stimulus does not satisfy the domain-specific conditions needed to activate the face system.
Emotion and attention heighten sensitivity to visual cues. How neural activation patterns associated with emotion change as a function of the availability of attentional resources is unknown. We used positron emission tomography (PET) and 15O-water to measure brain activity in male volunteers while they viewed emotional picture sets that could be classified according to valence or arousal. Subjects simultaneously performed a distraction task that manipulated the availability of attentional resources. Twelve scan conditions were generated in a 3 x 2 x 2 factorial design involving three levels of valence (pleasant, unpleasant, and neutral), two levels of arousal, and two levels of attention (low and high distraction). Extrastriate visual cortical and anterior temporal areas were independently activated by emotional valence, arousal, and attention. Common areas of activation derived from a conjunction analysis of these separate activations revealed extensive activation in extrastriate visual cortex, with a focus in right BA18 (12, -88, -2) (Z = 5.73, P < 0.001 corrected), and in right anterior temporal cortex, BA38 (42, 14, -30) (Z = 4.03, P < 0.05 corrected). These findings support the hypothesis that emotion and attention modulate both early and late stages of visual processing.