Chapter

A theoretical perspective for understanding face recognition

... Until now, the temporal aspects of the processing of facial expressions have been examined in several electrophysiological studies. In accordance with the notion that the facial configuration and emotional facial expression are processed independently and by different brain structures [2,3], most studies on event-related potentials (ERP) found that the type of facial expression did not influence the face-sensitive N170 or vertex positive potential (VPP) (e.g., [4-8]). ERP modulations sensitive to the nature of the expressed emotion on the face, however, have been found as early as 90 ms post-stimulus [9]. ...
... [4,5,7,8]), however, the latency and amplitude of the N170 were somewhat modulated by the emotional expression of the face. This suggests that the structural encoding of faces and the processing of emotional expression are not totally independent processes as assumed previously [2,3]. A possible explanation for the discrepancy between the various ERP studies is that the strength of habituation differed considerably. ...
Article
Crying is an attachment behavior, which in course of evolution had survival value. This study examined the characteristics of the face-sensitive N170, and focused on whether crying expressions evoked different early event-related potential waveforms than other facial expressions. Twenty-five participants viewed photographs of six facial expressions, including crying, and performed an implicit processing task. All stimuli evoked the N170, but the facial expression modulated this component in terms of latency and amplitude to some extent. The event-related potential correlates for crying faces differed mostly from those for neutral and fear faces. The results suggest that facial expressions are processed automatically and rapidly. The strong behavioral and emotional responses to crying appear not to be reflected in the early brain processes of face recognition.
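To make the N170 measurement described above concrete, the sketch below shows one common way to quantify the peak amplitude and latency of the N170 from epoched, single-channel EEG. The sampling rate, the 130-200 ms search window, and the simulated "crying" and "neutral" conditions are illustrative assumptions, not parameters taken from the study.

```python
import numpy as np

# Hypothetical epoched data: (n_trials, n_times) for one occipito-temporal channel,
# sampled at 1000 Hz, epoch from -100 to 500 ms relative to face onset.
sfreq = 1000.0
times = np.arange(-0.1, 0.5, 1.0 / sfreq)

def n170_peak(epochs, tmin=0.13, tmax=0.20):
    """Return peak amplitude (V) and latency (s) of the N170 in the average ERP.

    The N170 is taken as the most negative point in a 130-200 ms search window,
    a common (but not the only) way to quantify this component.
    """
    erp = epochs.mean(axis=0)                      # average across trials
    mask = (times >= tmin) & (times <= tmax)       # restrict to the N170 window
    idx = np.argmin(erp[mask])                     # most negative sample
    return erp[mask][idx], times[mask][idx]

# Example with simulated data for two expression conditions.
rng = np.random.default_rng(0)
crying = -5e-6 * np.exp(-((times - 0.170) ** 2) / (2 * 0.02 ** 2)) + rng.normal(0, 1e-6, (40, times.size))
neutral = -4e-6 * np.exp(-((times - 0.165) ** 2) / (2 * 0.02 ** 2)) + rng.normal(0, 1e-6, (40, times.size))
for name, cond in [("crying", crying), ("neutral", neutral)]:
    amp, lat = n170_peak(cond)
    print(f"{name}: N170 amplitude {amp * 1e6:.2f} uV at {lat * 1000:.0f} ms")
```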
... The experiments reported here have a secondary goal of examining the interrelation between structural face encoding and assessment of social category information. According to Bruce and Young (1986), structural face encoding and extraction of social category information (which they refer to as visually derived semantic information) are mediated by separate, sequentially occurring processing stages, with social category processing being dependent on prior structural encoding. In ERP research, structural encoding has been consistently linked to the N170 component (Bentin et al., 1996; Eimer, 2000). ...
... We also note that although stimulus complexity differed between the two experiments, we are not able to determine the relative importance of individual features on attention (i.e., dots on some of the faces vs. presence of vegetable names before each face). Bruce and Young (1986) have argued that structural encoding and the processing of social category information (which they label visual semantic processing) are distinct, sequentially ordered stages, with social category processing depending on the representation constructed during structural encoding. However, the research of Mouchetant-Rostaing and colleagues (Mouchetant-Rostaing & Giard, 2003; Mouchetant-Rostaing et al., 2000), our past research (Ito & Urland, 2003), and the present results (especially the N100 race effects in Experiment 2) suggest that social category processing need not depend on structural encoding. ...
Article
Full-text available
In two experiments, event-related potentials were used to examine the effects of attentional focus on the processing of race and gender cues from faces. When faces were still the focal stimuli, the processing of the faces at a level deeper than the social category by requiring a personality judgment resulted in early attention to race and gender, with race effects as early as 120 msec. This time course corresponds closely to those in past studies in which participants explicitly attended to target race and gender (Ito & Urland, 2003). However, a similar processing goal, coupled with a more complex stimulus array, delayed social category effects until 190 msec, in accord with the effects of complexity on visual attention. In addition, the N170 typically linked with structural face encoding was modulated by target race, but not by gender, when faces were perceived in a homogenous context consisting only of faces. This suggests that when basic-level distinctions between faces and nonfaces are irrelevant, the mechanism previously associated only with structural encoding can also be sensitive to features used to differentiate among faces.
... The complex visual stimuli to which the fusiform face area responds are socially meaningful patterns that may signify variables other than the personal identity of their bearers exclusively (e.g., species, age, gender, emotional state, etc.). Bruce and Young (1998) argue that there are at least seven distinct types of information that can be derived from faces, and that everyday recognition of familiar faces can be described, to a certain extent, in terms of the sequential access of different codes. The authors maintained that recognition of expression and identity, and so forth, can all be achieved independently (cf. ...
... Averaging the ages of pictorial persons concurred with the perception of a property that can be achieved irrespective of other facial features (cf. Bruce & Young, 1998); this task required the longest fixations. The task of Similarity Recognition called upon perception of fact. ...
Article
Full-text available
Continuous pictorial narratives (CPN) present protagonists repeatedly, yet adult viewers report seeing different persons instead. We presented 12 CPNs to 16 adults, whose oculomotor and verbal responses were continuously recorded. We addressed (a) the capability of instructions to compensate for lacking aesthetic fluency (Smith & Smith, 2006); (b) perceptual-cognitive processes accompanying Person Repetition Detection (PRD); (c) formal stimulus properties related with PRD. The results demonstrated that (a) search for presented persons especially similar to each other yielded more PRD than estimation of average age or aesthetic evaluation; (b) saccades between picture regions with repeated persons and PRDs were positively correlated; and (c) formal properties and PRD are not reliably correlated.
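As a small illustration of the oculomotor-behavioral correlation reported above, the sketch below computes a Pearson correlation between hypothetical per-participant counts of saccades linking regions that contain the repeated person and the number of person-repetition detections; all numbers are invented for the example.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant counts: saccades between picture regions containing
# the repeated person, and number of person-repetition detections (PRD) reported.
saccades_between_regions = np.array([12, 7, 19, 4, 15, 9, 22, 6, 11, 17, 8, 14, 5, 20, 10, 13])
prd_count = np.array([3, 1, 5, 0, 4, 2, 6, 1, 3, 4, 2, 3, 1, 5, 2, 3])

r, p = pearsonr(saccades_between_regions, prd_count)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```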
... While there is evidence of double dissociation of impairment between the processing of familiar and unfamiliar faces (e.g. Parry et al., 1991; Young et al., 1993; see e.g. Bruce and Young, 1998; Hancock et al., 2000, for reviews) this does not suffice to establish that separate cognitive systems are involved. It is important to note that different tasks are used to investigate the processing of familiar and unfamiliar faces. ...
... At most, unfamiliar faces must be remembered for an hour or two. Familiar faces, in contrast, are presented for familiarity decision or retrieval of information about the target individual (e.g. Bruce and Young, 1998) and this demands that a face be remembered over a much longer period. Therefore tasks performed on familiar and unfamiliar faces do not place equivalent demands on memory. ...
Article
The two papers by Bobes et al. (2003, this issue) and by Sperber and Spinnler (2003, this issue) add to the large body of literature demonstrating covert face recognition in prosopagnosia. This viewpoint will offer some perspectives on this interesting phenomenon. First, a re-analysis of the empirical literature will indicate an important misconception concerning the preserved abilities of prosopagnosics. The second section will briefly assess the contribution of Bobes et al. (2003, this issue) and Sperber and Spinnler (2003, this issue) to the debate about the locus, in cognitive terms, of the underlying causal deficit in prosopagnosia with covert face recognition. Both papers make reference to the two main models seeking to explain this phenomenon: the model proposed by Burton and colleagues (Burton et al., 1991; Burton and Young, 1999; Young and Burton, 1999) and that proposed by Farah and colleagues (Farah et al., 1993; O'Reilly and Farah, 1999). Finally, an observation will be offered concerning representations of faces in the Burton et al. (1991) model.
... Therefore, DLPFC may act as a regulatory structure of the emotional response directed towards a highly significant stimulus [47-49]. In human relationships, the face is a significant social stimulus [50-54]. Face processing may be separated into a first perceptive phase, in which the person completes the "structural codes" of the face, and a second phase in which the subject completes the "expression code" implicated in the decoding of emotional facial expressions [55]. The first is thought to be processed separately from complex facial information such as emotional meaning [56-61]. ...
Article
This research explored how the manipulation of interoceptive attentiveness (IA) can influence the frontal (dorsolateral prefrontal cortex (DLPFC) and somatosensory cortices) activity associated with the emotional regulation and sensory response of observing pain in others. Twenty individuals were asked to observe face versus hand, painful versus non-painful stimuli in an individual versus social condition while the brain hemodynamic response (oxygenated (O2Hb) and deoxygenated hemoglobin (HHb) components) was measured via functional near-infrared spectroscopy (fNIRS). Images represented either a single person (individual condition) or two persons in social interaction (social condition), both for the pain and body-part sets of stimuli. The participants were split into experimental (EXP) and control (CNT) groups, with the EXP group explicitly required to concentrate on its interoceptive correlates while observing the stimuli. Quantitative statistical analyses were applied to both oxy- and deoxy-Hb data. Firstly, significantly higher brain responsiveness was detected for pain in comparison to no-pain stimuli in the individual condition. Secondly, a left/right hemispheric lateralization was found for the individual and social condition, respectively, in both groups. In addition, both groups showed higher DLPFC activation for face stimuli presented in the individual condition compared to hand stimuli in the social condition. However, face stimuli activation prevailed for the EXP group, suggesting that the IA phenomenon has certain features, namely that it manifests itself in the individual condition and for pain stimuli. We can conclude that IA promoted the recruitment of internal adaptive regulatory strategies by engaging both DLPFC and somatosensory regions towards emotionally relevant stimuli.
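A minimal sketch of how such a condition contrast on fNIRS data might be tested is given below, assuming one baseline-corrected mean O2Hb value per participant and condition over a DLPFC channel; the values are simulated, and the paired t-test is only one of several plausible analysis choices.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical fNIRS summary data: baseline-corrected mean O2Hb change over a DLPFC
# channel, one value per participant and condition (pain vs. no-pain, individual
# presentation). Values are simulated for illustration only.
rng = np.random.default_rng(1)
n_participants = 20
o2hb_pain = rng.normal(0.08, 0.03, n_participants)      # larger hemodynamic response
o2hb_no_pain = rng.normal(0.05, 0.03, n_participants)

t, p = ttest_rel(o2hb_pain, o2hb_no_pain)
print(f"pain vs. no-pain O2Hb: t({n_participants - 1}) = {t:.2f}, p = {p:.3f}")
```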
... The possibility that a cognitive "cortical code" for emotion expression recognition exists is pointed out by our data on the negative ERP component, which seems to be strictly related to the emotional value of the face. This pattern suggests a possible dissociation between a specific visual mechanism responsible for the encoding of faces and a "higher level" mechanism for associating the facial representation with semantic information about the emotion that the face represents, as previously suggested by Bruce and Young (1998). Second, emotion specificity of N2 was verified as a function of the emotional content of facial expressions. ...
Article
Full-text available
Is facial expression recognition marked by specific event-related potentials (ERPs) effects? Are conscious and unconscious elaborations of emotional facial stimuli qualitatively different processes? In Experiment 1, ERPs elicited by supraliminal stimuli were recorded when 21 participants viewed emotional facial expressions of four emotions and a neutral stimulus. Two ERP components (N2 and P3) were analyzed for their peak amplitude and latency measures. First, emotional face-specificity was observed for the negative deflection N2, whereas P3 was not affected by the content of the stimulus (emotional or neutral). A more posterior distribution of ERPs was found for N2. Moreover, a lateralization effect was revealed for negative (right lateralization) and positive (left lateralization) facial expressions. In Experiment 2 (20 participants), 1-ms subliminal stimulation was carried out. Unaware information processing was revealed to be quite similar to aware information processing for peak amplitude but not for latency. In fact, unconscious stimulation produced a more delayed peak variation than conscious stimulation.
... The first aim of the present study was to validate the functional model proposed by Bruce and Young (1998). In particular, we compared the structural effect of face recognition for normal and morphed stimuli. ...
Article
Full-text available
S. Bentin and L. Y. Deouell (2000) have suggested that face recognition is achieved through a special-purpose neural mechanism, and its existence can be identified by a specific event-related potential (ERP) correlate, the N170 effect. In the present study, the authors explored the structural significance of N170 by comparing normal vs. morphed stimuli. They used a morphing procedure that allows a fine modification of some perceptual details (first-order relations). The authors also aimed to verify the independence of face identification from other cognitive mechanisms, such as comprehension of emotional facial expressions, by applying an emotion-by-emotion analysis to examine the emotional effect on N170 ERP variation. They analyzed the peak amplitude and latency variables in the temporal window of 120-180 ms. The ERP correlate showed a classic N170 ERP effect, more negative and more posteriorly distributed for morphed faces compared with normal faces. In addition, they found a lateralization effect, with a greater right-side distribution of the N170, but not directly correlated to the morphed or normal conditions. Two cognitive codes, structural and expression, are discussed, and the results are compared with the multilevel model proposed by V. Bruce and A. W. Young (1986, 1998).
... After that, an implication of these early effects of face processing could be drawn for theories and models of face perception. Based on the present status of research, we would argue that the P100 component reflects the process of recognizing a face as a face, which should occur before the structural encoding stage (probably reflected by the N170) proposed by a face-processing model (Bruce and Young, 1998). ...
Article
Full-text available
According to current ERP literature, face-specific activity is reflected by a negative component over the inferior occipito-temporal cortex between 140 and 180 ms after stimulus onset (N170). A recently published study (Liu et al., 2002) using magnetoencephalography (MEG) clearly indicated that a face-selective component can be observed at 100 ms (M100), which is about 70 ms earlier than reported in most previous studies. Here we report these early differences at 107 ms between the ERPs of faces and buildings over the occipito-temporal cortex using electroencephalography. To exclude contrast differences as the main factor for these P100 differences, we replicated this study using pictures of faces and scrambled faces. Both studies indicated that face processing starts as early as approximately 100 ms with an initial stage which can be measured not only with MEG but also with ERPs.
... Structural encoding of face information occurs around 170 ms after a face is encountered, and the retrieval of personal identity occurs later, around 350 ms. The different latencies of these ERP components have supported theories arguing that the structural analysis of the face must precede all other aspects of face processing, including social categorization (Bruce & Young, 1986). Despite the inclusion of social categorization in these models, little empirical research has examined it. ...
Article
Full-text available
The degree to which perceivers automatically attend to and encode social category information was investigated. Event-related brain potentials were used to assess attentional and working-memory processes on-line as participants were presented with pictures of Black and White males and females. The authors found that attention was preferentially directed to Black targets very early in processing (by about 100 ms after stimulus onset) in both experiments. Attention to gender also emerged early but occurred about 50 ms later than attention to race. Later working-memory processes were sensitive to more complex relations between the group memberships of a target individual and the surrounding social context. These working-memory processes were sensitive to both the explicit categorization task participants were performing as well as more implicit, task-irrelevant categorization dimensions. Results are consistent with models suggesting that information about certain category dimensions is encoded relatively automatically.
... Extant social psychological models have described how perceivers form high-level impressions of other people, whether they utilize category-based or individual-based information, and how knowledge about individuals and groups is learned, stored, and accessed (Bodenhausen & Macrae, 1998; Brewer, 1988; Chaiken & Trope, 1999; Fiske et al., 2002; Fiske & Neuberg, 1990; Higgins, 1996; Kunda & Thagard, 1996; Read & Miller, 1998b; Smith & DeCoster, 1998; Srull & Wyer, 1989; van Overwalle & Labiouse, 2004). Models in the cognitive face-processing literature, on the other hand, have described the visual and perceptual mechanisms that permit face recognition (Bruce & Young, 1986; Burton et al., 1990; Valentin, Abdi, O'Toole, & Cottrell, 1994). Our dynamic interactive model helps unify these two literatures by describing how the lower-level perceptual processing modeled in the cognitive literature works together with the higher-order social cognitive processes modeled in the social literature to give rise to person construal. ...
Article
A dynamic interactive theory of person construal is proposed. It assumes that the perception of other people is accomplished by a dynamical system involving continuous interaction between social categories, stereotypes, high-level cognitive states, and the low-level processing of facial, vocal, and bodily cues. This system permits lower-level sensory perception and higher-order social cognition to dynamically coordinate across multiple interactive levels of processing to give rise to stable person construals. A recurrent connectionist model of this system is described, which accounts for major findings on (a) partial parallel activation and dynamic competition in categorization and stereotyping, (b) top-down influences of high-level cognitive states and stereotype activations on categorization, (c) bottom-up category interactions due to shared perceptual features, and (d) contextual and cross-modal effects on categorization. The system's probabilistic and continuously evolving activation states permit multiple construals to be flexibly active in parallel. These activation states are also able to be tightly yoked to ongoing changes in external perceptual cues and to ongoing changes in high-level cognitive states. The implications of a rapidly adaptive, dynamic, and interactive person construal system are discussed.
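The flavor of such a dynamical, competitive system can be conveyed with a toy interactive-activation sketch: two category nodes receive bottom-up cue support and a top-down contextual bias, and compete through mutual inhibition until activation settles. This is only a schematic illustration with made-up parameters, not the published recurrent connectionist model.

```python
import numpy as np

# Toy interactive-activation sketch: two category nodes ("male", "female") receive
# bottom-up support from facial cues plus a top-down stereotype/context bias, and
# compete through mutual inhibition until activation settles. Parameter values are
# illustrative, not taken from the published model.
def settle(bottom_up, top_down, inhibition=0.6, decay=0.1, steps=200, rate=0.1):
    a = np.zeros(2)                            # activations of the two category nodes
    for _ in range(steps):
        net = bottom_up + top_down - inhibition * a[::-1] - decay * a
        a = np.clip(a + rate * net, 0.0, 1.0)  # bounded, gradual activation update
    return a

# Ambiguous face (nearly equal cue support) pushed toward one category by context.
acts = settle(bottom_up=np.array([0.52, 0.48]), top_down=np.array([0.0, 0.15]))
print(f"settled activations: male={acts[0]:.2f}, female={acts[1]:.2f}")
```

With these settings the small top-down bias is enough to tip the competition: the biased node drives the other toward zero, illustrating how context can stabilize one construal of an ambiguous face.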
... In the cognitive face-processing literature, non-interactive processing has even been argued formally. Such work has been largely guided by the cognitive architecture laid out in the influential Bruce and Young (1986) model of face perception, which proposed a dual processing route. Initially, a structural encoding mechanism constructs a representation of a face's features and configuration. ...
Article
Full-text available
Research is increasingly challenging the claim that distinct sources of social information, such as sex, race, and emotion, are processed in discrete fashion. Instead, there appear to be functionally relevant interactions that occur. In the present article, we describe research examining how cues conveyed by the human face, voice, and body interact to form the unified representations that guide our perceptions of and responses to other people. We explain how these information sources are often thrown into interaction through bottom-up forces (e.g., phenotypic cues) as well as top-down forces (e.g., stereotypes and prior knowledge). Such interactions point to a person perception process that is driven by an intimate interface between bottom-up perceptual and top-down social processes. Incorporating data from neuroimaging, event-related potentials (ERP), computational modeling, computer mouse-tracking, and other behavioral measures, we discuss the structure of this interface, and we consider its implications and adaptive purposes. We argue that an increased understanding of person perception will likely require a synthesis of insights and techniques, from social psychology to the cognitive, neural, and vision sciences.
... Understanding how others are feeling is an important tool for navigating the social world safely and effectively, and understanding which emotion categories perceivers can successfully "recognize" in others has been a central focus of the literature on emotion perception. Early, seminal models of face perception (Bruce & Young, 1986) emphasized a processing dissociation between static and dynamic facial cues, and since facial emotion is often categorized based on dynamic facial movements, it has largely been treated separately in the literature from dimensions of social perception that have categorical boundaries defined by static cues (e.g., race and sex). The particular neuroanatomical dissociation is between the fusiform gyrus (FG), thought to be more important in processing configurations of static facial cues, and the superior temporal sulcus (STS), known to be involved in processing dynamic facial actions as well as socially relevant actions more broadly, such as body movements (Haxby et al., 2000). ...
Article
Full-text available
Across multiple domains of social perception - including social categorization, emotion perception, impression formation, and mentalizing - multivariate pattern analysis (MVPA) of fMRI data has permitted a more detailed understanding of how social information is processed and represented in the brain. As in other neuroimaging fields, the neuroscientific study of social perception initially relied on broad structure-function associations derived from univariate fMRI analysis to map neural regions involved in these processes. In this review, we trace the ways that social neuroscience studies using MVPA have built on these neuroanatomical associations to better characterize the computational relevance of different brain regions, and how MVPA allows explicit tests of the correspondence between psychological models and the neural representation of social information. We also describe current and future advances in methodological approaches to multivariate fMRI data and their theoretical value for the neuroscience of social perception.
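To give a concrete sense of the decoding logic behind MVPA, the sketch below trains a linear classifier on simulated trial-by-voxel activation patterns and evaluates it with cross-validation. The data, voxel count, and choice of a linear SVM are illustrative assumptions, not a prescription from the review.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simulated "activation patterns": 80 trials x 50 voxels, two social categories.
rng = np.random.default_rng(42)
n_trials, n_voxels = 80, 50
labels = np.repeat([0, 1], n_trials // 2)                       # e.g., two emotion categories
signal = np.outer(labels - 0.5, rng.normal(0, 0.5, n_voxels))   # weak multivariate signal
patterns = signal + rng.normal(0, 1.0, (n_trials, n_voxels))    # signal buried in noise

# Linear decoder with z-scoring, evaluated with 5-fold cross-validation.
decoder = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(decoder, patterns, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

Above-chance cross-validated accuracy is the basic evidence that the region's multivariate pattern carries information about the social category, which is the inference MVPA studies build on.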
... a specific visual mechanism responsible for the encoding of faces, and a "higher level" mechanism for associating the facial representation with semantic information about the emotion that the face represents, as previously suggested by Bruce and Young (1998). On the contrary, no differences between the emotional and neutral conditions were found for P3. Thus, our results suggest an unspecific role of P3 for emotional face processing, although it could be a decisional response-related marker that is not dependent on the semantic content of the stimulus. In order to test this ERP significance ...
Article
Full-text available
In this study we analyze whether facial expression recognition is marked by specific event-related potential (ERP) correlates and whether conscious and unconscious elaboration of emotional facial stimuli are qualitatively different processes. ERPs elicited by supraliminal and subliminal (10 ms) stimuli were recorded while subjects were viewing emotional facial expressions of four emotions or neutral stimuli. Two ERP effects (N2 and P3) were analyzed in terms of their peak amplitude and latency variations. An emotional specificity was observed for the negative deflection N2, whereas P3 was not affected by the content of the stimulus (emotional or neutral). Unaware information processing proved to be quite similar to aware processing in terms of peak morphology but not of latency. A major result of this research was that unconscious stimulation produced a more delayed peak variation than conscious stimulation did. Also, a more posterior distribution of the ERP was found for N2 as a function of the emotional content of the stimulus. On the contrary, cortical lateralization (right/left) was not correlated with conscious/unconscious stimulation. The functional significance of our results is underlined in terms of the subliminal effect and emotion recognition.
... The emotional specificity of N2 is underlined by the major differences observed between emotional and neutral stimuli. In line with the Bruce and Young (1998) model, we postulate that a specific neural mechanism could exist for the processing of facial expressions of emotion. In addition, N2 can be considered a marker of the specific emotional content because it was observed to vary in amplitude as a function of type of emotion, and more specifically of motivational significance for the subject. ...
Article
Full-text available
The emotional face encoding process was explored through electroencephalographic measures (event-related potentials [ERPs]). Previous studies have demonstrated an emotion-specific cognitive process in face comprehension. However, the effect of the emotional significance of the stimuli (type of emotion) and of the task (direct or indirect task) on the ERP is uncertain. In Experiment 1 (indirect task) ERP correlates of 21 subjects were recorded while they viewed emotional (anger, sadness and happiness) or neutral facial stimuli. An emotion-specific cortical variation was found, a negative deflection at approximately 200 ms after stimulus (N2 effect). This effect was sensitive to the emotional valence of faces, because it differentiated high-arousal emotions (i.e., anger) from low-arousal emotions (i.e., sadness). Moreover, a specific cortical site (posterior) was activated by emotional faces but not by neutral faces. In Experiment 2 (direct task), the authors investigated whether encoding for emotional faces relies on a single neural system irrespective of the task, or whether it is supported by multiple, task-specific systems. Differences in the cortical distribution (posterior for the incidental task; central and posterior for the direct task) and lateralisation (right distribution for the negative emotions in the direct task) of N2 on the scalp were observed in the different tasks. This indicates that distinct task-specific cortical responses to emotional focus can be detected with ERP methodology.
Conference Paper
ERP responses were examined while subjects were identifying the type of facial emotion as well as assessing the intensity of facial emotion. We found a significant correlation between the magnitude of the P100 response and the correct identification of the type of facial emotion over the right posterior region, and between the magnitude of the N170 response and the assessment of the intensity of facial emotion over the right posterior and left frontal regions. The finding of these significant correlations from the same right occipital region suggested that the human brain processes information about facial emotion serially: the type of facial emotion is processed first, and thereafter its saliency or intensity level.
Article
Emotional face encoding processes in 2 types of tasks (direct and incidental) were explored in the current research through electroencephalographic (ERPs) and behavioral (response) measures. In Experiment 1 (incidental task) ERP correlates of 21 subjects were recorded when they viewed emotional (anger, sadness and happiness) or neutral facial stimuli. An emotion-specific cortical variation was found, a negative deflection at about 200 ms poststimulus (N2 effect). This effect was sensitive to the perceived emotional value of faces, since it differentiated negative high arousal (i.e., anger) from low arousal (i.e., sadness) or positive (happiness) emotions. Moreover, a specific cortical site (parietal) was activated by emotional faces but not by neutral faces. In Experiment 2 (20 subjects) a direct encoding task (emotion comprehension) was provided. We explored whether encoding for emotional faces relies on a single neural system irrespective of the task (incidental or direct), or whether it is supported by multiple, task-specific systems. The same difference previously observed between emotions, as a function of arousal and valence, was found in the direct condition. Nevertheless, we found differences in the cortical distribution (parietal for the incidental task; central and parietal for direct task) and lateralization (right-distribution for the negative emotions in the direct task) of the N200 on the scalp due to different types of task. The cognitive significance of these ERP variations is discussed.
Article
The present research explored the effect of social empathy on processing emotional facial expressions. Previous evidence suggested a close relationship between emotional empathy and both the ability to detect facial emotions and the attentional mechanisms involved. A multi-measure approach was adopted: we investigated the association between trait empathy (Balanced Emotional Empathy Scale) and individuals' performance (response times; RTs), attentional mechanisms (eye movements; number and duration of fixations), correlates of cortical activation (event-related potential (ERP) N200 component), and facial responsiveness (facial zygomatic and corrugator activity). Trait empathy was found to affect face detection performance (reduced RTs), attentional processes (more scanning eye movements in specific areas of interest), the ERP salience effect (increased N200 amplitude), and electromyographic activity (more facial responses). A second important result was the demonstration of strong, direct correlations among these measures. We suggest that empathy may function as a social facilitator of the processes underlying the detection of facial emotion, and a general "facial response effect" is proposed to explain these results. We assume that empathy influences cognitive and facial responsiveness, such that empathic individuals are more skilful in processing facial emotion.
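A compact way to examine associations across such a multi-measure design is a correlation matrix over per-participant summaries. The sketch below does this on invented data; the variable names and the direction of the simulated effects are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

# Hypothetical per-participant summary measures, as in a multi-measure design:
# trait empathy score, mean detection RT (ms), fixation count in the face region,
# N200 peak amplitude (uV; more negative = larger), and zygomatic EMG response.
rng = np.random.default_rng(3)
n = 30
empathy = rng.normal(50, 10, n)
data = pd.DataFrame({
    "empathy": empathy,
    "rt_ms": 650 - 2.0 * empathy + rng.normal(0, 30, n),        # higher empathy, faster RTs
    "fixations": 5 + 0.05 * empathy + rng.normal(0, 1, n),
    "n200_uv": -4 - 0.05 * empathy + rng.normal(0, 1, n),
    "emg_zygomatic": 0.5 + 0.01 * empathy + rng.normal(0, 0.2, n),
})

# Pairwise Pearson correlations among all measures.
print(data.corr().round(2))
```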
Article
Full-text available
The research aims to analyze the presence of a specific cognitive process for the decoding of facial expressions of emotion by means of ERP (event-related potential) correlates. In Experiment 1 (supraliminal condition), 20 participants were shown five different emotional expressions (anger, fear, surprise, happiness and sadness) and a neutral face (baseline configuration). Emotion recognition elicited a negative deflection at around 230 ms post-stimulus (N2 effect, 180-300 ms time window), following the N1 ERP variation and mainly localized over posterior cortical areas (Pz). This effect appears specific to emotional facial expression, distinct from the structural index elicited by N1. The N2 also varied with the evaluation given to the individual emotions, in terms of the hedonic valence and arousal of the configuration. Moreover, a cortical asymmetry was observed for some of the emotions, with right lateralization for negative expressions (anger, fear and sadness). In Experiment 2 (20 participants), a subliminal stimulation condition was set up (exposure times of 10 ms) that prevented conscious processing of the emotional stimuli, in order to analyze their impact at the cognitive level. The ERP profile under subliminal stimulation appears morphologically similar to that observed in the supraliminal condition, but with longer latencies (240 ms). This suggests a partial similarity of the cognitive processes underlying supraliminal and subliminal elaboration, albeit with temporal variations due to the experimental condition.
Article
The facial expressions of emotion probably do not exclusively serve an emotional purpose, but instead can be related to different functions. In fact, a broad domain of information can be conveyed through facial displays. In our interactions with others, facial expressions enable us to communicate effectively, and they work in conjunction with spoken words as well as other, nonverbal acts. Among the expressive elements that contribute to the communication of emotion, facial expressions are considered as communicative signals [1]. In fact, facial expressions are central features of the social behavior of most nonhuman primates and they are powerful stimuli in human communication. Moreover, faces are crucial channels of social cognition, because of their high interpersonal value, and they permit the intentions of others to be deciphered. Thus, facial recognition may indirectly reflect the encoding and storage of social acts.
Chapter
Humans are social creatures who need to communicate with others. In direct communication, information is conveyed in two ways: verbally, through spoken words, and non-verbally, through facial expressions and body gestures. Non-verbal communication accounts for almost 70% of human communication, so understanding how this non-verbal information is processed by the brain is quite important. In this chapter, we elucidate how the brain processes facial expressions by analyzing the brain signals that correspond to the emotional content of those expressions. Emotion can be differentiated into the type and the level of emotion. For example, although a smiling face and a joyful face both express the same type of emotion, happiness, the level of happiness is higher in the joyful face. How the brain processes this type and level of emotional information is the basic question we address in this chapter. We explain how the data were collected, the step-by-step process of reducing noise in the brain signals, how the behavioral data and brain signals were inter-correlated, how those data were used to locate the brain areas activated by processing specific emotional content, and finally how we found that understanding emotional information from facial expressions is a sequential process: the type of emotion is identified first, followed by the level of that specific emotion. This emotional process differs from the processing of the physical content of the face, such as identity and gender.
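The chapter mentions a step-by-step noise-reduction procedure for the brain signals. The sketch below illustrates, on simulated single-channel EEG, the generic shape such a pipeline often takes: band-pass filtering, epoching around stimulus onsets, amplitude-based rejection, and averaging. The filter settings, epoch limits, and rejection threshold are assumptions for the example, not the chapter's actual parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Simulated single-channel "EEG": Gaussian noise plus 50 Hz line interference.
sfreq = 250.0
t = np.arange(0, 120, 1 / sfreq)                     # two minutes of signal
rng = np.random.default_rng(7)
eeg = rng.normal(0, 10e-6, t.size) + 20e-6 * np.sin(2 * np.pi * 50 * t)

# 1) Band-pass filter 0.5-30 Hz (removes slow drift and line noise).
b, a = butter(4, [0.5, 30.0], btype="bandpass", fs=sfreq)
eeg_filt = filtfilt(b, a, eeg)

# 2) Epoch from -100 to 500 ms around (simulated) stimulus onsets.
onsets = np.arange(1.0, 113.5, 1.5)                  # one face every 1.5 s
pre, post = int(0.1 * sfreq), int(0.5 * sfreq)
epochs = np.array([eeg_filt[int(o * sfreq) - pre:int(o * sfreq) + post] for o in onsets])

# 3) Reject epochs exceeding +/- 80 microvolts, then average the rest into an ERP.
keep = np.abs(epochs).max(axis=1) < 80e-6
erp = epochs[keep].mean(axis=0)
print(f"kept {keep.sum()}/{keep.size} epochs; ERP length {erp.size} samples")
```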
Article
Full-text available
People can process multiple dimensions of facial properties simultaneously. Facial processing models are based on the processing of facial properties. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provided evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender).
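One standard way to test the kind of emotion-by-race interaction described above is a factorial ANOVA on categorization responses. The sketch below runs such a test on simulated reaction times, as a between-subjects simplification of what would normally be a repeated-measures design; all effect sizes are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical categorization RTs crossing facial emotion (happy/angry) with face
# race (own/other), to illustrate how an emotion-by-race interaction could be tested.
rng = np.random.default_rng(5)
rows = []
for emotion in ["happy", "angry"]:
    for race in ["own", "other"]:
        base = 550 + (30 if emotion == "angry" else 0) + (20 if race == "other" else 0)
        if emotion == "happy" and race == "other":
            base += 40                                # illustrative interaction effect
        rows += [{"emotion": emotion, "race": race, "rt": rt}
                 for rt in rng.normal(base, 40, 30)]
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of emotion and race, plus their interaction.
model = smf.ols("rt ~ C(emotion) * C(race)", data=df).fit()
print(anova_lm(model, typ=2))
```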
Article
Previous studies have revealed that the decoding of facial expressions starts very early in the brain (approximately 180 ms post-stimulus) and might be processed separately from the basic stage of face perception. In order to explore brain potentials (ERPs) related to the decoding of facial expressions and the effect of the emotional valence of the stimulus, we analyzed 18 normal subjects. Faces with five basic emotional expressions (fear, anger, surprise, happiness, sadness) and a neutral stimulus were presented in random order. The results demonstrated that an emotional face elicited a negative peak at approximately 230 ms (N230), distributed mainly over the posterior site for each emotion. The electrophysiological activity observed may represent specific cognitive processing underlying the decoding of emotional facial expressions. Nevertheless, differences in peak amplitude were observed for high-arousal negative expressions compared with positive (happiness) and low-arousal expressions (sadness). N230 amplitude increased in response to anger, fear and surprise, suggesting that subjects' ERP variations are affected by experienced emotional intensity, related to the arousal and unpleasant value of the stimulus.
Article
Full-text available
The effects of direct and averted gaze on autonomic arousal and gaze behavior in social anxiety were investigated using a new paradigm including animated movie stimuli and eye-tracking methodology. While high, medium, and low socially anxious (HSA vs. MSA vs. LSA) women watched animated movie clips, in which faces responded to the gaze of the participants with either direct or averted gaze, their eye movements, heart rate (HR) and skin conductance responses (SCR) were continuously recorded. Groups did not differ in their gaze behavior concerning direct vs. averted gaze, but highly socially anxious women tended to fixate the eye region of the presented face longer than MSA and LSA women. Furthermore, they responded to direct gaze with more pronounced cardiac acceleration. This physiological finding indicates that direct gaze may be a fear-relevant feature for socially anxious individuals in social interaction. However, this does not seem to result in gaze avoidance. Future studies should examine the role of gaze direction and its interaction with facial expressions in social anxiety, and its consequences for avoidance behavior and fear responses. Additionally, further research is needed to clarify the role of gaze perception in social anxiety.
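For the cardiac measure, analyses of this kind typically compare instantaneous heart rate after stimulus onset with a pre-stimulus baseline. The sketch below shows that computation on a made-up series of R-peak times; the window lengths are arbitrary choices for illustration.

```python
import numpy as np

# Illustrative computation of cardiac acceleration to a stimulus: convert R-peak
# times (s) to instantaneous heart rate and compare a post-stimulus window with a
# pre-stimulus baseline. R-peak times below are invented for the example.
r_peaks = np.array([0.10, 0.95, 1.80, 2.66, 3.50, 4.28, 5.02, 5.74, 6.47, 7.22, 8.00])
stim_onset = 4.0

ibi = np.diff(r_peaks)                       # inter-beat intervals (s)
ibi_times = r_peaks[1:]                      # time stamp of each interval (end of beat)
hr = 60.0 / ibi                              # instantaneous heart rate (bpm)

baseline = hr[ibi_times <= stim_onset].mean()
response = hr[(ibi_times > stim_onset) & (ibi_times <= stim_onset + 3.0)].mean()
print(f"baseline {baseline:.1f} bpm, post-stimulus {response:.1f} bpm, "
      f"acceleration {response - baseline:+.1f} bpm")
```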