Article

Selective visual and crossmodal impairment in the discrimination of anger and fear expressions in severe alcohol use disorder.

Authors:
  • Centre National de Ressources et de Résilience
To read the full-text of this research, you can request a copy directly from the authors.

Abstract

Severe alcohol use disorders (SAUD) are associated with a wide variety of affective disturbances, among which is a well-established deficit in decoding facial and vocal emotional expressions. This deficit has recently been found to be increased in cross-modal settings, namely when inputs from different sensory modalities have to be combined. Compared to unimodal emotional stimuli, cross-modal ones allow for faster and more accurate emotional predictions, and therefore constitute critical cues for social interactions. However, studies exploring emotional cross-modal processing in SAUD have so far relied on static faces paired with voices from a different individual, which greatly limits ecological validity. Moreover, in real-life conditions, emotions are often not fully expressed, so that we have to make guesses based on incomplete information. Accordingly, the aim of this study was to assess cross-modal emotional processing using a new ecological paradigm with dynamic audiovisual stimuli, manipulating the amount of emotional information available to the individual. Thirty individuals with SAUD and 30 matched healthy controls performed an emotional discrimination task requiring them to identify the emotions (anger, disgust, fear, happiness, sadness) expressed in short movies containing visual, auditory, or auditory-visual information of various durations. The shortest excerpts revealed only the very early emotional sketch (i.e., initial facial movements and prosody), while the longest ones depicted a more complete emotion. Sensitivity analyses (d') showed that discrimination levels varied across sensory modalities and emotions, and increased with stimulus duration in both groups. Performance in individuals with SAUD improved from unimodal to cross-modal conditions, but their discrimination of cross-modal stimuli was impaired for anger and fear. This deficit was not influenced by the amount of information displayed, suggesting that it persists even when more emotional information is available. Results are discussed in light of the predictive mechanisms underlying emotion recognition, and converge with earlier findings to ascribe a specific role to anger and fear in SAUD.
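The sensitivity index d' used in the abstract is a standard signal-detection measure. As a point of reference only (the study's actual analysis pipeline is not available on this page, and the rates below are made up), a minimal Python sketch of how d' is typically computed from hit and false-alarm rates:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, n_signal=None, n_noise=None):
    """Signal-detection sensitivity: d' = z(hits) - z(false alarms).
    If trial counts are given, apply the log-linear correction so that
    rates of exactly 0 or 1 do not produce infinite z-scores."""
    if n_signal is not None and n_noise is not None:
        hit_rate = (hit_rate * n_signal + 0.5) / (n_signal + 1)
        fa_rate = (fa_rate * n_noise + 0.5) / (n_noise + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates (not from the study): 80% hits, 20% false alarms
print(round(d_prime(0.8, 0.2), 2))  # 1.68
```

A higher d' indicates better discrimination of signal from noise independently of response bias, which is why it is preferred over raw accuracy when comparing groups that may differ in response criterion.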


Article
Full-text available
Human vision combines inputs from the two eyes into one percept. Small differences "fuse" together, whereas larger differences are seen "rivalrously" from one eye at a time. These outcomes are typically treated as mutually exclusive processes, with paradigms targeting one or the other and fusion being unreported in most rivalry studies. Is fusion truly a default, stable state that only breaks into rivalry for non-fusible stimuli? Or are monocular and fused percepts three sub-states of one dynamical system? To determine whether fusion and rivalry are separate processes, we measured human perception of Gabor patches with a range of interocular orientation disparities. Observers (10 female, 5 male) reported rivalrous, fused, and uncertain percepts over time. We found a dynamic "tristable" zone spanning from ∼25–35° of orientation disparity where fused, left-eye-, or right-eye-dominant percepts could all occur. The temporal characteristics of fusion and non-fusion periods during tristability matched other bistable processes. We tested statistical models with fusion as a higher-level bistable process alternating with rivalry against our findings. None of these fit our data, but a simple bistable model extended to have three states reproduced many of our observations. We conclude that rivalry and fusion are multistable substates capable of direct competition, rather than separate bistable processes.

Significance statement: When inputs to the two eyes differ, they can either fuse together or engage in binocular rivalry, where each eye's view is seen exclusively in turn. Visual stimuli have often been tailored to produce either fusion or rivalry, implicitly treating them as separate, mutually exclusive perceptual processes. We have found that some similar-but-different stimuli can result in both outcomes over time. Comparing various simple models with our results suggests that rivalry and fusion are not independent processes, but compete within a single multistable system.
This conceptual shift is a step toward unifying fusion and rivalry, and understanding how they both contribute to the visual system's production of a unified interpretation of the conflicting images cast on the retina by real-world scenes.
Article
Full-text available
Background: A growing literature suggests deficient emotional facial expression (EFE) processing among recently abstinent individuals with alcohol use disorders (AUDs). Further investigation is needed to clarify valence-related discrepancies and elucidate neural and psychosocial correlates. We examined neurobehavioral indices of EFE processing and interpersonal problems in treatment seekers with AUDs and healthy community controls (CCs). Methods: Thirty-four individuals with AUDs and 39 CCs completed an emotion judgment task (EJT), requiring discrimination between happy, angry, and sad EFEs. A second task requiring discrimination of male and female faces with neutral expressions served as the control task (i.e., sex judgment task, SJT). Neurophysiological (i.e., N170 and P3) and behavioral measures were analyzed using generalized linear mixed models (GLMM). Interpersonal problems were assessed with the Inventory of Interpersonal Problems-64 (IIP-64). The relationship between IIP-64 scores and EJT performance was investigated via within-group correlations. Results: Analysis of the SJT revealed no group differences in behavioral measures, N170 amplitude, or P3 latency. P3 amplitudes, however, were significantly lower in the AUD group. For the EJT, initial observations of group differences in P3 amplitude were accounted for by differences in the control task. Behavioral analyses indicated that the AUD group was significantly less accurate than the CC group. Hypothesis-driven analyses using GLMM-estimated group differences indicated that anger processing was affected to a greater degree than were other emotions. Significant EJT/IIP-64 correlations were observed for anger processing within the AUD group and were confined to IIP-64 subscales with relatively high ratings on the affiliation dimension. Conclusions: Findings provide partial support for an emotion-specific processing deficit in persons with AUDs. 
Anger processing was more robustly affected than other emotions and was associated with interpersonal problems characterized by being overly needy, nonassertive, and overly accommodating. Results extend prior reports and reinforce the need for comprehensive study of emotion processing and its real-world implications.
Article
Full-text available
Background: Individuals in treatment for alcohol use disorder (AUD) display deficits across a broad range of cognitive processes. Disruptions in affective processing are understudied, but may be particularly important for interpersonal functioning and post-treatment adaptation. In particular, the role of sex in AUD-associated emotion processing deficits remains largely unaddressed and was a focus of the current investigation. Methods: Fifty-six treatment seekers with AUD and 54 healthy community controls (N = 110) were administered an emotional face discrimination task. Non-affective tasks included a sex-discrimination task and two brief measures of executive functioning. Two measures of interpersonal function were included. Results: Emotion processing deficits were evident among women with AUD relative to other groups. This sex-contingent relationship was not observed in measures of executive function, sex-discrimination, or interpersonal problems, although individuals with AUD performed more poorly on these measures. Conclusions: Results were consistent with the extant literature examining cognitive, affective, and interpersonal functioning among individuals with AUD, and provided novel evidence of vulnerability to alcohol-associated deficits in emotion processing among women. While similar sex-contingent effects were not apparent on other measures, the results support modest interrelationships, specifically including the importance of emotion processing to interpersonal functioning in AUD. These data offer guidance for further systematic investigation and highlight important considerations for future relapse-prevention and recovery-facilitation efforts.
Article
Full-text available
Background: Substance use disorders (SUDs) are associated with impairments of cognitive functions, and cognitive training programs are thus rapidly developing in SUD treatment. However, neuropsychological impairments observed early after withdrawal (i.e., within approximately the first six months; "early impairments") may be widespread. Consequently, it might not be possible to train all the identified early impairments. In these situations, we propose that the priority of cognitive training should be given to the early impairments found to be associated with early dropout or relapse (i.e., relapse-related impairments). However, substance-specific relapse-related impairments have not been singled out among all early impairments so far. Methods: Using a systematic literature search, we identified the types of established early impairments for all SUDs, and we assessed the extent to which each of these early impairments was found to be associated with early dropout or relapse. All cognitive functions were investigated according to a classification based on current neuropsychological models, distinguishing classical cognitive, substance-bias, and social cognition systems. Results: According to the current evidence, demonstrated relapse-related impairments in alcohol use disorder comprised impulsivity, long-term memory, and higher-order executive functions. For cannabis use disorder, the identified relapse-related impairments were impulsivity and working memory. For stimulant use disorder, the identified relapse-related impairments were attentional abilities and higher-order executive functions. For opioid use disorder, the only identified relapse-related impairments were higher-order executive functions. However, many early impairments were not explored with respect to dropout/relapse, particularly for stimulant and opioid use disorders. 
Conclusions: The current literature reveals substance-specific relapse-related impairments, which supports a pragmatic patient-tailored approach for defining which early impairments should be prioritized in terms of training among patients with SUDs.
Article
Full-text available
The brain has separate specialized computational units to process faces and voices, located in occipital and temporal cortices. However, humans seamlessly integrate signals from the faces and voices of others for optimal social interaction. How are emotional expressions, when delivered by different sensory modalities (faces and voices), integrated in the brain? In this study, we characterized the brain's response to faces, voices, and combined face-voice information (congruent, incongruent), which varied in expression (neutral, fearful). Using a whole-brain approach, we found that only the right posterior superior temporal sulcus (rpSTS) responded more to bimodal stimuli than to face or voice alone, but only when the stimuli contained emotional expression. Face- and voice-selective regions of interest, extracted from independent functional localizers, similarly revealed multisensory integration in the face-selective rpSTS only; further, this was the only face-selective region that also responded significantly to voices. Dynamic causal modeling revealed that the rpSTS receives unidirectional information from the face-selective fusiform face area and the voice-selective temporal voice area, with emotional expression affecting the connection strength. Our study promotes a hierarchical model of face and voice integration, with convergence in the rpSTS, and suggests that such integration depends on the (emotional) salience of the stimuli.
Article
Full-text available
The present study investigated the influences of two different forms of reward presentation in modulating cognitive control. In three experiments, participants performed a flanker task for which one-third of trials were precued for a chance of obtaining a reward (reward trials). In Experiment 1, a reward was provided if participants made the correct response on reward trials, but a penalty was given if they made an incorrect response on these trials. The anticipation of this performance-contingent reward increased response speed and reduced the flanker effect, but had little influence on the sequential modulation of the flanker effect after incompatible trials. In Experiment 2, participants obtained a reward randomly on two-thirds of the precued reward trials and were given a penalty on the remaining one-third, regardless of their performance. The anticipation of this non-contingent reward had little influence on the overall response speed or flanker effect, but reduced the sequential modulation of the flanker effect after incompatible trials. Experiment 3 also used performance non-contingent rewards, but participants were randomly penalized more often than they were rewarded; non-contingent penalty had little influence on the sequential modulation of the flanker effect. None of the three experiments showed a reliable influence of the actual acquisition of rewards on task performance. These results indicate anticipatory effects of performance-contingent and non-contingent rewards on cognitive control with little evidence of aftereffects.
Article
Full-text available
Alcoholism is a complex and dynamic disease, punctuated by periods of abstinence and relapse, and influenced by a multitude of vulnerability factors. Chronic excessive alcohol consumption is associated with cognitive deficits, ranging from mild to severe, in executive functions, memory, and metacognitive abilities, with associated impairment in emotional processes and social cognition. These deficits can compromise efforts in initiating and sustaining abstinence by hampering the efficacy of clinical treatment and can obstruct efforts in enabling good decision-making, success in interpersonal/social interactions, and awareness of cognitive and behavioral dysfunctions. Despite evidence for differences in recovery levels of selective cognitive processes, certain deficits can persist even with prolonged sobriety. Herein is presented a review of alcohol-related cognitive impairments affecting component processes of executive functioning, memory, and the recently investigated cognitive domains of metamemory, social cognition, and emotional processing; also considered are trajectories of cognitive recovery with abstinence. Finally, in the spirit of critical review, limitations of current knowledge are noted and avenues for new research efforts are proposed that focus on (1) the interaction among emotion-cognition processes and identification of vulnerability factors contributing to the development of emotional and social processing deficits and (2) the timeline of cognitive recovery by tracking alcoholism's dynamic course of sobriety and relapse. Knowledge about the heterochronicity of cognitive recovery in alcoholism has the potential of indicating at which points during recovery intervention may be most beneficial.
Article
Full-text available
One of the frequent questions from users of the mixed model function lmer of the lme4 package has been: How can I get p values for the F and t tests for objects returned by lmer? The lmerTest package extends the 'lmerMod' class of the lme4 package by overloading the anova and summary functions to provide p values for tests of fixed effects. We have implemented Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I–III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using the Kenward-Roger approximation for denominator degrees of freedom (based on the KRmodcomp function from the pbkrtest package). Some other convenient mixed model analysis tools, such as a step method that performs backward elimination of nonsignificant effects (both random and fixed), calculation of population means, and multiple comparison tests together with plot facilities, are provided by the package as well.
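The lmerTest tooling above is R-specific, and this page contains no code. As an illustrative analog only, a random-intercept mixed model of the kind these studies fit can be sketched in Python with statsmodels' MixedLM; note that statsmodels reports asymptotic Wald z tests for fixed effects rather than the Satterthwaite- or Kenward-Roger-corrected t and F tests that lmerTest provides, and the data and variable names below are simulated, not from any of the cited studies:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated repeated-measures data: 30 subjects, 10 trials each,
# with subject-level random intercepts (sd = 0.5)
n_subj, n_rep = 30, 10
subj = np.repeat(np.arange(n_subj), n_rep)
x = rng.normal(size=n_subj * n_rep)          # a continuous predictor
u = rng.normal(scale=0.5, size=n_subj)       # random intercepts
y = 1.0 + 0.8 * x + u[subj] + rng.normal(size=n_subj * n_rep)
df = pd.DataFrame({"y": y, "x": x, "subject": subj})

# Random-intercept model, analogous to lmer(y ~ x + (1 | subject))
model = smf.mixedlm("y ~ x", df, groups=df["subject"]).fit()
print(model.summary())
```

The fixed-effect estimate for `x` should land near the simulated slope of 0.8; for small samples where the choice of degrees-of-freedom approximation actually matters, the R lme4/lmerTest route remains the reference implementation.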
Article
Full-text available
A modified auditory n-back task was used to examine the ability of young and older listeners to remember the content of spoken messages presented from different locations. The messages were sentences from the Coordinate Response Measure (CRM) corpus, and the task was to judge whether a target word on the current trial was the same as in the most recent presentation from the same location (left, center, or right). The number of trials between comparison items (the number back) was varied while keeping the number of items to be held in memory (the number of locations) constant. Three levels of stimulus uncertainty were evaluated. Low- and high-uncertainty conditions were created by holding the talker (voice) and nontarget words constant, or varying them unpredictably across trials. In a medium-uncertainty condition, each location was associated with a specific talker, thus increasing predictability and ecological validity. Older listeners performed slightly worse than younger listeners, but there was no significant difference in response times (RT) between the two groups. An effect of the number back (n) was seen for both the proportion of correct responses (PC) and RT; PC decreased steadily with n, while RT was fairly constant after a significant increase from n = 1 to n = 2. Apart from the lower PC for the older group, there was no effect involving age for either PC or RT. There was an effect of target word location (faster RTs with a late-occurring target) and an effect of uncertainty (faster RTs with a constant talker-location mapping, relative to the high-uncertainty condition). A similar pattern of performance was observed with a group of elderly hearing-impaired listeners (with and without shaping to ensure audibility), but RTs were substantially slower and the effect of uncertainty was absent. Apart from the observed overall slowing of RTs, these results provide little evidence for an effect of age-related changes in cognitive abilities on this task.
Article
Full-text available
Multisensory integration involves a host of different cognitive processes, occurring at different stages of sensory processing. Here I argue that, despite recent insights suggesting that multisensory interactions can occur at very early latencies, the actual integration of individual sensory traces into an internally consistent mental representation is dependent on both top-down and bottom-up processes. Moreover, I argue that this integration is not limited to just sensory inputs, but that internal cognitive processes also shape the resulting mental representation. Studies showing that memory recall is affected by the initial multisensory context in which the stimuli were presented will be discussed, as well as several studies showing that mental imagery can affect multisensory illusions. This empirical evidence will be discussed from a predictive coding perspective, in which a central top-down attentional process is proposed to play a central role in coordinating the integration of all these inputs into a coherent mental representation.
Article
Full-text available
Studies that have carried out experimental evaluation of emotional skills in alcohol-dependence have, up to now, been mainly focused on the exploration of emotional facial expressions (EFE) decoding. In the present paper, we provide some complements to the recent systematic literature review published by Donadon and de Lima Osório on this crucial topic. We also suggest research avenues that must be, in our opinion, considered in the coming years. More precisely, we propose, first, that a battery integrating a set of emotional tasks relating to different processes should be developed to better systematize EFE decoding measures in alcohol-dependence. Second, we propose to go below EFE recognition deficits and to seek the roots of those alterations, particularly by investigating the putative role played by early visual processing and vision–emotion interactions in the emotional impairment observed in alcohol-dependence. Third, we insist on the need to go beyond EFE recognition deficits by suggesting that they constitute only a part of wider emotional deficits in alcohol-dependence. Importantly, since the efficient decoding of emotions is a crucial ability for the development and maintenance of satisfactory interpersonal relationships, we suggest that disruption of this ability in alcohol-dependent individuals may have adverse consequences for their social integration. One way to achieve this research agenda would be to develop the field of affective and social neuroscience of alcohol-dependence, which could ultimately lead to major advances at both theoretical and therapeutic levels.
Article
Full-text available
Background Alcohol abuse and dependence can cause a wide variety of cognitive, psychomotor, and visual-spatial deficits. It remains unclear whether this condition is associated with impairments in the recognition of affective and/or emotional information. Such impairments may promote deficits in social cognition and, consequently, in the adaptation and interaction of alcohol abusers with their social environment. The aim of this systematic review was to systematize the literature on alcoholics' recognition of basic facial expressions in terms of the following outcome variables: accuracy, emotional intensity, and latency time. Methods A systematic literature search was conducted in the PsycINFO, PubMed, and SciELO electronic databases, with no restrictions regarding publication year. Results The findings of some studies indicate that alcoholics show greater impairment in facial expression recognition tasks, while others could not differentiate the clinical group from controls. However, there was a trend toward greater deficits in alcoholics. Alcoholics displayed less accuracy in recognition of sadness and disgust and required greater emotional intensity to judge facial expressions corresponding to fear and anger. Conclusion The current study was only able to identify trends in the chosen outcome variables. Future studies that aim to provide more precise evidence for the potential influence of alcohol on social cognition are needed.
Article
Full-text available
We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration.
Chapter
Full-text available
Long-term chronic alcoholism is associated with disparate and widespread residual consequences for brain functioning and behavior, and alcoholics suffer a variety of cognitive deficiencies and emotional abnormalities. Alcoholism has heterogeneous origins and outcomes, depending upon factors such as family history, age, gender, and mental or physical health. Consequently, the neuropsychological profiles associated with alcoholism are not uniform among individuals. Moreover, within and across research studies, variability among participants is substantial and contributes to characteristics associated with differential treatment outcomes after detoxification. In order to refine our understanding of alcoholism-related impaired, spared, and recovered abilities, we focus on five specific functional domains: (1) memory, (2) executive functions, (3) emotion and psychosocial skills, (4) visuospatial cognition, and (5) psychomotor abilities. Although the entire brain might be vulnerable in uncomplicated alcoholism, the brain systems that are considered to be most at risk are the frontocerebellar and mesocorticolimbic circuitries. Over time, with abstinence from alcohol, the brain appears to become reorganized to provide compensation for structural and behavioral deficits. By relying on a combination of clinical and scientific approaches, future research will help to refine the compensatory roles of healthy brain systems, the degree to which abstinence and treatment facilitate the reversal of brain atrophy and dysfunction, and the importance of individual differences to outcome.
Article
Full-text available
The integration of emotional information from the face and voice of other persons is known to be mediated by a number of "multisensory" cerebral regions, such as the right posterior superior temporal sulcus (pSTS). However, whether multimodal integration in these regions is attributable to interleaved populations of unisensory neurons responding to face or voice or rather by multimodal neurons receiving input from the two modalities is not fully clear. Here, we examine this question using functional magnetic resonance adaptation and dynamic audiovisual stimuli in which emotional information was manipulated parametrically and independently in the face and voice via morphing between angry and happy expressions. Healthy human adult subjects were scanned while performing a happy/angry emotion categorization task on a series of such stimuli included in a fast event-related, continuous carryover design. Subjects integrated both face and voice information when categorizing emotion-although there was a greater weighting of face information-and showed behavioral adaptation effects both within and across modality. Adaptation also occurred at the neural level: in addition to modality-specific adaptation in visual and auditory cortices, we observed for the first time a crossmodal adaptation effect. Specifically, fMRI signal in the right pSTS was reduced in response to a stimulus in which facial emotion was similar to the vocal emotion of the preceding stimulus. These results suggest that the integration of emotional information from face and voice in the pSTS involves a detectable proportion of bimodal neurons that combine inputs from visual and auditory cortices.
Article
Full-text available
Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency by which others' emotions are recognized. But how and when exactly do the different modalities interact? One aspect in multisensory perception that has received increasing interest in recent years is the concept of cross-modal prediction. In emotion perception, as in most other settings, visual information precedes the auditory information. Thereby, leading in visual information can facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, so far it has not been addressed in audiovisual emotion perception. Based on the current state of the art in (a) cross-modal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow more reliable predicting of auditory information compared to non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 EEG response and the duration of visual emotional, but not non-emotional information. If the assumption that emotional content allows more reliable predicting can be corroborated in future studies, cross-modal prediction is a crucial factor in our understanding of multisensory emotion perception.
Article
Full-text available
More than two decades ago, the timeline was developed as a procedure to aid recall of past drinking. That method, first referred to as the gathering of daily drinking disposition data and later labeled the timeline follow-back (TLFB) method, is the focus of this chapter. The TLFB appears to provide a relatively accurate portrayal of drinking and has both clinical and research utility. The chapter covers administration of the TLFB technique, its psychometric properties, test–retest reliability, subject–collateral comparisons, and concurrent validity.
Article
Full-text available
The Montreal Affective Voices consist of 90 nonverbal affect bursts corresponding to the emotions of anger, disgust, fear, pain, sadness, surprise, happiness, and pleasure (plus a neutral expression), recorded by 10 different actors (5 of them male and 5 female). Ratings of valence, arousal, and intensity for eight emotions were collected for each vocalization from 30 participants. Analyses revealed high recognition accuracies for most of the emotional categories (mean of 68%). They also revealed significant effects of both the actors' and the participants' gender: The highest hit rates (75%) were obtained for female participants rating female vocalizations, and the lowest hit rates (60%) for male participants rating male vocalizations. Interestingly, the mixed situations--that is, male participants rating female vocalizations or female participants rating male vocalizations--yielded similar, intermediate ratings. The Montreal Affective Voices are available for download at vnl.psy.gla.ac.uk/ (Resources section).
Article
Objective: To review deficits in emotional processing and social cognition potentially contributing to the dysfunctional emotion regulation and difficulties with interpersonal relationships observed in individuals with alcohol use disorder (AUD), and to provide directions for future research. Method: First, we review emotional and social cognitive impairments in recently detoxified individuals with AUD, including alexithymia, difficulties in decoding others' emotions, and reduced theory-of-mind and empathy skills. Social cognition disorders in AUD raise several issues, discussed here: whether (1) these deficits are consequences of excessive alcohol consumption or premorbid risk factors for addiction, (2) emotional and social impairments impede positive treatment outcome, (3) recovery of social abilities is possible with sustained abstinence, and (4) AUD patients are unaware of their emotional and social dysfunctions. Finally, current knowledge on the structural and functional brain correlates of these deficits in AUD is reviewed. Results: Emotional and social cognitive functions affected in AUD can potentially compromise efforts to initiate and maintain abstinence by hampering the efficacy of clinical treatment. Such dysfunction can obstruct efforts to enable or re-instate higher-order abilities such as emotional self-regulation, motivation to change, success in interpersonal/social interactions, and emotional insight and awareness of social dysfunctions (i.e., accurate metacognition). Conclusions: The present review highlights the need to account for emotional processing and social cognition in the evaluation and rehabilitation of alcohol-related neurocognitive disorders, and to consider psychotherapeutic treatment involving remediation of emotional and social skills, as implemented in psychiatric and neurological disorders.
Article
Humans seamlessly extract and integrate the emotional content delivered by the face and the voice of others. It is, however, poorly understood how perceptual decisions unfold in time when people discriminate the expression of emotions transmitted using dynamic facial and vocal signals, as in a natural social context. In this study, we relied on a gating paradigm to track how the recognition of emotion expressions across the senses unfolds over exposure time. We first demonstrate that, across all emotions tested, a discriminatory decision is reached earlier with faces than with voices. Importantly, multisensory stimulation consistently reduced the accumulation of perceptual evidence needed to reach correct discrimination (the isolation point). We also observed that expressions with different emotional content provide cumulative evidence at different speeds, with “fear” being the expression with the fastest isolation point across the senses. Finally, the lack of correlation between the confusion patterns in response to facial and vocal signals across time suggests distinct relations between the discriminative features extracted from the two signals. Altogether, these results provide a comprehensive view of how auditory, visual and audiovisual information related to different emotion expressions accumulates over time, highlighting how a multisensory context can speed up the discrimination process when minimal information is available.
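As an illustration, the isolation point in a gating design is commonly operationalized as the earliest gate from which a participant's response settles on the correct emotion and no longer changes. The function below is a minimal sketch of that rule; the gate responses and emotion labels are hypothetical, not taken from the study.

```python
def isolation_point(gate_responses, target):
    """Return the index of the earliest gate from which every
    subsequent response matches the target emotion, or None if
    the responses never stabilize on the target."""
    for i in range(len(gate_responses)):
        if all(r == target for r in gate_responses[i:]):
            return i
    return None

# Hypothetical trial: a listener labels four successive gates of a "fear" stimulus.
print(isolation_point(["anger", "fear", "fear", "fear"], "fear"))  # -> 1
```

Shorter isolation points under audiovisual stimulation, as reported above, would correspond to this index being reached at an earlier gate than in either unimodal condition.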
Article
Objective: This study presents an updated meta-analysis replicating Stavro, Pelletier, and Potvin (2013), “Widespread and sustained cognitive deficits in alcoholism: A meta-analysis” (Addiction Biology, 18, 203–213, doi:10.1111/j.1369-1600.2011.00418.x), regarding the cognitive functioning of alcoholics as a function of time abstinent. Methods: A total of 34 studies (including a total of 2,786 participants) that met pre-determined inclusion and exclusion criteria were included in the analyses. The alcoholics were categorized into recently detoxified alcoholics (0–31 days sober), alcoholics 32–365 days sober, and alcoholics >365 days sober, consistent with the previous study. The current study employed more stringent control on the tests included in the analysis, admitting only tasks described in contemporary neuropsychological test compendia. Forty-seven percent of the papers surveyed were not included in the previous meta-analysis. Results: The results indicated that there was a diffuse and pervasive pattern of cognitive deficit among recently detoxified alcoholics and that these deficits, particularly with regard to memory functioning, persisted even in longer-term abstinent alcoholics. This was inconsistent with the prior meta-analysis, which contended that significant cognitive recovery was possible after as little as 1 year. Conclusion: The persisting cognitive deficits were noted across a wide range of cognitive functions, supporting the notion of a diffuse rather than a specific compromise of cognition in alcoholism following discontinuation, as measured using standardised neuropsychological tests. Limitations on the finding included the fact that it was a cross-sectional rather than a longitudinal analysis, was subject to heterogeneity of method, had low representation of females in the samples, and had fewer studies of long-term sober samples.
Article
Visuoperceptive impairments are among the most frequently reported deficits in alcohol-use disorders, but only very few studies have investigated their origin and interactions with other categories of dysfunctions. Besides, these deficits have generally been interpreted in a linear bottom-up perspective, which appears very restrictive with respect to the new models of vision developed in healthy populations. Indeed, new theories highlight the predictive nature of the visual system and demonstrate that it interacts with higher-level cognitive functions to generate top-down predictions. These models notably posit that a fast but coarse visual analysis involving magnocellular pathways helps to compute heuristic guesses regarding the identity and affective value of inputs, which are used to facilitate conscious visual recognition. Building on these new proposals, the present review stresses the need to reconsider visual deficits in alcohol-use disorders as they might have crucial significance for core features of the pathology, such as attentional bias, loss of inhibitory control and emotion decoding impairments. Centrally, we suggest that individuals with severe alcohol-use disorders could present with magnocellular damage and we defend a dynamic explanation of the deficits. Rather than being restricted to high-level processes, deficits could start at early visual stages and then extend and potentially intensify during following steps due to reduced cerebral connectivity and dysfunctional cognitive/emotional regions. A new research agenda is specifically provided to test these hypotheses.
Article
Background: Despite growing evidence for neurobehavioral deficits in social cognition in alcohol use disorder (AUD), the clinical relevance remains unclear, and little is known about its impact on treatment outcome. This study prospectively investigated the impact of neurocognitive social abilities, assessed at treatment onset, on treatment completion. Methods: Fifty-nine alcohol-dependent patients were assessed with measures of social cognition covering three core components of empathy via paradigms measuring: 1) emotion recognition (the ability to recognize emotions via facial expression), 2) emotional perspective taking, and 3) affective responsiveness at the beginning of inpatient treatment for alcohol dependence. Subjective measures were also obtained, including estimates of task performance and a self-report measure of empathic abilities (Interpersonal Reactivity Index). According to treatment outcomes, patients were divided into a group with a regular treatment course (e.g., planned discharge, without relapse during treatment) or an irregular treatment course (e.g., relapse and/or premature, unplanned termination of treatment, "dropout"). Results: Compared with patients completing treatment in a regular fashion, patients with relapse and/or treatment dropout had significantly poorer facial emotion recognition ability at treatment onset. Additional logistic regression analyses confirmed these results and identified poor emotion recognition performance as a significant predictor of relapse/dropout. Self-report (subjective) measures did not correspond with the neurobehavioral social cognition measures, i.e., with objective task performance. Analyses of individual subtypes of facial emotions revealed poorer recognition particularly of disgust, anger, and neutral (no-emotion) faces in patients with relapse/dropout. Conclusions: Social cognition in AUD is clinically relevant.
Less successful treatment outcome was associated with poorer facial emotion recognition ability at the beginning of treatment. Impaired facial emotion recognition represents a neurocognitive risk factor that should be taken into account in alcohol dependence treatment. Treatments targeting the improvement of these social cognition deficits in AUD may offer a promising future approach.
Article
Introduction: Decoding emotional information from faces and voices is crucial for efficient interpersonal communication. Emotional decoding deficits have been found in alcohol-dependence (ALC), particularly in crossmodal situations (with simultaneous stimulations from different modalities), but are still underexplored in Korsakoff syndrome (KS). The aim of this study is to determine whether the continuity hypothesis, postulating a gradual worsening of cognitive and brain impairments from ALC to KS, is valid for emotional crossmodal processing. Methods: Sixteen KS patients, 17 ALC patients and 19 matched healthy controls (CP) had to detect the emotion (anger or happiness) displayed by auditory, visual or crossmodal auditory-visual stimuli. Crossmodal stimuli were either emotionally congruent (leading to a facilitation effect, i.e. enhanced performance in the crossmodal condition compared to unimodal ones) or incongruent (leading to an interference effect, i.e. decreased performance in the crossmodal condition due to discordant information across modalities). Reaction times and accuracy were recorded. Results: Crossmodal integration of congruent information was dampened only in ALC, while both ALC and KS demonstrated, compared to CP, decreased performance in decoding emotional facial expressions in the incongruent condition. Conclusions: Crossmodal integration appears impaired in ALC but preserved in KS. Both alcohol-related disorders present an increased interference effect. These results demonstrate the value of more ecological designs, using crossmodal stimuli, to explore emotional decoding in alcohol-related disorders. They also suggest that the continuity hypothesis cannot be generalised to emotional decoding abilities.
Article
Attention Deficit/Hyperactivity Disorder (ADHD) and Chronic Tic Disorder (CTD) are two common and frequently co-existing disorders, probably following an additive model. But this is not yet clear for the basic sensory function of colour processing, which is sensitive to dopaminergic functioning in the retina, or for higher cognitive functions like attention and interference control; the latter two reflect important aspects for psychoeducation and behavioural treatment approaches. Colour discrimination using the Farnsworth-Munsell 100-hue Test, sustained attention during the Frankfurt Attention Inventory (FAIR), and interference liability during Colour- and Counting-Stroop-Tests were assessed to further clarify the cognitive profile of the co-existence of ADHD and CTD. Altogether, 69 children were classified into four groups: ADHD (N = 14), CTD (N = 20), ADHD+CTD (N = 20) and healthy Controls (N = 15), and compared in cognitive functioning in a 2×2-factorial statistical model. Difficulties with colour discrimination were associated with both ADHD and CTD factors following an additive model, but in ADHD these difficulties tended to be more pronounced on the blue-yellow axis. Attention problems were characteristic for ADHD but not CTD. Interference load was significant in both Colour- and Counting-Stroop-Tests and unrelated to colour discrimination. Compared to Controls, interference load in the Colour-Stroop was higher in pure ADHD and in pure CTD, but not in ADHD+CTD, following a sub-additive model. In contrast, interference load in the Counting-Stroop did not reveal ADHD or CTD effects. The co-existence of ADHD and CTD is characterized by additive as well as sub-additive performance impairments, suggesting that their co-existence may show simple additive characteristics of both disorders or a more complex interaction, depending on demand. The equivocal findings on interference control may indicate limited validity of the Stroop paradigm for clinical assessments.
Article
Background and aims: Deficits in social cognitive abilities, including emotion recognition and theory of mind (ToM), can play a significant role in the interpersonal difficulties observed in alcohol use disorder (AUD). This meta-analysis aims to estimate mean effect sizes of deficits in social cognition in AUD and examines the effects of demographic and clinical confounding factors on the variability of effect sizes across studies. Methods: A literature review was conducted on research reports published from January 1990 to January 2016. Twenty-five studies investigating ToM and facial emotion recognition performances of 756 individuals with AUD and 681 healthy controls were selected after applying inclusion and exclusion criteria. Weighted effect sizes (d) were calculated for ToM, for the decoding and reasoning aspects of ToM, for total facial emotion recognition and for recognition of each of six basic emotions. Results: Facial emotion recognition was significantly impaired (d = 0.65, 95% confidence interval (CI) = 0.42-0.89), particularly for disgust and anger. AUD was also associated with deficits in ToM (d = 0.58, 95% CI = 0.36-0.81). These deficits were evident in tasks measuring both the decoding (d = 0.46, 95% CI = 0.19-0.73) and reasoning (d = 0.72, 95% CI = 0.37-1.06) aspects of ToM. Longer duration of alcohol misuse and more depressive symptoms were associated with more severe deficits in recognition of facial emotions. Conclusions: Alcohol use disorder appears to be associated with significant impairment in facial emotion recognition and theory of mind.
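For readers unfamiliar with the effect-size convention used in this meta-analysis, the d values above are variants of Cohen's d. A minimal sketch of the standard (unweighted) computation from group summary statistics, using made-up numbers rather than data from any of the reviewed studies:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d from two group means, SDs, and sizes, using the pooled SD."""
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / s_pooled

def ci95(d, n1, n2):
    """Approximate 95% CI for d via the usual large-sample standard error."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - 1.96 * se, d + 1.96 * se

# Hypothetical groups: controls vs. patients on an emotion-recognition score.
d = cohens_d(10.0, 2.0, 30, 8.0, 2.0, 30)   # d = 1.0
print(round(d, 2), [round(x, 2) for x in ci95(d, 30, 30)])
```

A meta-analytic weighted d additionally pools such per-study estimates, typically weighting by the inverse of each study's sampling variance.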
Article
Ecologists and evolutionary biologists are relying on an increasingly sophisticated set of statistical tools to describe complex natural systems. One such tool that has gained increasing traction in the life sciences is structural equation modeling (SEM), a variant of path analysis that resolves complex multivariate relationships among a suite of interrelated variables. SEM has historically relied on covariances among variables, rather than the values of the data points themselves. While this approach permits a wide variety of model forms, it limits the incorporation of detailed specifications. Here, I present a fully-documented, open-source R package piecewiseSEM that builds on the base R syntax for all current generalized linear, least-square, and mixed effects models. I also provide two worked examples: one involving a hierarchical dataset with non-normally distributed variables, and a second involving phylogenetically-independent contrasts. My goal is to provide a user-friendly and tractable implementation of SEM that also reflects the ecological and methodological processes generating data.
Article
Emotion perception naturally entails multisensory integration. It is also assumed that multisensory emotion perception is characterized by enhanced activation of brain areas implicated in multisensory integration, such as the superior temporal gyrus and sulcus (STG/STS). However, most previous studies have employed designs and stimuli that preclude other forms of multisensory interaction, such as crossmodal prediction, leaving open the question of whether classical integration is the only relevant process in multisensory emotion perception. Here, we used video clips containing emotional and neutral body and vocal expressions to investigate the role of crossmodal prediction in multisensory emotion perception. While emotional multisensory expressions increased activation in the bilateral fusiform gyrus (FFG), neutral expressions compared to emotional ones enhanced activation in the bilateral middle temporal gyrus (MTG) and posterior STS. Hence, while neutral stimuli activate classical multisensory areas, emotional stimuli invoke areas linked to unisensory visual processing. Emotional stimuli may therefore trigger a prediction of upcoming auditory information based on prior visual information. Such prediction may be stronger for highly salient emotional compared to less salient neutral information. Therefore, we suggest that multisensory emotion perception involves at least two distinct mechanisms: classical multisensory integration, as shown for neutral expressions, and crossmodal prediction, as evident for emotional expressions.
Article
Background: Attentional biases and deficits play a central role in the development and maintenance of alcohol dependence, but the underlying attentional processes accounting for these deficits have been very little explored. Importantly, the differential alterations across the 3 attentional networks (alerting, orienting, and executive control) remain unclear in this pathology. Methods: Thirty recently detoxified alcohol-dependent individuals and 30 paired controls completed the Attention Network Test, which allows exploring the attentional alterations specifically related to the 3 attentional networks. Results: Alcohol-dependent individuals presented globally delayed reaction times compared to controls. More centrally, they showed a differential deficit across attention networks, with preserved performance for the alerting and orienting networks but impaired executive control (p < 0.001). This deficit was not related to psychopathological comorbidities but was positively correlated with the duration of alcohol-dependence habits, the number of previous detoxification treatments and the mean alcohol consumption before detoxification. Conclusions: These results suggest that attentional alterations in alcohol dependence are centrally due to a specific alteration of executive control. Intervention programs focusing on executive components of attention should be promoted, and these results support the frontal lobe hypothesis.
Article
Crossmodal processing (i.e., the construction of a unified representation from inputs in distinct sensory modalities) constitutes a crucial ability in humans’ everyday life. It has been extensively explored at the cognitive and cerebral levels during the last decade among healthy controls. Paradoxically, however, while difficulties in performing this integrative process have been suggested in a large range of psychopathological states (e.g., schizophrenia and autism), crossmodal paradigms have very rarely been used in the exploration of psychiatric populations. The main aim of the present paper is thus to underline the experimental and clinical usefulness of exploring crossmodal processes in psychiatry. We will illustrate this proposal by means of recent data obtained in the crossmodal exploration of emotional alterations in alcohol-dependence. Indeed, emotional decoding impairments might have a role in the development and maintenance of alcohol-dependence, and have been extensively investigated by means of experiments using separate visual or auditory stimulations. Beyond these unimodal explorations, we have recently conducted several studies using audio-visual crossmodal paradigms, which have allowed us to improve the ecological validity of the unimodal experimental designs and to offer new insights into the emotional alterations among alcohol-dependent individuals. We will show how these preliminary results can be extended to develop a coherent and ambitious research program using crossmodal designs in various psychiatric populations and sensory modalities. We will finally end the paper by underlining the various potential clinical applications and the fundamental implications that can be raised by this emerging project.
Article
Describes a new instrument, the Inventory of Interpersonal Problems (IIP), which measures distress arising from interpersonal sources. The IIP meets the need for an easily administered self-report inventory that describes the types of interpersonal problems that people experience and the level of distress associated with them before, during, and after psychotherapy. In Study 1, psychometric data are presented for 103 patients who were tested at the beginning and end of a waiting period before they began brief dynamic psychotherapy. On both occasions, a factor analysis yielded the same six subscales; these scales showed high internal consistency and high test–retest reliability. Study 2 demonstrated the instrument's sensitivity to clinical change. In this study, a subset of patients was tested before, during, and after 20 sessions of psychotherapy. Their improvement on the IIP agreed well with all other measures of their improvement, including those generated by the therapist and by an independent evaluator. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
This paper is concerned with the human observer's behavior in detecting light signals in a uniform light background. Detection of these signals depends on information transmitted to cortical centers by way of the visual pathways. An analysis is made of the form of this information, and of the types of decisions which can be based on information of this form. Based on this analysis, the expected form of data collected in "yes-no" and "forced-choice" psychophysical experiments is defined, and experiments demonstrating the internal consistency of the theory are presented.
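The sensitivity index d' used in the head article's analyses comes from this signal detection framework: it estimates the separation between signal and noise distributions as the difference of the z-transformed hit and false-alarm rates. A minimal sketch using only the Python standard library; the rates below are illustrative, not taken from any study.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index: z(hit rate) - z(false-alarm rate).
    Rates must lie strictly between 0 and 1; observed rates of
    0 or 1 need a correction (e.g., the log-linear rule) first."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative observer: 80% hits, 20% false alarms.
print(round(d_prime(0.8, 0.2), 3))  # -> 1.683
```

Unlike raw accuracy, d' separates discrimination ability from response bias, which is why it is the measure of choice for the discrimination task described in the abstract above.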
Article
To test the generalized emotional decoding impairment hypothesis in alcoholism. Cross-sectional behavioural study comparing emotion recognition conveyed by faces, voices and musical excerpts. Alcohol detoxification unit of Brugmann University Hospital. Twenty-five recently detoxified alcohol-dependent patients were compared to 25 normal controls matched for sex, age and educational level. From faces, voices and musical excerpts, participants were instructed to rate the intensity of several emotions on a scale from 0 for ‘absent’ to 9 for ‘highly present’. Depression, anxiety and sustained/selective attention capacities were controlled for. Alcohol-dependent patients were less accurate than controls in identifying the target emotion in faces (P < 0.001), voices (P < 0.001) and musical excerpts (P < 0.001). Alcohol-dependent patients who are completing detoxification are impaired in recognizing emotions conveyed by faces, voices and music; these results suggest a generalized emotional decoding impairment. Hypothetically, deficits in the fronto-parietal mirror neurone system could link all these disturbances together.
Article
The present study investigated emotional facial expression decoding in alcoholics. Twenty-five alcoholic patients at the end of the detoxification process were compared with 25 volunteers matched for age, sex, and education. They were presented with facial expressions of neutral, mild, moderate, or strong emotional intensity. Results indicate that alcoholics overestimate the intensity of emotional expressions and make more errors in their decoding with a special bias for anger and contempt. Moreover, this decoding deficit is not perceived by the alcoholic patients. A general model is proposed that links visuospatial deficits, abnormal processing of social information, interpersonal stress, and alcohol abuse.
Article
Words varying in length (one, two, and three syllables) and in frequency (high and low) were presented to subjects in isolation, in a short context, and in a long context. Each word was presented repeatedly, and its presentation time (duration from the onset of the word) increased at each successive pass. After each pass, subjects were asked to write down the word being presented and to indicate how confident they were about each guess. In addition to replicating a frequency, a context, and a word-length effect, this “gating” paradigm allowed us to study more closely the narrowing-in process employed by listeners in the isolation and recognition of words: Some delay appears to exist between the moment a word is isolated from other word candidates and the moment it is recognized; word candidates differ in number and in type from one context to the other; and, like syntactic processing, word recognition is strewn with garden paths. The active direct access model proposed by Marslen-Wilson and Welsh is discussed in light of these findings.
Article
Alcohol-dependence is associated with cognitive and biological alterations, and also with interpersonal impairments. Although overwhelming in clinical settings and involved in relapse, these social impairments have received little attention from researchers. Particularly, brain alterations related to social exclusion have not been explored in alcohol-dependence. Our primary purpose was to determine the neural correlates of social exclusion feelings in this population. In all, 44 participants (22 abstinent alcohol-dependent patients and 22 paired controls) played a virtual game ('cyberball') during fMRI recording. They were first included by other players, then excluded, and finally re-included. Brain areas involved in social exclusion were identified and the functional connectivity between these areas was explored using psycho-physiological interactions (PPI). Results showed that while both groups presented dorsal anterior cingulate cortex (dACC) activations during social exclusion, alcohol-dependent participants exhibited increased insula and reduced frontal activations (in ventrolateral prefrontal cortex) as compared with controls. Alcohol-dependence was also associated with persistent dACC and parahippocampal gyrus activations in re-inclusion. PPI analyses showed reduced frontocingulate connectivity during social exclusion in alcohol-dependence. Alcohol-dependence is thus linked with increased activation in areas eliciting social exclusion feelings (dACC-insula), and with impaired ability to inhibit these feelings (indexed by reduced frontal activations). Altered frontal regulation thus appears implied in the interpersonal alterations observed in alcohol-dependence, which seem reinforced by impaired frontocingulate connectivity. This first exploration of the neural correlates of interpersonal problems in alcohol-dependence could initiate the development of a social neuroscience of addictive states.
Article
Emotional facial expression (EFE) decoding skills have been shown to be impaired in recovering alcoholics (RA). The aim of the present study is to replicate these results and to explore whether these abnormalities are specific to alcoholism using two control groups: non-patient controls (NC) and patients with obsessive-compulsive disorder (OC). Twenty-two alcoholic patients at the end of their detoxification process (RA) were compared to 22 OC and 22 NC matched for age, sex and education level. They were presented with 12 photographs of facial expressions portraying different emotions: happiness; anger; and fear. Each emotion was displayed with mild (30%) and moderate (70%) intensity levels. Each EFE was judged on 8 scales labeled happiness, sadness, fear, anger, disgust, surprise, shame and contempt. For each scale, subjects rated the estimated intensity level. RA were less accurate in EFE decoding than OC and NC, particularly for anger and happiness expressions. RA overestimated the emotional intensity for mild intensity level expressions compared with both OC and NC while no significant differences emerged for moderate intensity level expressions. Deficits in EFE decoding skills seem to be specific to RA when compared with OC. Comparison with other psychopathological groups is still needed. Possible consequences of EFE decoding deficits in RA include distorted interpersonal relationships.
Article
Electrophysiological studies in nonhuman primates and other mammals have shown that sensory cues from different modalities that appear at the same time and in the same location can increase the firing rate of multisensory cells in the superior colliculus to a level exceeding that predicted by summing the responses to the unimodal inputs. In contrast, spatially disparate multisensory cues can induce a profound response depression. We have previously demonstrated using functional magnetic resonance imaging (fMRI) that similar indices of crossmodal facilitation and inhibition are detectable in human cortex when subjects listen to speech while viewing visually congruent and incongruent lip and mouth movements. Here, we have used fMRI to investigate whether similar BOLD signal changes are observable during the crossmodal integration of nonspeech auditory and visual stimuli, matched or mismatched solely on the basis of their temporal synchrony, and if so, whether these crossmodal effects occur in similar brain areas as those identified during the integration of audio-visual speech. Subjects were exposed to synchronous and asynchronous auditory (white noise bursts) and visual (B/W alternating checkerboard) stimuli and to each modality in isolation. Synchronous and asynchronous bimodal inputs produced superadditive BOLD response enhancement and response depression across a large network of polysensory areas. The most highly significant of these crossmodal gains and decrements were observed in the superior colliculi. Other regions exhibiting these crossmodal interactions included cortex within the superior temporal sulcus, intraparietal sulcus, insula, and several foci in the frontal lobe, including within the superior and ventromedial frontal gyri. 
These data demonstrate the efficacy of using an analytic approach informed by electrophysiology to identify multisensory integration sites in humans and suggest that the particular network of brain areas implicated in these crossmodal integrative processes are dependent on the nature of the correspondence between the different sensory inputs (e.g. space, time, and/or form).
Article
The cognitive repercussions of alcohol dependence are well documented. However, the literature remains somewhat ambiguous with respect to which distinct cognitive functions are more susceptible to impairment in alcoholism and to how duration of abstinence affects cognitive recovery. Some theories claim alcohol negatively affects specific cognitive functions, while others assert that deficits are more diffuse in nature. This is the first meta-analysis to examine cognition in alcohol abuse/dependence and the duration of abstinence necessary to achieve cognitive recovery. A literature search yielded 62 studies that assessed cognitive dysfunction among alcoholics. Effect size estimates were calculated using the Comprehensive Meta-Analysis V2, for the following 12 cognitive domains: intelligence quotient, verbal fluency/language, speed of processing, working memory, attention, problem solving/executive functions, inhibition/impulsivity, verbal learning, verbal memory, visual learning, visual memory and visuospatial abilities. Within these 12 domains, three effect size estimates were calculated based on abstinence duration. The three groups were partitioned into short- (<1 month), intermediate- (2 to 12 months) and long- (>1 year) term abstinence. Findings revealed moderate impairment across 11 cognitive domains during short-term abstinence, with moderate impairment across 10 domains during intermediate term abstinence. Small effect size estimates were found for long-term abstinence. These results suggest significant impairment across multiple cognitive functions remains stable during the first year of abstinence from alcohol. Generally, dysfunction abates by 1 year of sobriety. These findings support the diffuse brain hypothesis and suggest that cognitive dysfunction may linger for up to an average of 1 year post-detoxification from alcohol.
Article
Emotion in daily life is often expressed in a multimodal fashion. Consequently emotional information from one modality can influence processing in another. In a previous fMRI study we assessed the neural correlates of audio-visual integration and found that activity in the left amygdala is significantly attenuated when a neutral stimulus is paired with an emotional one compared to conditions where emotional stimuli were present in both channels. Here we used dynamic causal modelling to investigate the effective connectivity in the neuronal network underlying this emotion presence congruence effect. Our results provided strong evidence in favor of a model family, differing only in the interhemispheric interactions. All winning models share a connection from the bilateral fusiform gyrus (FFG) into the left amygdala and a non-linear modulatory influence of bilateral posterior superior temporal sulcus (pSTS) on these connections. This result indicates that the pSTS not only integrates multi-modal information from visual and auditory regions (as reflected in our model by significant feed-forward connections) but also gates the influence of the sensory information on the left amygdala, leading to attenuation of amygdala activity when a neutral stimulus is integrated. Moreover, we found a significant lateralization of the FFG due to stronger driving input by the stimuli (faces) into the right hemisphere, whereas such lateralization was not present for sound-driven input into the superior temporal gyrus. In summary, our data provides further evidence for a rightward lateralization of the FFG and in particular for a key role of the pSTS in the integration and gating of audio-visual emotional information.
Article
Increasing evidence suggests that the visual representations of different emotional facial expressions overlap. Here we used an adaptation paradigm to investigate overlap of anger, disgust and fear expressions. In Experiment 1, participants categorized faces morphed from neutral to anger or neutral to disgust after adaptation to expressions of anger, disgust, and fear. Adaptation to expressions of both anger and disgust was found to bias perception of anger expressions away from anger. For disgust expressions, adaptation to disgust biased perception away from disgust, whereas fear adaptation biased perception towards disgust. Adaptation to anger had no measurable effect. In Experiment 2, covering the mouth-region of the disgust adaptation face was found to severely diminish the effect of disgust adaptation on perception of anger targets whereas covering the nose- or eye-region had no effect. In Experiment 3, adaptation to anger had a substantial effect on perception of anger targets when the mouth-region of the anger face was covered; indicating that the results of Experiment 2 are not an artefact of the stimuli and procedures used. These results indicate that the visual representations of anger, disgust and fear expressions overlap to a considerable degree. Furthermore, the nature of this overlap appears related to the communicative functions of these expressions.