Article · Literature Review

Abstract

Within affective science, the central line of inquiry, animated by basic emotion theory and constructivist accounts, has been the search for one-to-one mappings between six emotions and their subjective experiences, prototypical expressions, and underlying brain states. We offer an alternative perspective: semantic space theory. This computational approach uses wide-ranging naturalistic stimuli and open-ended statistical techniques to capture systematic variation in emotion-related behaviors. Upwards of 25 distinct varieties of emotional experience have distinct profiles of associated antecedents and expressions. These emotions are high-dimensional, categorical, and often blended. This approach also reveals that specific emotions, more than valence, organize emotional experience, expression, and neural processing. Overall, moving beyond traditional models to study broader semantic spaces of emotion can enrich our understanding of human experience.


... This variability highlights a fundamental challenge in SER: the subjectivity of emotion perception. Such disagreements underscore the complexity of accurately identifying emotions from speech, as individuals' interpretations of emotional content can vary widely based on their experiences, biases, and cultural backgrounds [5,3,6]. However, the standard approaches regard the disagreement as noise and use the majority rule (MR) or plurality rule (PR) to find the consensus labels. ...
... assessing the consistency between a model's predicted distribution and subjective annotations is an effective method for evaluating whether an SER model aligns with human emotional perception. Furthermore, the work of psychologists [6,31] supports the idea that emotion perception is not only high-dimensional but also blended in nature. For the evaluation phase, inspired by [32], we employ a threshold technique [33,34,35] to transform the distribution label into a binary vector, which serves as the basis for emotion decision-making for each sample, similar to a multi-hot encoding scheme. ...
... For example, the IEMOCAP corpus uses MR for constructing the ground-truth labels [18,36], discarding approximately 31.37% of the data and 49.44% of the ratings, as shown in Table 3. However, real-life emotional states can co-occur in many situations (e.g., sad and angry) [31,6,37]. Previous studies discarded many data points in the test set since they assumed each sample had only one emotional category. The ground-truth category then does not reflect secondary emotions that are also conveyed in the utterance. ...
Conference Paper
Full-text available
Speech Emotion Recognition (SER) faces a distinct challenge compared to other speech-related tasks because its annotations reflect the subjective emotional perceptions of different annotators. Previous SER studies often treat the subjectivity of emotion perception as noise, using the majority rule or plurality rule to obtain consensus labels. However, these standard approaches discard the valuable information in labels that do not agree with the consensus and make the test set easier than it should be. Emotion perception can involve co-occurring emotions in realistic conditions, so it is unnecessary to regard disagreement between raters as noise. To recast SER as a multi-label task, we introduce an "all-inclusive rule" (AR), which considers all available data, ratings, and distributional labels as multi-label targets and yields a complete test set. We demonstrate that models trained with multi-label targets generated by the proposed AR outperform conventional single-label methods across incomplete and complete test sets.
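A minimal sketch of the thresholding step these excerpts describe, assuming per-annotator categorical ratings; the function name is illustrative, and the default 1/num_classes threshold follows the 1/6 example quoted in a later excerpt:

```python
import numpy as np

def multihot_from_ratings(rater_labels, num_classes, threshold=None):
    """Convert per-annotator categorical ratings into a multi-hot target.

    Any class whose share of votes reaches the threshold (1/num_classes
    by default, e.g. 1/6 for six emotions) is marked as present, so
    co-occurring emotions are retained rather than discarded by
    majority voting.
    """
    counts = np.bincount(rater_labels, minlength=num_classes)
    dist = counts / counts.sum()          # distributional (soft) label
    if threshold is None:
        threshold = 1.0 / num_classes
    return (dist >= threshold).astype(int), dist

# Example: five raters label one utterance over six emotion classes.
labels, dist = multihot_from_ratings([0, 0, 1, 1, 5], num_classes=6)
print(labels)  # -> [1 1 0 0 0 1]: co-occurring emotions are kept
```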
... Another tradition has focused on a small set of emotions, such as "joy", "sadness", "anger", "fear", and "interest", providing key insights on experiences of basic emotions [22-24]. However, the space of emotional experience is now known to be much richer (see [25] for a review). Advanced computational techniques applied to the study of facial-bodily expression, vocal bursts, prosody, the feelings evoked by music and video, and brain patterning triggered by emotional videos provide convergent evidence for at least eighteen distinct states found across all modalities and media, in addition to modality- or media-specific states, such as pride and shame in the face/body and the feeling of dreaminess in response to music [14, 26-31]. ...
... Guided by Semantic Space Theory, here we take a data-driven approach to mapping aesthetic experience [25], departing significantly from the concept-driven methods of past studies. Semantic Space Theory offers a computational approach that uses wide-ranging naturalistic stimuli and open-ended statistical techniques to capture systematic variation in emotion-related behavior across modalities (face, body, voice, neurophysiology, language [25]). New quantitative approaches to partitioning variance enable the examination of three properties of the semantic space of experience: dimensionality, distribution, and conceptualization. ...
Article
Full-text available
Despite the evolutionary history and cultural significance of visual art, the structure of aesthetic experiences it evokes has only attracted recent scientific attention. What kinds of experience does visual art evoke? Guided by Semantic Space Theory, we identify the concepts that most precisely describe people’s aesthetic experiences using new computational techniques. Participants viewed 1457 artworks sampled from diverse cultural and historical traditions and reported on the emotions they felt and their perceived artwork qualities. Results show that aesthetic experiences are high-dimensional, comprising 25 categories of feeling states. Extending well beyond hedonism and broad evaluative judgments (e.g., pleasant/unpleasant), aesthetic experiences involve emotions of daily social living (e.g., “sad”, “joy”), the imagination (e.g., “psychedelic”, “mysterious”), profundity (e.g., “disgust”, “awe”), and perceptual qualities attributed to the artwork (e.g., “whimsical”, “disorienting”). Aesthetic emotions and perceptual qualities jointly predict viewers’ liking of the artworks, indicating that we conceptualize aesthetic experiences in terms of the emotions we feel but also the qualities we perceive in the artwork. Aesthetic experiences are often mixed and lie along continuous gradients between categories rather than within discrete clusters. Our collection of artworks is visualized within an interactive map (https://barradeau.com/2021/emotions-map/), revealing the high-dimensional space of aesthetic experiences associated with visual art.
... Different psychological theories have debated the nature of emotion (11-20). Most current theories have converged on three insights: (i) emotions arise from interpreting and evaluating what is happening (e.g., the situation); (ii) emotions have variability within and between cultures and individuals; and (iii) humans share some universal aspects of emotions (11, 12, 14). ...
... More recently, semantic space theory moves beyond and incorporates these rival emotion theories to describe dozens of emotions, such as confusion, amusement, admiration, awe, sexual desire, and nostalgia (17-20). The creators of semantic space theory not only went beyond the existing paradigms to better understand the nature of emotions as potentially distinct clusters of emotional experiences, appraisals, behaviors, and influences but also acknowledged these clusters as simultaneously contextualized, variable, and recognizable within and across cultures (19). They brought advanced statistical techniques and multiple large corpora (e.g., of vocalization, nonverbal facial and body expression, and feelings evoked by videos and music) to ask whether there are distinct boundaries between emotions, how many clusters of emotions there are, and whether emotional experiences correspond to specific emotions or to valence and arousal (19). ...
Article
Full-text available
While emotional content predicts social media post sharing, competing theories of emotion imply different predictions about how emotional content will influence the virality of social media posts. We tested and compared these theoretical frameworks. Teams of annotators assessed more than 4000 multimedia posts from Polish and Lithuanian Facebook for more than 20 emotions. We found that, drawing on semantic space theory, modeling discrete emotions independently was superior to models examining valence (positive or negative), activation/arousal (high or low), or clusters of emotions, and performed on par with a seven basic emotion model while offering more explanatory power. Certain discrete emotions were associated with post sharing, including both positive and negative and relatively lower and higher activation/arousal emotions (e.g., amusement, cute/kama muta, anger, and sadness), even when controlling for number of followers, time up, topic, and Facebook angry reactions. These results provide key insights into better understanding of social media post virality.
... Emotions are complex and are systematically blended together (Cowen & Keltner, 2021). Thus, we extracted overarching latent emotion components with a principal component analysis (PCA). ...
... There is some evidence for "knowledge-seeking" emotions, because confusion loads onto four of these components and interest loads onto three. However, the emotions here do not appear to fall into four discrete groups but gather into a large high-dimensional semantic space where emotion instances are systematically blended together, consistent with previous observations (Cowen & Keltner, 2021). Indeed, a similar number of latent emotion components was extracted compared to previous observations (26 components against Cowen & Keltner's (2017) 27 categories), suggesting that art-elicited emotional instances are just as multivariate as emotions evoked by short videos (Cowen & Keltner, 2017) or music excerpts (Cowen et al., 2020). ...
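A minimal sketch of the latent-component extraction step described in these excerpts, using scikit-learn's PCA on a synthetic artworks-by-emotion-terms rating matrix; the matrix shape and the 95% variance criterion are illustrative assumptions, not the study's exact settings:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic stand-in: 1,190 artworks rated on 40 emotion terms.
# Real rating matrices are structured, so far fewer components suffice.
ratings = rng.random((1190, 40))

# Keep as many components as needed to explain 95% of the variance;
# the study above retained 26 latent emotion components.
pca = PCA(n_components=0.95)
components = pca.fit_transform(ratings)
print(components.shape, pca.explained_variance_ratio_[:5])
```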
Article
Full-text available
Formal visual features of artworks are acknowledged as objective influences on aesthetic appreciation. However, aesthetic appeal can also be influenced by more subjective features, such as the artwork-elicited emotional instances or individual cultural background and aesthetic experience. Here, we relied on a large-scale statistical inference and data-driven approach to discover any interplay between formal features and art-elicited emotions on aesthetic appeal. In total, 1,190 paintings were examined for their formal features and rated on their specific induced emotional instances and aesthetic appreciation by 408 participants using an online self-report study. The effect of formal features and emotional instances on predicting aesthetic appreciation was determined by comparing different models on their Akaike information criterion and explained variance. Only a moderate agreement was found on aesthetic appreciation and artwork-elicited emotional instances across participants. Emotion-related and subjective measures were better at predicting aesthetic appreciation than formal features; a model that accounts for all variables is most predictive of aesthetic appeal toward artworks. In particular, when extracting overarching latent emotion components, five of these components could predict aesthetic appreciation significantly. This data-driven approach indicated that aesthetic appreciation is jointly but differentially influenced by formal features and emotion-related measures, likely based on a multitude of individual experiences that crucially unravel the essence of aesthetic appreciation.
... datasets. However, the synthesized data is conditioned on predefined text and discrete emotion classes, which lack naturalness, fail to capture nuanced paralinguistic cues such as hesitations [24] and vocal bursts [7], and overlook the possibility of multiple, complex emotions co-occurring within a single speech instance [11], [14]. Moreover, it depends on Azure TTS, which supports only English, restricting its applicability to other languages. ...
... Additionally, given the availability of soft labels (a probability distribution derived from the voting results of multiple annotators and adjusted using a smoothing technique [37]), we leverage these labels to account for the multidimensional nature of human emotions [14]. This forms the basis of our second selection criterion, which we adopt as the default approach. ...
Preprint
Full-text available
Speech Emotion Recognition (SER) is a crucial component in developing general-purpose AI agents capable of natural human-computer interaction. However, building robust multilingual SER systems remains challenging due to the scarcity of labeled data in languages other than English and Chinese. In this paper, we propose an approach to enhance SER performance in languages with scarce SER resources by leveraging data from high-resource languages. Specifically, we employ expressive Speech-to-Speech translation (S2ST) combined with a novel bootstrapping data selection pipeline to generate labeled data in the target language. Extensive experiments demonstrate that our method is both effective and generalizable across different upstream models and languages. Our results suggest that this approach can facilitate the development of more scalable and robust multilingual SER systems.
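The soft-label construction mentioned in the excerpt above (annotator votes turned into a probability distribution, then smoothed) can be sketched as follows; this assumes standard label smoothing, with the 0.05 epsilon borrowed from a later excerpt in this list:

```python
import numpy as np

def soft_label(vote_counts, epsilon=0.05):
    """Turn annotator vote counts into a label-smoothed distribution.

    Standard label smoothing: mix the empirical vote distribution with
    a uniform distribution over the classes, weighted by epsilon.
    """
    counts = np.asarray(vote_counts, dtype=float)
    dist = counts / counts.sum()
    k = len(counts)
    return (1.0 - epsilon) * dist + epsilon / k

# Five votes over four classes: 3 for class 0, 1 each for classes 1 and 2.
print(soft_label([3, 1, 1, 0]))  # -> [0.5825 0.2025 0.2025 0.0125]
```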
... Human emotions have personal differences owing to various environmental factors. It is difficult to classify emotions accurately because various emotions are mixed and high-dimensional [35,50]. Cowen et al. [50] argued that emotions are high-dimensional, comprising more than 25 distinct types, each with a unique response profile. Therefore, the limitations of the existing emotion dimension model can be resolved by analyzing the correlation between valence and arousal for each emotion and then applying various methods, including surveys. ...
Article
Full-text available
As virtual reality (VR) technology advances, research has focused on enhancing VR content for a more realistic user experience. Traditional emotion analysis relies on surveys, but these suffer from delayed responses and decreased immersion, leading to distorted results. To overcome these limitations, we propose an emotion analysis method using sensor data in the VR environment. Our approach takes advantage of the user's immediate responses without reducing immersion. Linear regression, classification analysis, and tree-based methods were applied to electrocardiogram and galvanic skin response (GSR) sensor data to measure valence and arousal values. We introduce a novel emotional dimension model by analyzing correlations between emotions and the valence and arousal values. Experimental results demonstrated the highest accuracies of 77% and 92.3% for valence and arousal prediction, respectively, using GSR sensor data. Furthermore, an accuracy of 80.25% was achieved in predicting valence and arousal using nine emotions. Our proposed model improves VR content through more accurate emotion analysis in a VR environment, which can be useful for targeting customers in various industries, such as marketing, gaming, education, and healthcare.
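As an illustration of the classification setup this abstract describes (tree-based prediction of valence from physiological features), here is a minimal sketch with synthetic data; the feature set, model choice, and sizes are assumptions, not the paper's pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic stand-in for per-window GSR features (e.g., mean, std,
# peak count, slope); real features would come from sensor streams.
X = rng.random((500, 8))
valence = rng.integers(0, 2, 500)   # 0 = negative, 1 = positive

X_tr, X_te, y_tr, y_te = train_test_split(X, valence, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("valence accuracy:", clf.score(X_te, y_te))
```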
... Therefore, grouping all positive emotions under a single or limited number of terms (Scherer, 1986) is misleading. More generally, the emotional space in which we communicate is richer than previously studied (Cowen & Keltner, 2021; Keltner, 2019; Keltner et al., 2019), which supports the need for diversification from the six basic emotions studied by Ekman in the 1970s (Ekman & Friesen, 1971; for an extensive description, see Ekman, 1992, 1999). For the study of emotional prosody, such findings encourage researchers to further extend the usual number of emotional states or dimensions (e.g., 14 in Banse & Scherer, 1996; 12 in Cowen et al., 2019) to reach a more realistic view of the range of emotions communicated through prosody. ...
... 2. The strength of the relation between the physical signal and the expression or perception of a specific emotion depends on the theoretical framework; for example, there is a strong acoustic-emotion relation in affect program theories but a more flexible relation in appraisal and constructivist ones. Interestingly, there is empirical evidence supporting both a straightforward mapping (e.g., association between roughness of screams and the expression of fear; see Arnal et al., 2015; for a review on neural response patterning, see Cowen & Keltner, 2021) and a more complex one (see Barrett, 2017; for an example in the visual domain, see Barrett et al., 2019; for a discussion on universality, see Gendron et al., 2018). ...
Article
Full-text available
Emotional voices attract considerable attention. A search on any browser using “emotional prosody” as a key phrase leads to more than a million entries. Such interest is evident in the scientific literature as well; readers are reminded in the introductory paragraphs of countless articles of the great importance of prosody and that listeners easily infer the emotional state of speakers through acoustic information. However, despite decades of research on this topic and important achievements, the mapping between acoustics and emotional states is still unclear. In this article, we chart the rich literature on emotional prosody for both newcomers to the field and researchers seeking updates. We also summarize problems revealed by a sample of the literature of the last decades and propose concrete research directions for addressing them, ultimately to satisfy the need for more mechanistic knowledge of emotional prosody.
... SSET categorises emotions, such as joy, anger, interest, admiration, and empathic pain, among others, in terms of systematic variation in emotion-related responses. The theory characterises a semantic space using three variables: (1) its dimensionality, (2) conceptualisation of implications in terms of intentions, appraisals, or mental states, and (3) the distribution of experiences within the space (Cowen & Keltner, 2021). ...
... Disagreement does not necessarily need to be removed, and non-consensus samples represent real-world emotion perception. Additionally, conveyed and perceived emotions are not always discrete, as found by psychological studies [11], [12], [27]. In other words, co-occurrence of emotions can occur in a single speech. ...
Conference Paper
Full-text available
Speech Emotion Recognition (SER) is a crucial component in human-computer interaction (HCI). Previous SER research often utilizes Automatic Speech Recognition (ASR) systems to improve SER performance. Some studies have investigated the possibility of modeling multi-task learning for ASR and SER. However, prior studies and our preliminary experiments show that conflicts arise between ASR and SER when standard multi-task learning is used. We adopt a two-stage training strategy with weighted losses to overcome the conflicts between SER and ASR tasks. We utilize the public Whisper model, which supports ASR tasks and other functionalities, and add a small, lightweight adapter on Whisper for the SER task, referred to as Whisper-SER. We first fine-tune the model on the ASR task and then fine-tune it using a weighted loss strategy, balancing the losses between the ASR and multi-label SER tasks during training. The proposed method allows Whisper-SER to recognize emotions and transcribe speech without degrading ASR and SER performance within the same encoder and decoder.
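A minimal PyTorch sketch of the weighted-loss idea in this abstract, assuming a precomputed ASR loss and a multi-label SER head trained with binary cross-entropy; the 0.5 weight is an illustrative assumption, not the paper's tuned value:

```python
import torch
import torch.nn as nn

ser_criterion = nn.BCEWithLogitsLoss()  # multi-label SER objective

def combined_loss(asr_loss, ser_logits, ser_targets, ser_weight=0.5):
    """Balance the ASR loss against the multi-label SER loss."""
    ser_loss = ser_criterion(ser_logits, ser_targets)
    return (1 - ser_weight) * asr_loss + ser_weight * ser_loss

logits = torch.randn(4, 6)                     # batch of 4, six emotions
targets = torch.randint(0, 2, (4, 6)).float()  # multi-hot SER targets
print(combined_loss(torch.tensor(1.8), logits, targets))
```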
... In this work, we have six emotions in total (threshold = 1/6), so the testing label of the audio-visual scenario (1,1,0,0,0,0) differs from the others (1,1,0,0,0,1) after applying the threshold method introduced in [10]. We also follow [17] in allowing samples to have more than one emotion, reflecting the psychological finding that emotion perception can involve mixed emotions [18]. ...
Preprint
Full-text available
Speech Emotion Recognition (SER) systems rely on speech input and emotional labels annotated by humans. However, various emotion databases collect perceptual evaluations in different ways. For instance, the IEMOCAP dataset uses video clips with sound for annotators to provide their emotional perceptions, whereas the largest English emotion dataset, MSP-PODCAST, provides only speech for raters to choose the emotional ratings. Nevertheless, using speech as input is the standard approach to training SER systems. Therefore, the open question is which elicitation scenarios produce the most effective emotional labels for training SER systems. We comprehensively compare the effectiveness of SER systems trained with labels elicited by different modality stimuli and evaluate the SER systems under various testing conditions. We also introduce an all-inclusive label that combines all labels elicited by the various modalities. We show that using labels elicited by voice-only stimuli for training yields better performance on the test set.
... Inspired by Semantic Space Theory [30], we follow Chou et al. [31] in gathering numerous annotations and computing a distribution-like (soft label) representation, aiming to capture the high-dimensional nature of emotion perception more accurately. Here is one example: let's assume we gather five annotations from five distinct raters for a single sample. ...
Conference Paper
Full-text available
Speech emotion recognition (SER) is an essential technology for human-computer interaction systems. However, a previous study revealed that 80.77% of SER papers yield results that cannot be reproduced on the well-known IEMOCAP dataset. The main reason for these reproducibility challenges is that the database does not provide standard data splits (e.g., train, development, and test sets). Prior papers could define their own partitions, but they did not provide details of the partition or source code for processing it. Therefore, this work aims to make SER open and reproducible to everyone. We develop EMO-SUPERB, short for EMOtion Speech Universal PERformance Benchmark, including a user-friendly codebase that leverages 16 state-of-the-art (SOTA) speech self-supervised learning models for exhaustive evaluation, plus one SOTA SER model, across 6 open-source SER datasets in English and Chinese. We make all resources open-source to facilitate future developments in SER. Researchers can easily upload their systems or datasets to EMO-SUPERB, and we name the project "Open-Emotion".
Article
People’s preferences for the utilitarian outcome in sacrificial moral dilemmas, where a larger group of individuals is saved at the cost of a few, have been argued to be influenced by various factors. Taking expected utility (EU) theory into consideration, we investigate whether the expected effectiveness of actions elucidates certain inconsistencies in moral judgments. Additionally, we explore whether participants’ role in the dilemma, as the executor or as a superior who merely makes a decision that is carried out by a subordinate, influences judgments, a factor generally overlooked by classical EU models. We test these hypotheses using a modified moral dilemma paradigm with a choice between two actions, one highly successful and the other more likely to fail. Both actions are expected to result either in a favorable outcome of saving five individuals by sacrificing one or in an unfavorable outcome of sacrificing five to save one. When the efficient action is anticipated to lead to a favorable outcome, in line with EU models, people almost invariably choose the efficient action. However, in conditions where the EUs associated with efficient and inefficient actions are close to each other, people’s choice of the favored outcome is above chance when they act as agents themselves. We discuss the implications of our results for existing theories of moral judgment.
... This focus on mundane nature settings raises the question of whether greater restorative benefits could be achieved through exposure to awe-evoking kinds of nature. Awe is often characterized as a distinct positive emotional state (Cowen & Keltner, 2021) that involves feelings of wonder and amazement and directs individuals' attention away from the self towards their surrounding environment (Keltner & Haidt, 2003). Feelings of awe typically arise in response to vast natural environments such as expansive mountain ranges or majestic canyons. ...
Article
Exposure to nature can enhance mental well‐being, making nature‐based interventions promising for the treatment and prevention of mental health problems like depression. Given the decreased self‐focus and sense of self‐diminishment associated with awe, the present study investigated the impact of exposure to awe‐evoking nature on two key risk and maintenance factors of depression, repetitive negative thinking (RNT) and dampening of positive feelings, and on subjective happiness. In a randomized controlled trial, we tested the effects of exposure to awe‐evoking nature clips through a 1‐week intervention, consisting of watching a 1‐min clip on a daily basis of either awe‐evoking (high-awe group, n = 108) or more mundane nature scenes (low-awe group, n = 105). Before, immediately after (post‐intervention) and 1 week after the intervention (follow‐up), participants completed self‐report scales probing RNT, dampening, and subjective happiness. Results indicated significant decreases in these outcomes at post‐intervention and follow‐up in both groups. We discuss study limitations, touch upon future research ideas, and reflect upon the role of nature for clinical applications.
... One approach to account for the variability in emotional experience is semantic space models (Cowen & Keltner, 2021), which uses computational methods to capture systematic variation in emotional behaviors and experiences. The dimensionality of a semantic space refers to the number of distinct varieties of emotion represented within a response modality, which essentially captures the cultural conceptualization and subjectivity of emotional experiences and expressions. ...
Conference Paper
Full-text available
Emotion space models are frameworks that represent emotions in a multidimensional space, providing a structured way to understand and analyze the complex landscape of human emotions. However, the dimensional representation of emotions is still debated. In this work, we probe the higher-dimensional space constituted by emotion labeling done by participants from India on multimedia stimuli. Our approach formalizes the study of emotion as the investigation of representational state spaces capturing semantic variation in emotion-related responses (including experience and expression, as well as associated physiology, cognition, and motivation). We created a high-dimensional space of participants' emotional ratings to represent emotional stimuli. Using a prominent dimensionality reduction technique, t-distributed stochastic neighbour embedding (t-SNE), we projected this higher-dimensional space into two dimensions. We observed that the structure of emotional categories, and the clusters these categories form, is similar to Russell's circumplex model. The transition from blended, complex emotional states to discrete emotional states projects out from the centre, with discrete emotional states occurring in the periphery. Using advanced visualization and multidimensional techniques, we show a continuity of emotional experiences, complementing existing knowledge based on V-A space with information on how transitions among emotion categories occur.
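A minimal sketch of the projection step this abstract describes, using scikit-learn's t-SNE on a synthetic ratings matrix; the matrix shape and perplexity are illustrative assumptions, not the study's settings:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(2)
# Synthetic stand-in for a stimuli-by-emotion-ratings matrix;
# real data would be participants' ratings of multimedia stimuli.
ratings = rng.random((300, 30))

# Project the high-dimensional emotion space into 2-D for visualization.
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(ratings)
print(embedding.shape)  # -> (300, 2)
```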
... Furthermore, we investigated if any difference between the 'fear' and 'no fear' groups arises in the communication between the ventral visual stream and other brain areas. In fact, multiple theoretical perspectives suggest that subjective experience may be linked to a broad system of brain regions lying outside the ventral visual stream [4,5,7,13-16]. Such regions include the amygdala [13,17], the hippocampus, the anterior cingulate cortex, the insula and various subregions of the prefrontal cortex [18]. ...
Article
Full-text available
It has been reported that threatening and non-threatening visual stimuli can be distinguished based on the multi-voxel patterns of haemodynamic activity in the human ventral visual stream. Do these findings mean that there may be evolutionarily hardwired mechanisms within early perception, for the fast and automatic detection of threat, and maybe even for the generation of the subjective experience of fear? In this human neuroimaging study, we presented participants ('fear' group: N = 30; 'no fear' group: N = 30) with 2700 images of animals that could trigger subjective fear or not as a function of the individual's idiosyncratic ‘fear profiles’ (i.e. fear ratings of animals reported by a given participant). We provide evidence that the ventral visual stream may represent affectively neutral visual features that are statistically associated with fear ratings of participants, without representing the subjective experience of fear itself. More specifically, we show that patterns of haemodynamic activity predictive of a specific ‘fear profile’ can be observed in the ventral visual stream whether a participant reports being afraid of the stimuli or not. Further, we found that the multivariate information synchronization between ventral visual areas and prefrontal regions distinguished participants who reported being subjectively afraid of the stimuli from those who did not. Together, these findings support the view that the subjective experience of fear may depend on the relevant visual information triggering implicit metacognitive mechanisms in the prefrontal cortex. This article is part of the theme issue ‘Sensing and feeling: an integrative approach to sensory processing and emotional experience’.
... Approaches to emotion annotation are predominantly based on Ekman's theory of universal emotions [11], including Plutchik's wheel of emotions [41] and SenticNet [5], although recent studies have shown promise in expanding these models [8]. Studies of emotion in literary texts face challenges inherent in emotion annotation, including the volatility and overlap of emotions; it is a task with large disagreements even between human annotators [38], and one lacking ground truth due to the subjective nature of emotions. ...
Conference Paper
This paper presents the outcomes of a study that leverages emotion annotation to investigate the narrative dynamics in novels. We use two lexicon-based models, VADER sentiment annotation and a novel annotation of 8 primary NRC emotions, comparing them in terms of overlaps and assessing the dynamics of the sentiment and emotional arcs resulting from these two approaches. Our results indicate that whereas the simple valence annotation does not capture the intricate nature of narrative emotions, the two types of narrative profiling exhibit evident correlations. Additionally, we manually annotate selected emotion arcs to comprehensively analyse the resource.
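To illustrate the kind of lexicon-based sentiment arc this paper builds on, here is a minimal sketch using the vaderSentiment package (pip install vaderSentiment); the fixed-size word windowing is an assumption, not the authors' exact procedure:

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def sentiment_arc(text, window=200):
    """Compound VADER score per fixed-size word window across a text."""
    words = text.split()
    chunks = [" ".join(words[i:i + window])
              for i in range(0, len(words), window)]
    return [analyzer.polarity_scores(chunk)["compound"] for chunk in chunks]

# Toy example; a real study would pass in the full text of a novel.
print(sentiment_arc("I love this. " * 300)[:3])
```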
... We use the same downstream model architecture as the SER task in the S3PRL toolkit [18], using three Conv1d layers, a self-attention pooling, and two linear layers. To capture the high-dimensional nature of emotion [32], we formulate emotion recognition as a multi-label classification problem. We first transform the emotion annotations into an emotion distribution by frequency and then apply label smoothing following [33], using a smoothing parameter of 0.05 to obtain a soft label. ...
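A minimal PyTorch sketch of the downstream head described in this excerpt (three Conv1d layers, self-attention pooling, two linear layers over upstream SSL features); the hidden sizes, kernel sizes, and pooling details are assumptions for illustration, not the S3PRL implementation:

```python
import torch
import torch.nn as nn

class SERHead(nn.Module):
    """Illustrative multi-label SER head over frozen upstream features."""

    def __init__(self, feat_dim=1024, hidden=256, num_classes=6):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.attn = nn.Linear(hidden, 1)  # self-attention pooling scores
        self.out = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):                  # x: (batch, time, feat_dim)
        h = self.convs(x.transpose(1, 2)).transpose(1, 2)  # (B, T, hidden)
        w = torch.softmax(self.attn(h), dim=1)             # (B, T, 1)
        pooled = (w * h).sum(dim=1)                        # (B, hidden)
        return self.out(pooled)            # multi-label logits

logits = SERHead()(torch.randn(2, 120, 1024))
print(logits.shape)  # torch.Size([2, 6]); train with BCEWithLogitsLoss
```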
Preprint
Full-text available
The rapid growth of Speech Emotion Recognition (SER) has diverse global applications, from improving human-computer interactions to aiding mental health diagnostics. However, SER models might contain social bias toward gender, leading to unfair outcomes. This study analyzes gender bias in SER models trained with Self-Supervised Learning (SSL) at scale, exploring the factors that influence it. SSL-based SER models are chosen for their cutting-edge performance. Ours is pioneering research on gender bias in SER from both upstream-model and data perspectives. Our findings reveal that females exhibit slightly higher overall SER performance than males. Modified CPC and XLS-R, two well-known SSL models, notably exhibit significant bias. Moreover, models trained with Mandarin datasets display a pronounced bias toward valence. Lastly, we find that gender-wise differences in the emotion distribution of training data significantly affect gender bias, while the upstream model representation has a limited impact.
... As the popularity of English has increased, the ability to teach English has gradually improved and students' English standards have also improved significantly [1]. Writing occupies an important place in the English learning process, reflecting students' ability to use language and express themselves in writing. ...
Article
Full-text available
Nowadays, major enterprises and schools vigorously promote the combination of information technology and subject teaching, among which automatic grading technology is widely used. To improve the efficiency of English composition correction, this study proposes an unsupervised semantic space model for English composition analysis: it uses a Hierarchical Topic Tree Hybrid Semantic Space to achieve topic representation and clustering in English compositions, adopts a feature dimensionality reduction method to select a set of semantic features that optimize the feature semantic space, and combines a tangent analysis algorithm to achieve intelligent scoring of English compositions. The experimental data show that the accuracy and F-value of the semantic-space-based English composition tangent analysis method are significantly improved, and the Pearson correlation coefficient between the unsupervised semantic space model and teachers' manual grading is 0.8936. The results show that the unsupervised semantic space English composition model has a higher accuracy rate, is more widely applicable, and can efficiently complete the English composition grading and review task.
... The second misconception is that love is an emotion (similar to fear, anger, sadness, surprise, disgust, and joy, for example). Lay people typically consider love to be an emotion [20,21] and so do some scientists [22,23]. Although it depends on how emotions are defined, there are several reasons to assume that love is not an emotion. ...
Article
Full-text available
Scientific research on romantic love has been relatively sparse but is becoming more prevalent, as it should. Unfortunately, several misconceptions about romantic love are becoming entrenched in the popular media and/or the scientific community, which hampers progress. Therefore, I refute six misconceptions about romantic love in this article. I explain why (1) romantic love is not necessarily dyadic, social, or interpersonal, (2) love is not an emotion, (3) romantic love does not just have positive effects, (4) romantic love is not uncontrollable, (5) there is no dedicated love brain region, neurotransmitter, or hormone, and (6) pharmacological manipulation of romantic love is not near. To increase progress in our scientific understanding of romantic love, I recommend that we study the intrapersonal aspects of romantic love including the intensity of love, that we focus our research questions and designs using a component process model of romantic love, and that we distinguish hypotheses and suggestions from empirical findings when citing previous work.
... The results demonstrate that establishing distinct self-states through verbal instructions tied to spatial positions leads to diverse bodily expressions along affective dimensions. These findings could serve as inspiration for ongoing endeavors aimed at integrating three classes of emotion theories: basic emotion theories, constructionist theories, and appraisal theories [46,47]. The methodology employed in this study offers a valuable approach to investigate emotions within a framework that incorporates various psychological components, such as semantic concepts, appraisals, and bodily expressions. ...
Article
Full-text available
The concept of self-states is a recurring theme in various psychotherapeutic and counseling methodologies. However, the predominantly unconscious nature of these self-states presents two challenges. Firstly, it renders the process of working with them susceptible to biases and therapeutic suggestions. Secondly, there is skepticism regarding the observability and differentiation of self-states beyond subjective experiences. In this study, we demonstrate the feasibility of eliciting self-states from clients and objectively distinguishing these evoked self-states through the lens of neutral observers. The self-state constellation method, utilized as an embodied approach, facilitated the activation of diverse self-states. External observers then assessed the nonverbal manifestations of affect along three primary dimensions: emotional valence, arousal, and dominance. Our findings indicate that external observers could reliably discern and differentiate individual self-states based on the bodily displayed valence and dominance. However, the ability to distinguish states based on displayed arousal was not evident. Importantly, this distinctiveness of various self-states was not limited to specific individuals but extended across the entire recording sample. Therefore, within the framework of the self-state constellation method, it is evident that individual self-states can be intentionally evoked, and these states can be objectively differentiated beyond the subjective experiences of the client.
... While amusement, love, joy, fear, and contempt are the predominant emotions across our stimuli, fear is more challenging to experience solely through the auditory modality, while contempt is more easily elicited in the audio version of the movie. These results are consistent with previous studies demonstrating distinct emotion taxonomies associated with different sensory modalities to convey affective states (82). ...
Article
Full-text available
Emotion and perception are tightly intertwined, as affective experiences often arise from the appraisal of sensory information. Nonetheless, whether the brain encodes emotional instances using a sensory-specific code or in a more abstract manner is unclear. Here, we answer this question by measuring the association between emotion ratings collected during a unisensory or multisensory presentation of a full-length movie and brain activity recorded in typically developed, congenitally blind and congenitally deaf participants. Emotional instances are encoded in a vast network encompassing sensory, prefrontal, and temporal cortices. Within this network, the ventromedial prefrontal cortex stores a categorical representation of emotion independent of modality and previous sensory experience, and the posterior superior temporal cortex maps the valence dimension using an abstract code. Sensory experience more than modality affects how the brain organizes emotional information outside supramodal regions, suggesting the existence of a scaffold for the representation of emotional states where sensory inputs during development shape its functioning.
... The data were recorded at home via the speakers' microphones. The full Hume-Prosody dataset consists of 48 dimensions of emotional expression and is based on the semantic-space model for emotion [8]. For this Sub-Challenge, nine emotional classes have been selected due to their more balanced distribution across the valence-arousal space: 'Anger', 'Boredom', 'Calmness', 'Concentration', 'Determination', 'Excitement', 'Interest', 'Sadness', and 'Tiredness'. ...
... Second, there is an approach arguing for the dimensionality of all emotional experience: all emotions vary at least according to their degree of valence (positive/negative) and activity (Russell, 1980; Russell and Barrett, 1999). Third, it is also possible that emotions are experienced in clusters (e.g., a variety of happiness emotions might be clustered with joy, while anxiety, fear, and horror are eventually clustered together) (e.g., Cowen et al., 2019; Cowen and Keltner, 2021). Despite the differences between these groups of theories, there is a common, although sometimes only implicitly shared, underlying principle that emotions have some dimensional structure, both in the case of basic emotions (which can be structured according to valence and activity) and in the case of emotion clusters (which contain some dimensional structure as well). ...
... It is crucial to understand 'how' something has been said. Classifying a speech segment as one type of emotion oversimplifies the way humans perceive language [6]. On many occasions and in various contexts, different groups of people perceive the same emotion differently. ...
Preprint
Full-text available
Human emotion understanding is pivotal in making conversational technology mainstream. We view speech emotion understanding as a perception task, which is a more realistic setting. Across varying contexts (languages, demographics, etc.), different shares of people perceive the same speech segment differently, yielding non-unanimous emotion labels. As part of the EMotion Share track of the ACM Multimedia 2023 Computational Paralinguistics ChallengE (ComParE), we leverage their rich dataset of multilingual speakers and the multi-label regression target of 'emotion share', the share of people perceiving each emotion. We demonstrate that the training scheme of different foundation models dictates their effectiveness for tasks beyond speech recognition, especially for non-semantic speech tasks like emotion understanding. This is a very complex task due to multilingual speakers, variability in the target labels, and inherent imbalance in the regression dataset. Our results show that HuBERT-Large with a self-attention-based lightweight sequence model provides a 4.6% improvement over the reported baseline.
Article
Full-text available
Humans no doubt use language to communicate about their emotional experiences, but does language in turn help humans understand emotions, or is language just a vehicle of communication? This study used a form of artificial intelligence (AI) known as large language models (LLMs) to assess whether language-based representations of emotion causally contribute to the AI's ability to generate inferences about the emotional meaning of novel situations. Fourteen attributes of human emotion concept representation were found to be represented by distinct artificial neuron populations in the LLM. By manipulating these attribute-related neurons, we demonstrated the role of emotion concept knowledge in generative emotion inference. The attribute-specific performance deterioration was related to the importance of different attributes in human mental space. Our findings provide a proof of concept that even an LLM can learn about emotions in the absence of sensory-motor representations and highlight the contribution of language-derived emotion-concept knowledge to emotion inference.
Conference Paper
Full-text available
Training speech emotion recognition (SER) requires human-annotated labels and speech data. However, emotion perception is complex, and pre-defined emotion categories are not enough for annotators to describe their perceptions. Devoted annotators will use natural language rather than traditional emotion labels when annotating data, resulting in typed descriptions (e.g., "Slightly Angry, calm" to convey the intensity of emotion). While these descriptions are highly valuable, SER models, designed as classification models, cannot process natural language and thus discard them. To leverage the valuable typed descriptions, we propose a novel way to prompt ChatGPT to mimic annotators, comprehend natural-language typed descriptions, and subsequently adjust the given label of the input data. By utilizing labels generated by ChatGPT, we consistently achieve an average relative gain of 3.08% across all settings using 15 speech self-supervised learning models on SUPERB, which offers a potential way to integrate the power of LLMs to improve the performance of SER.
Article
Full-text available
What does it mean to feel good? Is our experience of gazing in awe at a majestic mountain fundamentally different than erupting with triumph when our favorite team wins the championship? Here, we use a semantic space approach to test which positive emotional experiences are distinct from each other based on in-depth personal narratives of experiences involving 22 positive emotions (n = 165; 3,592 emotional events). A bottom-up computational analysis was applied to the transcribed text, with unsupervised clustering employed to maximize internal granular consistency (i.e., the clusters being maximally different and maximally internally homogeneous). The analysis yielded four emotions that map onto distinct clusters of subjective experiences: amusement, interest, lust, and tenderness. The application of the semantic space approach to in-depth personal accounts yields a nuanced understanding of positive emotional experiences. Moreover, this analytical method allows for the bottom-up development of emotion taxonomies, showcasing its potential for broader applications in the study of subjective experiences.
Article
Full-text available
Core to understanding emotion are subjective experiences and their expression in facial behavior. Past studies have largely focused on six emotions and prototypical facial poses, reflecting limitations in scale and narrow assumptions about the variety of emotions and their patterns of expression. We examine 45,231 facial reactions to 2,185 evocative videos, largely in North America, Europe, and Japan, collecting participants’ self-reported experiences in English or Japanese and manual and automated annotations of facial movement. Guided by Semantic Space Theory, we uncover 21 dimensions of emotion in the self-reported experiences of participants in Japan, the United States, and Western Europe, and considerable cross-cultural similarities in experience. Facial expressions predict at least 12 dimensions of experience, despite massive individual differences in experience. We find considerable cross-cultural convergence in the facial actions involved in the expression of emotion, and culture-specific display tendencies—many facial movements differ in intensity in Japan compared to the U.S./Canada and Europe but represent similar experiences. These results quantitatively detail that people in dramatically different cultures experience and express emotion in a high-dimensional, categorical, and similar but complex fashion.
Article
Full-text available
Depictions of sadness are commonplace, and here we aimed to discover and catalogue the complex and nuanced ways that people interpret sad facial expressions. We used a rigorous qualitative methodology to build a thematic framework from 3,243 open-ended responses from 41 people who participated in 2020 and described what they thought sad expressors in 80 images were thinking, feeling, and/or intending to do. Face images were sourced from a novel set of naturalistic expressions (ANU Real Facial Expression Database), as well as a traditional posed expression database (Radboud Faces Database). The resultant framework revealed clear themes around the expressors’ thoughts (e.g., acceptance, contemplation, disbelief), social needs (e.g., social support or withdrawal), social behaviours/intentions (e.g., mock or manipulate), and the precipitating events (e.g., social or romantic conflict). Expressions that were perceived as genuine were more frequently described as thinking deeply, reflecting, or feeling regretful, whereas those perceived as posed were more frequently described as exaggerated, overamplified, or dramatised. Overall, findings highlight that facial expressions — even with high levels of consensus about the emotion category they belong to — are interpreted in nuanced and complex ways that emphasise their role as other-oriented social tools, and convey semantically related emotion categories that share smooth gradients with one another. Our novel thematic framework also provides an important foundation for future work aimed at understanding variation in the social functions of sadness, including exploring potential differences in interpretations across cultural settings.
Article
Full-text available
Visual sexual stimuli (VSS) are often used to induce affective responses in experimental research, but can also be useful in the assessment and treatment of sexual disorders (e.g., sexual arousal dysfunctions, paraphilic disorders, compulsive sexual behaviors). This systematic literature review of standardized sets containing VSS was conducted by searching electronic databases (PsycINFO, PubMed, Scopus, Web of Science) from January 1999 to December 2022 for specific keywords [("picture set" OR "picture database" OR "video set" OR "video database" OR "visual set" OR "visual database") AND ("erotic stimuli" OR "sexual stimuli" OR "explicit erotic stimuli" OR "explicit sexual stimuli")]. Selected sets were narratively summarized according to VSS (modality, duration, explicitness, shown sexes, sexual practices, physical properties, emotion models, affective ratings) and participants’ characteristics (gender, sexual orientation and sexual preferences, cultural and ethnic diversity). Among the 20 sets included, researchers can select from ~ 1,390 VSS (85.6% images, 14.4% videos). Most sets contain VSS of opposite- and some of same-sex couples, but rarely display diverse sexual practices. Although sexual orientation and preferences strongly influence the evaluation of VSS, little consideration of both factors has been given. There was little representation of historically underrepresented cultural and ethnic groups. Therefore, our review suggests limitations and room for improvement related to the representation of gender, sexual orientation, sexual preferences, and especially cultural and ethnic diversity. Perceived shortcomings in experimental research using VSS are highlighted, and recommendations are discussed for representative stimuli for conducting and evaluating sexual affective responses in laboratory and clinical contexts while increasing the replicability of such findings.
Article
Full-text available
Emoticons and facial emojis are ubiquitous in contemporary digital communication, where it has been proposed that they make up for the lack of social information from real faces. In this paper, I construe them as cultural artifacts that exploit the neurocognitive mechanisms for face perception. Building on a step-by-step comparison of psychological evidence on the perception of faces vis-à-vis the perception of emoticons/emojis, I assess to what extent they do effectively vicariate real faces with respect to the following four domains: (1) the expression of emotions, (2) the cultural norms for expressing emotions, (3) conveying non-affective social information, and (4) attention prioritization.
Article
Despite technical progress, automatic systems aimed at “decoding” a subject’s affective states based on objective measures, such as patterns of facial movements or neural activity, are undermined by intricate epistemological and theoretical issues. Most of these systems rely on some principles from Paul Ekman’s research on emotion and his taxonomy of the “Canonical Six” emotion categories. However, there is a growing consensus in affective science that these principles and categories require updating or even rejection. In this chapter, I illustrate some of these issues and discuss the risks that they may lead to mis-decoding affective states.
Article
Cross-cultural studies of the meaning of facial expressions have largely focused on judgments of small sets of stereotypical images by small numbers of people. Here, we used large-scale data collection and machine learning to map what facial expressions convey in six countries. Using a mimicry paradigm, 5,833 participants formed facial expressions found in 4,659 naturalistic images, resulting in 423,193 participant-generated facial expressions. In their own language, participants also rated each expression in terms of 48 emotions and mental states. A deep neural network tasked with predicting the culture-specific meanings people attributed to facial movements while ignoring physical appearance and context discovered 28 distinct dimensions of facial expression, with 21 dimensions showing strong evidence of universality and the remainder showing varying degrees of cultural specificity. These results capture the underlying dimensions of the meanings of facial expressions within and across cultures in unprecedented detail.
Article
Full-text available
We examined the role of educator perceptions of school leader emotion regulation (ER) and emotional support (ES) in educator well-being during a typical year and during the COVID-19 pandemic. Based on emotion contagion theory, leaders’ (in)ability to regulate their own emotions may trigger ripple effects of positive or negative emotions throughout their organizations, impacting staff well-being. Additionally, based on conservation of resources theory, when experiencing psychologically taxing events, skillful emotional support provided by leaders may help to replenish staff’s depleted psychological resources, promoting staff well-being. In two national studies, a cross-sectional study (Study 1, N = 4,847) and a two-wave study (Study 2, N = 2,749), we tested the association between United States preK-12 educator perceptions of school leaders’ ER and ES with educator well-being before and during the COVID-19 pandemic, employing structural equation modeling and multilevel modeling. In Studies 1 and 2, educator reports of their leaders’ ER and ES skills predicted greater educator well-being, including higher positive affect and job satisfaction and lower emotional exhaustion and turnover intentions. In moderation analyses, perceived leader ER predicted well-being about equally among educators facing severe versus mild health impacts from COVID-19. In contrast, perceived leader ES was more strongly associated with educator well-being for some outcomes in those severely versus mildly impacted by COVID-19 illness and death. Leader ER played a role in the well-being of everyone, whereas leader ES was more predictive of well-being for those severely impacted by a crisis. Regarding implications for policy and practice, efforts to promote well-being among educators may be enhanced when combined with efforts to develop school leaders’ ER and ES skills, especially in times of crisis. Accordingly, school districts should consider the value of investing in systematic, evidence-based emotion skills training for their leaders.
Article
Full-text available
3D animators commonly employ facial expressions to convey emotions, yet this method has limitations in fostering audience immersion. Existing guidelines prioritize storytelling, offering limited insight into character construction for immersive experiences. Our investigation seeks to enhance the lifelike movement of animated characters, focusing on the moments where audience engagement matters most. This paper presents empirical findings highlighting the importance of facial and body movements in authentically portraying animated characters’ emotions. Drawing on Shapiro’s 15 controllers for character animation, we conducted an empirical study examining the distinct elements associated with each emotion. Data collection via Likert-scale assessments determined the average agreement for each controller concerning specific emotions. Our results indicate that different emotions demand unique controllers for optimal realism. Although facial and gaze controllers are integral to all emotions, their intensity differs across emotional states. In response, we propose a preliminary model rooted in basic emotions, offering guidance to animators crafting realistic 3D characters. This model addresses the nuanced requirements of diverse emotions, providing a valuable resource for those seeking to enhance the authenticity of animated character expressions.
Conference Paper
Full-text available
Establishing a psychologically safe work environment is crucial for leading a positive and practical agile retrospective, and emotions are closely intertwined with that psychological safety. Capturing them at the right time helps to detect online behaviours, harmful or favourable, that can hinder or facilitate the software development cycle and demoralize or motivate the team in a software company. This study aims to identify the emotions that appear during online agile retrospectives and asks the research question: how often do different emotions recur during the online agile retrospective? We conducted a multiple case study with two software companies, analyzing three recorded online retrospective sessions to capture the emotions expressed. Our findings show that eighteen emotions appear in the agile retrospective; among the most frequently repeated are approval, realization, excitement, relief, disappointment, confusion, optimism, and disapproval.
Article
Full-text available
The ability of psychotherapists to recognize emotions is positively associated with both the process and the outcome of psychotherapy. It would therefore be desirable for psychology curricula to strengthen this ability. A correlational study was conducted to assess whether studying psychology is associated with an increase in the ability to recognize emotions, measured with a Spanish version of the Emotional Intelligence Quiz. This instrument assesses emotion recognition in twenty photographs showing facial and body expressions. The sample consisted of 216 psychology students (163 women, 46 men, and 5 of non-binary gender) from a private Chilean university. Wide variation in the recognition of different emotions was observed, and scores were higher in the second and third years than in the fourth and fifth years, suggesting that studying psychology does not improve the ability to recognize emotions. Emotion recognition levels in this sample were also lower than those reported in other countries. Possible interpretations are discussed, and interventions to improve the ability to recognize emotions are suggested. Keywords: emotion recognition ability; psychology students.
Article
Full-text available
Emotion understanding (EU) ability is associated with healthy social functioning and psychological well-being. Across three studies, we develop and present validity evidence for the Core Relational Themes of Emotions (CORE) Test. The test measures people’s ability to identify relational themes underlying 19 positive and negative emotions. Relational themes are consistencies in the meaning people assign to emotional experiences. In Study 1, we developed and refined the test items employing a literature review, expert panel, and confusion matrix with a demographically diverse sample. Correctness criteria were determined using theory and prior research, and a progressive (degrees of correctness) paradigm was utilized to score the test. In Study 2, the CORE demonstrated high internal consistency and a confirmatory factor analysis supported the unidimensional factor structure. The CORE showed evidence of convergence with established EU ability measures and divergent relationships with verbal intelligence and demographic characteristics, supporting its construct validity. Also, the CORE was associated with less relational conflict. In Study 3, the CORE was associated with more adaptive and less maladaptive coping and higher well-being on multiple indicators. A set of effects remained, accounting for variance from a widely used EU test, supporting the CORE’s incremental validity. Theoretical and methodological contributions are discussed.
Article
Full-text available
Habitual expressive suppression (i.e., a tendency to inhibit the outward display of one's emotions; hereafter suppression) is often conceptualized as a maladaptive emotion regulation strategy. Yet, is this equally true for suppression of positive and of negative emotions? Across three studies and seven samples (total N > 1300 people) collected in two culturally distinct regions (i.e., Taiwan and the US), we examined the separability and distinct well-being effects of suppressing positive vs. negative emotions. Results consistently showed that (a) people suppressed their positive (vs. negative) emotions less, (b) the construct of suppression of positive (vs. negative) emotions was conceptually farther away from that of suppression of emotions in general, (c) suppression of positive and of negative emotions were only moderately correlated, and (d) only suppression of positive, but not negative, emotions, predicted lower well-being. An internal meta-analysis (k = 52 effect sizes) showed that these associations were robust to the inclusion of age, gender, and region as covariates. Future research may further probe the respective links between suppression of positive and of negative emotions and well-being across more cultural regions and across the life-span.
Article
The proper measurement of emotion is vital to understanding the relationship between emotional expression in social media and other factors, such as online information sharing. This work develops a standardized annotation scheme for quantifying emotions in social media using recent emotion theory and research. Human annotators assessed both social media posts and their own reactions to the posts’ content on scales of 0 to 100 for each of 20 (Study 1) and 23 (Study 2) emotions. For Study 1, we analyzed English-language posts from Twitter (N = 244) and YouTube (N = 50). Associations between emotion ratings and text-based measures (LIWC, VADER, EmoLex, NRC-EIL, Emotionality) demonstrated convergent and discriminant validity. In Study 2, we tested an expanded version of the scheme in-country, in-language, on Polish (N = 3648) and Lithuanian (N = 1934) multimedia Facebook posts. While the correlations were lower than with English, patterns of convergent and discriminant validity with EmoLex and NRC-EIL still held. Coder reliability was strong across samples, with intraclass correlations of .80 or higher for 10 different emotions in Study 1 and 16 different emotions in Study 2. This research improves the measurement of emotions in social media to include more dimensions, multimedia, and context compared to prior schemes.
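Coder reliability of the kind reported here is typically quantified with an intraclass correlation. The sketch below (simulated ratings; the studies' exact ICC specification is not stated in the abstract) computes one common variant, the two-way random-effects ICC for mean ratings, ICC(2,k).

```python
# A minimal sketch: ICC(2,k) for a ratings matrix X of shape (n_posts, k_coders)
# holding one emotion's 0-100 ratings.
import numpy as np

def icc2k(X: np.ndarray) -> float:
    n, k = X.shape
    grand = X.mean()
    row_means, col_means = X.mean(axis=1), X.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-posts MS
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-coders MS
    sse = ((X - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (msc - mse) / n)

rng = np.random.default_rng(0)
X = np.clip(rng.normal(50, 20, size=(244, 5)), 0, 100)    # hypothetical: 244 posts, 5 coders
print(f"ICC(2,k) = {icc2k(X):.2f}")
```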
Article
People express their own emotions and perceive others’ emotions via a variety of channels, including facial movements, body gestures, vocal prosody, and language. Studying these channels of affective behavior offers insight into both the experience and perception of emotion. Prior research has predominantly focused on studying individual channels of affective behavior in isolation using tightly controlled, non-naturalistic experiments. This approach limits our understanding of emotion in more naturalistic contexts where different channels of information tend to interact. Traditional methods struggle to address this limitation: manually annotating behavior is time-consuming, making it infeasible to do at large scale; manually selecting and manipulating stimuli based on hypotheses may neglect unanticipated features, potentially generating biased conclusions; and common linear modeling approaches cannot fully capture the complex, nonlinear, and interactive nature of real-life affective processes. In this methodology review, we describe how deep learning can be applied to address these challenges to advance a more naturalistic affective science. First, we describe current practices in affective research and explain why existing methods face challenges in revealing a more naturalistic understanding of emotion. Second, we introduce deep learning approaches and explain how they can be applied to tackle three main challenges: quantifying naturalistic behaviors, selecting and manipulating naturalistic stimuli, and modeling naturalistic affective processes. Finally, we describe the limitations of these deep learning methods, and how these limitations might be avoided or mitigated. By detailing the promise and the peril of deep learning, this review aims to pave the way for a more naturalistic affective science.
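To make the first of these challenges concrete, the sketch below (hypothetical frames; any modern vision backbone would serve) shows the simplest version of using a pretrained deep network to quantify naturalistic behavior: extracting frame-level embeddings that downstream affect models can consume.

```python
# A minimal sketch: frame embeddings from a pretrained CNN (assumed inputs).
import torch
from torch import nn
from torchvision.models import resnet18

backbone = resnet18(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()              # drop the classifier head, keep features
backbone.eval()

frames = torch.randn(16, 3, 224, 224)    # hypothetical batch of video frames
with torch.no_grad():
    embeddings = backbone(frames)        # 16 x 512 feature vectors
print(embeddings.shape)
```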
Article
Full-text available
Emotion perception is a primary facet of Emotional Intelligence (EI) and the underpinning of interpersonal communication. In this study, we examined meso-expressions: the everyday, moderate-intensity emotions communicated through the face, voice, and body. We theoretically distinguished meso-expressions from other well-known emotion research paradigms (i.e., macro-expressions and micro-expressions). In Study 1, we demonstrated that people can reliably discriminate between meso-expressions, and we created a corpus of 914 unique video displays of meso-expressions across a race- and gender-diverse set of expressors. In Study 2, we developed a novel video-based assessment of emotion perception ability: the Meso-Expression Test (MET). In this study, we found that the MET is psychometrically valid and demonstrated measurement equivalence across Asian, Black, Hispanic, and White perceiver groups and across men and women. In Study 3, we examined the construct validity of the MET and showed that it converged with other well-known measures of emotion perception and diverged from cognitive ability. Finally, in Study 4, we showed that the MET is positively related to important psychosocial outcomes, including social well-being, social connectedness, and empathic concern, and is negatively related to alexithymia, stress, depression, anxiety, and adverse social interactions. We conclude with a discussion focused on the implications of our findings for EI ability research and the practical applications of the MET.
Article
Full-text available
Recent work on natural categories suggests a framework for conceptualizing people's knowledge about emotions. Categories of natural objects or events, including emotions, are formed as a result of repeated experiences and become organized around prototypes (Rosch, 1978); the interrelated set of emotion categories becomes organized within an abstract-to-concrete hierarchy. At the basic level of the emotion hierarchy one finds the handful of concepts (love, joy, anger, sadness, fear, and perhaps, surprise) most useful for making everyday distinctions among emotions, and these overlap substantially with the examples mentioned most readily when people are asked to name emotions (Fehr & Russell, 1984), with the emotions children learn to name first (Bretherton & Beeghly, 1982), and with what theorists have called basic or primary emotions. This article reports two studies, one exploring the hierarchical organization of emotion concepts and one specifying the prototypes, or scripts, of five basic emotions, and it shows how the prototype approach might be used in the future to investigate the processing of information about emotional events, cross-cultural differences in emotion concepts, and the development of emotion knowledge.
Article
Full-text available
Understanding the degree to which human facial expressions co-vary with specific social contexts across cultures is central to the theory that emotions enable adaptive responses to important challenges and opportunities. Concrete evidence linking social context to specific facial expressions is sparse and is largely based on survey-based approaches, which are often constrained by language and small sample sizes. Here, by applying machine-learning methods to real-world, dynamic behaviour, we ascertain whether naturalistic social contexts (for example, weddings or sporting competitions) are associated with specific facial expressions across different cultures. In two experiments using deep neural networks, we examined the extent to which 16 types of facial expression occurred systematically in thousands of contexts in 6 million videos from 144 countries. We found that each kind of facial expression had distinct associations with a set of contexts that were 70% preserved across 12 world regions. Consistent with these associations, regions varied in how frequently different facial expressions were produced as a function of which contexts were most salient. Our results reveal fine-grained patterns in human facial expressions that are preserved across the modern world.
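The preservation figure reported above boils down to correlating expression-context association matrices across regions. The sketch below (simulated counts; region, expression, and context sizes are placeholders) conveys that computation.

```python
# A minimal sketch: cross-region preservation of expression-context associations.
import numpy as np

rng = np.random.default_rng(0)
# counts[region, expression, context]: occurrences of 16 expressions in 300
# contexts for each of 12 regions (hypothetical).
counts = rng.poisson(5.0, size=(12, 16, 300)).astype(float)

rates = counts / counts.sum(axis=1, keepdims=True)  # expression share per context
z = (rates - rates.mean(axis=2, keepdims=True)) / rates.std(axis=2, keepdims=True)

preservation = np.corrcoef(z[0].ravel(), z[1].ravel())[0, 1]
print(f"region 0 vs region 1 preservation: {preservation:.2f}")
```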
Article
Full-text available
Central to the study of emotion is evidence concerning its universality, particularly the degree to which emotional expressions are similar across cultures. Here, we present an approach to studying the universality of emotional expression that rules out cultural contact and circumvents potential biases in survey-based methods: A computational analysis of apparent facial expressions portrayed in artwork created by members of cultures isolated from Western civilization. Using data-driven methods, we find that facial expressions depicted in 63 sculptures from the ancient Americas tend to accord with Western expectations for emotions that unfold in specific social contexts. Ancient American sculptures tend to portray at least five facial expressions in contexts predicted by Westerners, including “pain” in torture, “determination”/“strain” in heavy lifting, “anger” in combat, “elation” in social touch, and “sadness” in defeat, supporting the universality of these expressions.
Article
Full-text available
We experience a rich variety of emotions in daily life, and a fundamental goal of affective neuroscience is to determine how these emotions are represented in the brain. Recent psychological studies have used naturalistic stimuli (e.g., movies) to reveal high-dimensional representational structures of diverse daily-life emotions. However, relatively little is known about how such diverse emotions are represented in the brain, because most affective neuroscience studies have used only a small number of controlled stimuli. To address this, we measured functional MRI to obtain blood-oxygen-level-dependent (BOLD) responses from human subjects while they watched emotion-inducing audiovisual movies over a period of 3 hours. For each one-second movie scene, we annotated the movies with respect to 80 emotions selected based on a wide range of previous emotion literature. By quantifying canonical correlations between the emotion ratings and the BOLD responses, the results suggest that around 25 distinct dimensions (ranging from 18 to 36, depending on the subject) of the emotion ratings contribute to emotion representations in the brain. To demonstrate how the 80 emotion categories were represented on the cortical surface, we visualized a continuous semantic space of emotion representation and mapped it onto the cortical surface. We found that emotion representations shifted from unimodal to transmodal regions across the cortical surface. This study presents a cortical representation of a rich variety of emotion categories, covering many of the emotional experiences of daily life.
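The central analysis named in this abstract, canonical correlation between emotion annotations and BOLD responses, can be sketched as follows (simulated arrays with placeholder sizes; the study's preprocessing, hemodynamic modeling, and significance testing are omitted). A dimension "counts" if its canonical correlation holds up on held-out scenes.

```python
# A minimal sketch: counting reliable canonical dimensions (assumed data).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
ratings = rng.normal(size=(2000, 80))   # hypothetical scenes x 80 emotion ratings
bold = rng.normal(size=(2000, 200))     # hypothetical scenes x cortical parcels

train, test = np.arange(1500), np.arange(1500, 2000)
cca = CCA(n_components=40, max_iter=1000).fit(ratings[train], bold[train])
U, V = cca.transform(ratings[test], bold[test])

r = np.array([np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(U.shape[1])])
print("reliable dimensions:", int((r > 0.1).sum()))   # threshold is illustrative
```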
Article
Full-text available
Central to our subjective lives is the experience of different emotions. Recent behavioral work mapping emotional responses to 2185 videos found that people experience upwards of 27 distinct emotions occupying a high-dimensional space, and that emotion categories, more so than affective dimensions (e.g., valence), organize self-reports of subjective experience. Here, we sought to identify the neural substrates of this high-dimensional space of emotional experience using fMRI responses to all 2185 videos. Our analyses demonstrated that (1) dozens of video-evoked emotions were accurately predicted from fMRI patterns in multiple brain regions with different regional configurations for individual emotions, (2) emotion categories better predicted cortical and subcortical responses than affective dimensions, outperforming visual and semantic covariates in transmodal regions, and (3) emotion-related fMRI responses had a cluster-like organization efficiently characterized by distinct categories. These results support an emerging theory of the high-dimensional emotion space, illuminating its neural foundations distributed across transmodal regions.
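The category-versus-dimension comparison in point (2) amounts to asking which stimulus description better predicts neural responses under cross-validation. A minimal sketch on simulated data follows (array sizes and the single-region setup are assumptions; the study's actual analyses were richer):

```python
# A minimal sketch: do categories or affective dimensions better predict a
# region's response? (Assumed data.)
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
categories = rng.normal(size=(2185, 27))  # hypothetical per-video category ratings
dimensions = rng.normal(size=(2185, 2))   # hypothetical valence and arousal
response = rng.normal(size=2185)          # one region's response to each video

for name, X in [("categories", categories), ("dimensions", dimensions)]:
    model = RidgeCV(alphas=np.logspace(-2, 4, 13))
    r2 = cross_val_score(model, X, response, cv=5, scoring="r2").mean()
    print(f"{name}: cross-validated R^2 = {r2:.3f}")
```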
Article
Full-text available
Basic emotion theory (BET) has been, perhaps, the central narrative in the science of emotion. As Crivelli and Fridlund (J Nonverbal Behav 125:1-34, 2019, this issue) would have it, however, BET is ready to be put to rest, facing "last stands" and "fatal" empirical failures. Nothing could be further from the truth. Crivelli and Fridlund's outdated treatment of BET, narrow focus on facial expressions of six emotions, inattention to robust empirical literatures, and overreliance on singular "critical tests" of a multifaceted theory, undermine their critique and belie the considerable advances guided by basic emotion theory.
Article
Full-text available
Theorists have suggested that emotions are canonical responses to situations ancestrally linked to survival. If so, then emotions may be afforded by features of the sensory environment. However, few computational models describe how combinations of stimulus features evoke different emotions. Here, we develop a convolutional neural network that accurately decodes images into 11 distinct emotion categories. We validate the model using more than 25,000 images and movies and show that image content is sufficient to predict the category and valence of human emotion ratings. In two functional magnetic resonance imaging studies, we demonstrate that patterns of human visual cortex activity encode emotion category–related model output and can decode multiple categories of emotional experience. These results suggest that rich, category-specific visual features can be reliably mapped to distinct emotions, and they are coded in distributed representations within the human visual system.
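A transfer-learning sketch of the decoding idea follows (this is not the authors' training recipe or architecture; the dataset, hyperparameters, and frozen backbone are assumptions): a pretrained CNN fitted with a new 11-way emotion head.

```python
# A minimal sketch: 11-way emotion classification from images (assumed data).
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights="IMAGENET1K_V2")        # pretrained visual features
for p in model.parameters():
    p.requires_grad = False                      # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 11)   # new 11-category emotion head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)             # hypothetical training batch
labels = torch.randint(0, 11, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"batch loss: {loss.item():.3f}")
```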
Article
Full-text available
It is commonly assumed that a person’s emotional state can be readily inferred from his or her facial movements, typically called emotional expressions or facial expressions. This assumption influences legal judgments, policy decisions, national security protocols, and educational practices; guides the diagnosis and treatment of psychiatric illness, as well as the development of commercial applications; and pervades everyday social interactions as well as research in other scientific fields such as artificial intelligence, neuroscience, and computer vision. In this article, we survey examples of this widespread assumption, which we refer to as the common view, and we then examine the scientific evidence that tests this view, focusing on the six most popular emotion categories used by consumers of emotion research: anger, disgust, fear, happiness, sadness, and surprise. The available scientific evidence suggests that people do sometimes smile when happy, frown when sad, scowl when angry, and so on, as proposed by the common view, more than what would be expected by chance. Yet how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation. Furthermore, similar configurations of facial movements variably express instances of more than one emotion category. In fact, a given configuration of facial movements, such as a scowl, often communicates something other than an emotional state. Scientists agree that facial movements convey a range of information and are important for social communication, emotional or otherwise. But our review suggests an urgent need for research that examines how people actually move their faces to express emotions and other social information in the variety of contexts that make up everyday life, as well as careful study of the mechanisms by which people perceive instances of emotion in one another. We make specific research recommendations that will yield a more valid picture of how people move their faces to express emotions and how they infer emotional meaning from facial movements in situations of everyday life. This research is crucial to provide consumers of emotion research with the translational information they require.
Article
Full-text available
What emotions do the face and body express? Guided by new conceptual and quantitative approaches (Cowen, Elfenbein, Laukka, & Keltner, 2018; Cowen & Keltner, 2017, 2018), we explore the taxonomy of emotion recognized in facial-bodily expression. Participants (N = 1,794; 940 female, ages 18-76 years) judged the emotions captured in 1,500 photographs of facial-bodily expression in terms of emotion categories, appraisals, free response, and ecological validity. We find that facial-bodily expressions can reliably signal at least 28 distinct categories of emotion that occur in everyday life. Emotion categories, more so than appraisals such as valence and arousal, organize emotion recognition. However, categories of emotion recognized in naturalistic facial and bodily behavior are not discrete but bridged by smooth gradients that correspond to continuous variations in meaning. Our results support a novel view that emotions occupy a high-dimensional space of categories bridged by smooth gradients of meaning. They offer an approximation of a taxonomy of facial-bodily expressions, visualized within an online interactive map.
Article
Full-text available
An enduring focus in the science of emotion is the question of which psychological states are signaled in expressive behavior. Based on empirical findings from previous studies, we created photographs of facial-bodily expressions of 18 states and presented these to participants in nine cultures. In a well-validated recognition paradigm, participants matched stories of causal antecedents to one of four expressions of the same valence. All 18 facial-bodily expressions were recognized at well above chance levels. We conclude by discussing the methodological shortcomings of our study and the conceptual implications of its findings.
Article
Full-text available
How do the emotions of others affect us? The human anterior cingulate cortex (ACC) responds while experiencing pain in the self and witnessing pain in others, but the underlying cellular mechanisms remain poorly understood. Here we show the rat ACC (area 24) contains neurons responding when a rat experiences pain as triggered by a laser and while witnessing another rat receive footshocks. Most of these neurons do not respond to a fear-conditioned sound (CS). Deactivating this region reduces freezing while witnessing footshocks to others but not while hearing the CS. A decoder trained on spike counts while witnessing footshocks to another rat can decode stimulus intensity both while witnessing pain in another and while experiencing the pain first-hand. Mirror-like neurons thus exist in the ACC that encode the pain of others in a code shared with first-hand pain experience. A smaller population of neurons responded to witnessing footshocks to others and while hearing the CS but not while experiencing laser-triggered pain. These differential responses suggest that the ACC may contain channels that map the distress of another animal onto a mosaic of pain- and fear-sensitive channels in the observer. More experiments are necessary to determine whether painfulness and fearfulness in particular or differences in arousal or salience are responsible for these differential responses.
Article
Full-text available
Central to emotion science is the degree to which categories, such as Awe, or broader affective features, such as Valence, underlie the recognition of emotional expression. To explore the processes by which people recognize emotion from prosody, US and Indian participants were asked to judge the emotion categories or affective features communicated by 2,519 speech samples produced by 100 actors from 5 cultures. With large-scale statistical inference methods, we find that prosody can communicate at least 12 distinct kinds of emotion that are preserved across the 2 cultures. Analyses of the semantic and acoustic structure of the recognition of emotions reveal that emotion categories drive the recognition of emotions more so than affective features, including Valence. In contrast to discrete emotion theories, however, emotion categories are bridged by gradients representing blends of emotions. Our findings, visualized within an interactive map, reveal a complex, high-dimensional space of emotional states recognized cross-culturally in speech prosody.
Article
Full-text available
In this article, we review recent developments in the study of emotional expression within a basic emotion framework. Dozens of new studies find that upwards of 20 emotions are signaled in multimodal and dynamic patterns of expressive behavior. Moving beyond word-to-stimulus matching paradigms, new studies are detailing the more nuanced and complex processes involved in emotion recognition and the structure of how people perceive emotional expression. Finally, we consider new studies documenting contextual influences upon emotion recognition. We conclude by extending these recent findings to questions about emotion-related physiology and the mammalian precursors of human emotion.
Article
Full-text available
Flipping behavior under threat. Could it be that the brain in a state of emergency or under intense threat operates in a fundamentally different way? Seo et al. found that mice paused when serotonin neurons were transiently stimulated in low- or medium-threat environments, but when this same neural population was stimulated in high-threat environments, mice tried to escape. Recordings from these neurons indicated that movement-related neural tuning flipped between environments. Neural activity decreased when movement was initiated in low-threat environments but increased in high-threat environments. Science, this issue p. 538
Article
Full-text available
Previous work suggests that infant cry perception is supported by an evolutionary old neural network consisting of the auditory system, the thalamocingulate circuit, the frontoinsular system, the reward pathway and the medial prefrontal cortex. Furthermore, gender and parenthood have been proposed to modulate processing of infant cries. The present meta-analysis (N = 350) confirmed involvement of the auditory system, the thalamocingulate circuit, the dorsal anterior insula, the pre-supplementary motor area and dorsomedial prefrontal cortex and the inferior frontal gyrus in infant cry perception, but not of the reward pathway. Structures related to motoric processing, possibly supporting the preparation of a parenting response, were also involved. Finally, females (more than males) and parents (more than non-parents) recruited a cortico-limbic sensorimotor integration network, offering a neural explanation for previously observed enhanced processing of infant cries in these sub-groups. Based on the results, an updated neural model of infant cry perception is presented.
Article
Full-text available
Emotional vocalizations are central to human social life. Recent studies have documented that people recognize at least 13 emotions in brief vocalizations. This capacity emerges early in development, is preserved in some form across cultures, and informs how people respond emotionally to music. What is poorly understood is how emotion recognition from vocalization is structured within what we call a semantic space, the study of which addresses questions critical to the field: How many distinct kinds of emotions can be expressed? Do expressions convey emotion categories or affective appraisals (e.g., valence, arousal)? Is the recognition of emotion expressions discrete or continuous? Guided by a new theoretical approach to emotion taxonomies, we apply large-scale data collection and analysis techniques to judgments of 2,032 emotional vocal bursts produced in laboratory settings (Study 1) and 48 found in the real world (Study 2) by U.S. English speakers (N = 1,105). We find that vocal bursts convey at least 24 distinct kinds of emotion. Emotion categories (sympathy, awe), more so than affective appraisals (including valence and arousal), organize emotion recognition. In contrast to discrete emotion theories, the emotion categories conveyed by vocal bursts are bridged by smooth gradients with continuously varying meaning. We visualize the complex, high-dimensional space of emotion conveyed by brief human vocalization within an online interactive map.
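Maps like the one this paper publishes are typically built by projecting the high-dimensional judgment space down to two dimensions. A minimal sketch in that spirit follows (simulated judgments; the authors' actual visualization pipeline may differ):

```python
# A minimal sketch: a 2-D map of vocal-burst emotion judgments (assumed data).
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Hypothetical per-burst mean endorsement of 24 emotion categories.
judgments = rng.dirichlet(np.ones(24), size=2032)

xy = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(judgments)
dominant = judgments.argmax(axis=1)   # color each point by its top category
print(xy.shape, dominant[:10])
```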
Article
Full-text available
On the basis of the proposition that love promotes commitment, the authors predicted that love would motivate approach, have a distinct signal, and correlate with commitment-enhancing processes when relationships are threatened. The authors studied romantic partners and adolescent opposite-sex friends during interactions that elicited love and threatened the bond. As expected, the experience of love correlated with approach-related states (desire, sympathy). Providing evidence for a nonverbal display of love, four affiliation cues (head nods, Duchenne smiles, gesticulation, forward leans) correlated with self-reports and partner estimates of love. Finally, the experience and display of love correlated with commitment-enhancing processes (e.g., constructive conflict resolution, perceived trust) when the relationship was threatened. Discussion focused on love, positive emotion, and relationships.
Article
Full-text available
At the heart of emotion, mood, and any other emotionally charged event are states experienced as simply feeling good or bad, energized or enervated. These states - called core affect - influence reflexes, perception, cognition, and behavior and are influenced by many causes internal and external, but people have no direct access to these causal connections. Core affect can therefore be experienced as free-floating (mood) or can be attributed to some cause (and thereby begin an emotional episode). These basic processes spawn a broad framework that includes perception of the core-affect-altering properties of stimuli, motives, empathy, emotional meta-experience, and affect versus emotion regulation; it accounts for prototypical emotional episodes, such as fear and anger, as core affect attributed to something plus various nonemotional processes.
Article
Full-text available
'Sundowning' in dementia and Alzheimer's disease is characterized by early-evening agitation and aggression. While such periodicity suggests a circadian origin, whether the circadian clock directly regulates aggressive behavior is unknown. We demonstrate that a daily rhythm in aggression propensity in male mice is gated by GABAergic subparaventricular zone (SPZGABA) neurons, the major postsynaptic targets of the central circadian clock, the suprachiasmatic nucleus. Optogenetic mapping revealed that SPZGABA neurons receive input from vasoactive intestinal polypeptide suprachiasmatic nucleus neurons and innervate neurons in the ventrolateral part of the ventromedial hypothalamus (VMH), which is known to regulate aggression. Additionally, VMH-projecting dorsal SPZ neurons are more active during early day than early night, and acute chemogenetic inhibition of SPZGABA transmission phase-dependently increases aggression. Finally, SPZGABA-recipient central VMH neurons directly innervate ventrolateral VMH neurons, and activation of this intra-VMH circuit drove attack behavior. Altogether, we reveal a functional polysynaptic circuit by which the suprachiasmatic nucleus clock regulates aggression.
Article
Full-text available
Uniquely, with respect to Middle Pleistocene hominins, anatomically modern humans do not possess marked browridges, and have a more vertical forehead with mobile eyebrows that play a key role in social signalling and communication. The presence and variability of browridges in archaic Homo species and their absence in ourselves have led to debate concerning their morphogenesis and function, with two main hypotheses being put forward: that browridge morphology is the result of the spatial relationship between the orbits and the brain case; and that browridge morphology is significantly impacted by biting mechanics. Here, we virtually manipulate the browridge morphology of an archaic hominin (Kabwe 1), showing that it is much larger than the minimum required to fulfil spatial demands and that browridge size has little impact on mechanical performance during biting. As browridge morphology in this fossil is not driven by spatial and mechanical requirements alone, the role of the supraorbital region in social communication is a potentially significant factor. We propose that conversion of the large browridges of our immediate ancestors to a more vertical frontal bone in modern humans allowed highly mobile eyebrows to display subtle affiliative emotions.
Article
Full-text available
The functional organization of human emotion systems as well as their neuroanatomical basis and segregation in the brain remains unresolved. Here we used pattern classification and hierarchical clustering to characterize the organization of a range of specific emotion categories in the human brain. We induced 14 emotions (6 "basic", e.g. fear and anger; and 8 "non-basic", e.g. shame and gratitude) and a neutral state using guided mental imagery while participants' brain activity was measured with functional magnetic resonance imaging (fMRI). Twelve out of 14 emotions could be reliably classified from the fMRI signals. All emotions engaged a multitude of brain areas, primarily in midline cortices including anterior and posterior cingulate and precuneus, in subcortical regions, and in motor regions including cerebellum and premotor cortex. Similarity of subjective emotional experiences was associated with similarity of the corresponding neural activation patterns. We conclude that the emotions in this study have distinguishable neural bases characterized by specific, distributed activation patterns in widespread cortical and subcortical circuits, and highlight both overlaps and differences in the locations of these for each emotion. Locally differentiated engagement of these globally shared circuits defines the unique neural activity pattern and the corresponding subjective feeling associated with each emotion.
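The two analyses named here, pattern classification and hierarchical clustering, can be sketched together as follows (simulated patterns with placeholder sizes; real fMRI pipelines add preprocessing, cross-subject structure, and permutation testing):

```python
# A minimal sketch: 14-way decoding plus clustering of mean patterns (assumed data).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)
X = rng.normal(size=(14 * 30, 2000))   # hypothetical trials x voxels
y = np.repeat(np.arange(14), 30)       # 14 induced emotional states

acc = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5).mean()
print(f"14-way decoding accuracy: {acc:.2f} (chance ~ {1/14:.2f})")

means = np.vstack([X[y == c].mean(axis=0) for c in range(14)])
Z = linkage(means, method="average", metric="correlation")  # pass Z to
# scipy.cluster.hierarchy.dendrogram to visualize the similarity structure
```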
Article
Full-text available
We present a mathematically based framework distinguishing the dimensionality, structure, and conceptualization of emotion-related responses. Our recent findings indicate that reported emotional experience is high-dimensional, involves gradients between categories traditionally thought of as discrete (e.g., 'fear', 'disgust'), and cannot be reduced to widely used domain-general scales (valence, arousal, etc.). In light of our conceptual framework and findings, we address potential methodological and conceptual confusions in Barrett and colleagues' commentary on our work.
Article
Full-text available
Huddling behaviour in neonatal rodents reduces the metabolic costs of physiological thermoregulation. However, animals continue to huddle into adulthood, at ambient temperatures where they are able to sustain a basal metabolism in isolation from the huddle. This 'filial huddling' in older animals is known to be guided by olfactory rather than thermal cues. The present study aimed to test whether thermally rewarding contacts between young mice, experienced when thermogenesis in brown adipose fat tissue (BAT) is highest, could give rise to olfactory preferences that persist as filial huddling interactions in adults. To this end, a simple model was constructed to fit existing data on the development of mouse thermal physiology and behaviour. The form of the model that emerged yields a remarkable explanation for filial huddling; associative learning maintains huddling into adulthood via processes that reduce thermodynamic entropy from BAT metabolism and increase information about social ordering among littermates.
Article
Full-text available
Significance: Claims about how reported emotional experiences are geometrically organized within a semantic space have shaped the study of emotion. Using statistical methods to analyze reports of emotional states elicited by 2,185 emotionally evocative short videos with richly varying situational content, we uncovered 27 varieties of reported emotional experience. Reported experience is better captured by categories such as “amusement” than by ratings of widely measured affective dimensions such as valence and arousal. Although categories are found to organize dimensional appraisals in a coherent and powerful fashion, many categories are linked by smooth gradients, contrary to discrete theories. Our results comprise an approximation of a geometric structure of reported emotional experience.
Article
Full-text available
In contrast to a wealth of human studies, little is known about the ontogeny and consistency of empathy-related capacities in other species. Consolation - post-conflict affiliation from uninvolved bystanders to distressed others - is a suggested marker of empathetic concern in non-human animals. Using longitudinal data comprising nearly a decade of observations on over 3000 conflict interactions in 44 chimpanzees (Pan troglodytes), we provide evidence for relatively stable individual differences in consolation behaviour. Across development, individuals consistently differ from one another in this trait, with higher consolatory tendencies predicting better social integration, a sign of social competence. Further, similar to recent results in other ape species, but in contrast to many human self-reported findings, older chimpanzees are less likely to console than are younger individuals. Overall, given the link between consolation and empathy, these findings help elucidate the development of individual socio-cognitive and -emotional abilities in one of our closest relatives.
Article
Full-text available
We collected, and coded with the Facial Action Coding System (FACS), over 2,600 free-response facial and body displays of 22 emotions in China, India, Japan, Korea, and the United States to test 5 hypotheses concerning universals and cultural variants in emotional expression. New techniques enabled us to identify cross-cultural core patterns of expressive behaviors for each of the 22 emotions. We also documented systematic cultural variations of expressive behaviors within each culture, shaped by resemblances in cultural values, and identified a gradient of universality for the 22 emotions. Our discussion focused on the science of new expressions and how the evidence from this investigation identifies the extent to which emotional displays vary across cultures.
Article
Full-text available
Post-aggression consolation is assumed to occur in humans as well as in chimpanzees. While consolation following peer aggression has been observed in children, systematic evidence of consolation in human adults is rare. We used surveillance camera footage of the immediate aftermath of nonfatal robberies to observe the behaviors and characteristics of victims and bystanders. Consistent with empathy explanations, we found that consolation was linked to social closeness rather than physical closeness. While females were more likely to console than males, males and females were equally likely to be consoled. Furthermore, we show that high levels of threat during the robbery increased the likelihood of receiving consolation afterwards. These patterns resemble post-aggression consolation in chimpanzees and suggest that emotions of empathic concern are involved in consolation across humans and chimpanzees.
Article
Full-text available
Across species, oxytocin, an evolutionarily ancient neuropeptide, facilitates social communication by attuning individuals to conspecifics' social signals, fostering trust and bonding. The eyes have an important signalling function; and humans use their salient and communicative eyes to intentionally and unintentionally send social signals to others, by contracting the muscles around their eyes and pupils. In our earlier research, we observed that interaction partners with dilating pupils are trusted more than partners with constricting pupils. But over and beyond this effect, we found that the pupil sizes of partners synchronize and that when pupils synchronously dilate, trust is further boosted. Critically, this linkage between mimicry and trust was bound to interactions between ingroup members. The current study investigates whether these findings are modulated by oxytocin and sex of participant and partner. Using incentivized trust games with partners from ingroup and outgroup whose pupils dilated, remained static or constricted, this study replicates our earlier findings. It further reveals that (i) male participants withhold trust from partners with constricting pupils and extend trust to partners with dilating pupils, especially when given oxytocin rather than placebo; (ii) female participants trust partners with dilating pupils most, but this effect is blunted under oxytocin; (iii) under oxytocin rather than placebo, pupil dilation mimicry is weaker and pupil constriction mimicry stronger; and (iv) the link between pupil constriction mimicry and distrust observed under placebo disappears under oxytocin. We suggest that pupil-contingent trust is parochial and evolved in social species in and because of group life.
Article
Full-text available
Animal welfare is a key issue for industries that use or impact upon animals. The accurate identification of welfare states is particularly relevant to the field of bioscience, where the 3Rs framework encourages refinement of experimental procedures involving animal models. The assessment and improvement of welfare states in animals is reliant on reliable and valid measurement tools. Behavioural measures (activity, attention, posture and vocalisation) are frequently used because they are immediate and non-invasive, however no single indicator can yield a complete picture of the internal state of an animal. Facial expressions are extensively studied in humans as a measure of psychological and emotional experiences but are infrequently used in animal studies, with the exception of emerging research on pain behaviour. In this review, we discuss current evidence for facial representations of underlying affective states, and how communicative or functional expressions can be useful within welfare assessments. Validated tools for measuring facial movement are outlined, and the potential of expressions as honest signals are discussed, alongside other challenges and limitations to facial expression measurement within the context of animal welfare. We conclude that facial expression determination in animals is a useful but underutilised measure that complements existing tools in the assessment of welfare. Link to paper (Open Access): http://www.altex.ch/resources/epub_Descovich_of_170208.pdf
Article
Full-text available
Most research on nonverbal emotional vocalizations is based on actor portrayals, but how similar are they to the vocalizations produced spontaneously in everyday life? Perceptual and acoustic differences have been discovered between spontaneous and volitional laughs, but little is known about other emotions. We compared 362 acted vocalizations from seven corpora with 427 authentic vocalizations using acoustic analysis, and 278 vocalizations (139 authentic and 139 acted) were also tested in a forced-choice authenticity detection task (N = 154 listeners). Target emotions were: achievement, amusement, anger, disgust, fear, pain, pleasure, and sadness. Listeners distinguished between authentic and acted vocalizations with accuracy levels above chance across all emotions (overall accuracy 65%). Accuracy was highest for vocalizations of achievement, anger, fear, and pleasure, which also displayed the largest difference in acoustic characteristics. In contrast, both perceptual and acoustic differences
Article
Full-text available
Resolving a ticklish problem. What is the neural correlate of ticklishness? When Ishiyama and Brecht tickled rats, the animals produced noises and other joyful responses. During the tickling, the authors observed nerve cell activity in deep layers of the somatosensory cortex corresponding to the animals' trunks. Furthermore, microstimulation of this brain region evoked the same behavior. Just as in humans, mood could modulate this neuronal activity. Anxiety-inducing situations suppressed the cells' firing, and the animal could no longer be tickled. Science, this issue p. 757
Article
Full-text available
The science of emotion has been using folk psychology categories derived from philosophy to search for the brain basis of emotion. The last two decades of neuroscience research have brought us to the brink of a paradigm shift in understanding the workings of the brain, however, setting the stage to revolutionize our understanding of what emotions are and how they work. In this paper, we begin with the structure and function of the brain, and from there deduce what the biological basis of emotions might be. The answer is a brain-based, computational account called the theory of constructed emotion.
Article
Full-text available
A fast, subcortical pathway to the amygdala is thought to have evolved to enable rapid detection of threat. This pathway's existence is fundamental for understanding nonconscious emotional responses, but has been challenged as a result of a lack of evidence for short-latency fear-related responses in primate amygdala, including in humans. We recorded human intracranial electrophysiological data and found fast amygdala responses, beginning 74 ms post-stimulus onset, to fearful, but not neutral or happy, facial expressions. These responses had considerably shorter latency than the fear responses we observed in visual cortex. Notably, fast amygdala responses were limited to low spatial frequency components of fearful faces, as predicted by magnocellular inputs to the amygdala. Furthermore, fast amygdala responses were not evoked by photographs of arousing scenes, which is indicative of selective early reactivity to socially relevant visual information conveyed by fearful faces. These data therefore support the existence of a phylogenetically old subcortical pathway providing fast, but coarse, threat-related signals to the human amygdala.