Article

The expression of the emotions in man and animals

... genuine laughter) [3] that is intuitive and subcortical. As documented in Charles Darwin's classic work [4], ancestral forms of play homologous to human laughter have been reported in other animals (dogs, chimps, and even rats) [2,5,6], perceived by Darwin and other authors as a natural response that restores homeostasis across diverse species, including Homo sapiens. ...
... These studies demonstrated that optimism, which is closely related to spontaneous laughter (i.e., the frequency of laughter is associated with optimism) [1][2][3][4], is inversely and independently associated with both cardiovascular and total mortality, as well as with exceptional longevity [33,34]. Our results expand and reinforce the importance of spontaneous laughter in promoting health by demonstrating a significant, objectively measured reduction in the hallmark adrenal stress hormone, cortisol, induced by genuine laughter. ...
... A previous meta-analysis of 24 studies (n = 1,612 children) demonstrated that hospital clowns might contribute to improved psychological well-being and emotional responses in children and adolescents in hospital settings, with four studies reporting reduced levels of salivary cortisol after hospital clown visits compared with the pre-intervention measurement [35]. The implementation of humor in clinical care has been more widely established in the pediatric population, possibly because the psychological motivations behind genuine laughter in adults are more complex, being influenced by the individual's cultural background, education, beliefs, and psychological traits [1][2][3][4][40]. Once developed, however, the interpretation and perception of laughter among adults overcomes cultural and language barriers. ...
Article
Full-text available
Objectives: Laughter as an expression of humor has been recognized as good medicine for centuries. The health benefits of humor-induced well-being remain unclear, and thus we conducted a systematic review and meta-analysis of interventional studies to evaluate the impact of spontaneous laughter on the stress response as measured by cortisol levels. Design: Systematic review and meta-analysis. Data sources: MEDLINE/PubMed, EMBASE, PsycINFO, Scopus, and Clinicaltrials.gov. Eligibility criteria: Interventional studies, which could be either randomized placebo-controlled trials (RCTs) or quasi-experimental studies, conducted in adults that compared any spontaneous laughter intervention to a controlled setting and reported changes in cortisol levels were selected. Data extraction and synthesis: We examined the impact of laughter on percentage change in cortisol levels by calculating pooled estimates of the absolute differences between arithmetic means before and after interventions as compared to control, using a random-effects model. Results: Eight studies (315 participants; mean age, 38.6 years) met our inclusion criteria; four were RCTs and four were quasi-experimental studies. Five studies evaluated the impact of watching a humor/comedy video, two studies evaluated laughter sessions administered by a trained laughter therapist, and one study evaluated a self-administered laughter program. Pooling these data showed a significant reduction in cortisol levels of 31.9% (95%CI -47.7% to -16.3%) induced by the laughter intervention compared to the control group, with no evidence of publication bias (P = 0.66). Sensitivity analyses demonstrated that even a single laughter session induced a significant reduction of 36.7% in cortisol (95%CI -52.5% to -20.8%). In addition, analyses including the four RCTs reinforced these results by demonstrating a significant reduction in cortisol levels promoted by laughter as compared to the placebo arm [-37.2% (95%CI -56.3% to -18.1%)].
Conclusions: Current evidence demonstrates that spontaneous laughter is associated with greater reduction in cortisol levels as compared with usual activities, suggesting laughter as a potential adjunctive medical therapy to improve well-being. Trial registration: Registration number: CRD42021267972.
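The abstract above reports pooling absolute mean differences with a random-effects model but does not name the estimator. A common choice for this kind of meta-analysis is the DerSimonian-Laird estimator; the sketch below is a generic illustration of that method, not the authors' code, and the per-study effects and variances are invented for illustration.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effects (e.g., % change in cortisol) under a
    DerSimonian-Laird random-effects model; returns the pooled estimate
    and a 95% confidence interval."""
    k = len(effects)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Invented per-study % changes in cortisol and their variances:
effects = [-28.0, -41.0, -22.0, -35.0]
variances = [36.0, 49.0, 25.0, 64.0]
est, (ci_lo, ci_hi) = dersimonian_laird(effects, variances)
```

The estimate of the between-study variance tau² widens the confidence interval when the studies disagree, which is why random-effects intervals like those reported above are more conservative than fixed-effect ones.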
... This is an important development given the influence of dynamic facial emotions on social perception and judgements. [46][47][48] For example, the evolutionary biologist Charles Darwin noted that "the movements of expression give vividness and energy to our spoken words," [46] and the neurologist Duchenne de Boulogne specifically noted that contraction of the orbicularis oculi muscle that causes CFL creates a genuine smile. [47,49] Dr. J. Carruthers: In conclusion, the clinical development program for Botox Cosmetic, subsequent regulatory approvals achieved globally, and integration into clinicians' daily aesthetic treatment toolbox for CFL and FHL fulfilled an unmet need worldwide, and had a clear, lasting, and evolving scientific and societal impact. ...
Article
Full-text available
Extrinsic and age-related intrinsic factors contribute to the development of facial lines, including lateral canthal lines (called crow’s feet lines [CFL]) and horizontal forehead lines (FHL). OnabotulinumtoxinA is a highly effective treatment for facial lines that inhibits acetylcholine release at the neuromuscular junction. This temporary chemical denervation leads to localized muscle relaxation and subsequent wrinkle reduction. Early studies of onabotulinumtoxinA treatment for facial neuronal disorders such as dystonia documented improvements in FHL and CFL. After the neurotoxin was approved for treating frown lines (glabellar lines [GL]), individuals requested treatment for other rhytids, and physicians continued assessing use in new areas. Once onabotulinumtoxinA was in clinical trial development, its efficacy and safety for CFL and FHL were successively evaluated as required by the US Food and Drug Administration and by key global health authorities, including those in the European Union, Japan, and China. Allergan, collaborating with leading physicians, established clinical programs that included novel safety and efficacy measures to meet regulatory requirements. Global, phase 3, randomized, controlled studies of CFL and FHL met rigorous primary endpoints. Some countries mandated clinical trial data beyond US and European regulations, and Allergan conducted 11 studies in total, fulfilling diverse regulatory and study population data requirements. Adverse events associated with local spread, including brow and eyelid ptosis, diplopia, headache, and eyelid sensory disorder, were infrequent and well tolerated. Consequently, onabotulinumtoxinA treatment of upper facial lines is now established globally as a highly effective, minimally invasive treatment for patients to achieve a natural appearance and look younger.
... One of the first emotion theories put forth was Charles Darwin's in his seminal book, "On the Expression of the Emotions in Man and Animals" [32]. This book proposed that humans possess a finite set of biologically privileged emotions that evolved to confer upon us survival-related behavior. ...
... Fridlund's behavioral ecology perspective of emotion argues that emotional expression evolved primarily as a means of signaling behavioral intent [53]. The view that facial expression evolved specifically as a way to forecast behavioral intentions and consequences to others drew from Darwin's seminal writings on expression [32], even though Darwin himself argued that expressions did not evolve for social communication per se. Fridlund's argument is based on the idea that perceiving behavioral intentions is adaptive. ...
Preprint
Full-text available
The emergence of artificial emotional intelligence technology is revolutionizing the fields of computers and robotics, allowing for a new level of communication and understanding of human behavior that was once thought impossible. While recent advancements in deep learning have transformed the field of computer vision, automated understanding of evoked or expressed emotions in visual media remains in its infancy. This foundering stems from the absence of a universally accepted definition of "emotion", coupled with the inherently subjective nature of emotions and their intricate nuances. In this article, we provide a comprehensive, multidisciplinary overview of the field of emotion analysis in visual media, drawing on insights from psychology, engineering, and the arts. We begin by exploring the psychological foundations of emotion and the computational principles that underpin the understanding of emotions from images and videos. We then review the latest research and systems within the field, accentuating the most promising approaches. We also discuss the current technological challenges and limitations of emotion analysis, underscoring the necessity for continued investigation and innovation. We contend that this represents a "Holy Grail" research problem in computing and delineate pivotal directions for future inquiry. Finally, we examine the ethical ramifications of emotion-understanding technologies and contemplate their potential societal impacts. Overall, this article endeavors to equip readers with a deeper understanding of the domain of emotion analysis in visual media and to inspire further research and development in this captivating and rapidly evolving field.
... The expression of emotion is a significant area of research in affect science. An early and influential theory suggests that humans possess innate abilities to express a set of basic emotions, including happiness, sadness, anger, fear, disgust, and surprise (e.g., Darwin, 1965;Ekman & Friesen, 1971;Izard, 2009). However, empirical research has produced mixed findings. ...
... The functions of emotional expressions have also been widely debated. While some theories propose that emotional expressions have evolutionary roots and help us survive (e.g., the widened eyes of fear increase perception; e.g., Darwin, 1965;Ekman & Friesen, 1971;Izard, 1994), others suggest that emotional expressions primarily serve social purposes, providing a means of communicating information to others (e.g., Kraut & Johnston, 1979;Janney & Arndt, 1992;Shariff & Tracy, 2011). Importantly however, the majority of this literature has focused on how people express emotion in intra-personal contexts, or in front of other adults. ...
Conference Paper
Full-text available
In many cultures, adults use simple, slow, and dynamic speech when talking to infants ("parentese," or infant-directed speech) and make expansive, repetitive movements when demonstrating object properties to infants ("motionese," or infant-directed actions). These modifications enhance infants' attention to and learning about language and goal-directed actions. Adults' interactions with infants are also full of emotions—do adults also modify their emotional expressions when interacting with infants? Here we showed parents of infants (aged 7 to 14 months; N = 25) emotion-evoking pictures including colorful bubbles, adorable stuffed animals, yummy snacks, broken toys, dangerous fire, and rotten fruits. We asked parents to describe their feelings about these pictures either to their infant or to an adult partner (i.e., an experimenter). While the parents' use of emotion words did not differ between conditions, their emotional expressions did: Their infant-directed emotional expressions were more positive when they discussed positive pictures and more negative when they discussed negative pictures compared to their adult-directed emotional expressions. These findings suggest that besides "parentese" and "motionese," there is also a unique form of emotional communication in parent-child interaction—"emotionese."
Article
Full-text available
The emergence of artificial emotional intelligence technology is revolutionizing the fields of computers and robotics, allowing for a new level of communication and understanding of human behavior that was once thought impossible. Whereas recent advancements in deep learning have transformed the field of computer vision, automated understanding of evoked or expressed emotions in visual media remains in its infancy. This foundering stems from the absence of a universally accepted definition of “emotion,” coupled with the inherently subjective nature of emotions and their intricate nuances. In this paper, we provide a comprehensive, multidisciplinary overview of the field of emotion analysis in visual media, drawing on insights from psychology, engineering, and the arts. We begin by exploring the psychological foundations of emotion and the computational principles that underpin the understanding of emotions from images and videos. We then review the latest research and systems within the field, accentuating the most promising approaches. We also discuss the current technological challenges and limitations of emotion analysis, underscoring the necessity for continued investigation and innovation. We contend that this represents a “Holy Grail” research problem in computing and delineate pivotal directions for future inquiry. Finally, we examine the ethical ramifications of emotion-understanding technologies and contemplate their potential societal impacts. Overall, this article endeavors to equip readers with a deeper understanding of the domain of emotion analysis in visual media and to inspire further research and development in this captivating and rapidly evolving field.
... The embodiment effects found in the power posture derive from states of hierarchy already present in various animal species. Darwin had already pointed out similarities between the erect posture of particularly proud and powerful human beings and the expansive posture of dominant animals (Darwin, 1965). In some species of chimpanzees, for example, the dominant members of the group frequently assume an expansive posture that makes them appear as large as possible (De Waal & Waal, 2007; Tiedens & Fragale, 2003), while members low in the group hierarchy tend to assume a more constrictive posture that leads them to occupy very little space and, in consequence, to make their bodies appear smaller than they are (De Waal & Waal, 2007). ...
... For each participant, the rejection of an offer was coded as 0, whereas its acceptance was coded as 1. The Dependent Variable (DV) was therefore an index, ranging from 0 to 1, measuring the average proportion of accepted offers (the average acceptance rate) for the Unfair (20, 30) and Fair (40, 50) offers, where 0 indicates that offers were always rejected and 1 indicates that they were always accepted. Data analysis was performed using Statistica 8.0 software (StatSoft). ...
Preprint
Full-text available
Purpose: Recent studies have explored the effect of specific body postures on dominance perception. We investigated whether the high-power posture (i.e., an open and expansive posture) and the low-power posture (i.e., a closed and contracted posture) can influence the decision-making process involved in the Ultimatum Game (UG). Specifically, we considered the effect that the posture assumed by the counterpart can have on the participants. Design/methodology/approach: Participants were shown images presenting avatar opponents in a high-power or low-power posture while they played the role of the responder (Experiment 1) or the proposer (Experiment 2). Findings: In Experiment 1, the results show that the responders' decision was strongly driven by the fairness of the offer rather than the posture of the proposer (avatar). Offers perceived as fair were accepted much more often than unfair ones, regardless of the counterpart's posture. An additional result shows that, among the unfair offers, those formulated by low-power-posture proposers were accepted more often than those formulated by high-power-posture proposers. In Experiment 2, the results show instead that the proposers modulated their offers based on the posture of the responders (avatar). Specifically, responders with a high-power posture were proposed more generous offers than those with a low-power posture. Originality/value: As far as we know, our study is the first to take into account the posture of the counterpart and the effect that the conveyed power can have on the participants' decision-making processes. Another novel element is that we compare the high-power and low-power postures, considering both the effects of responders' postures on proposers' allocations and the effects of proposers' postures on responders' decisions to accept the offer.
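The methods excerpt above codes each rejection as 0 and each acceptance as 1, with the DV being the mean acceptance rate per offer class. A minimal sketch of that coding (the trial data below are invented for illustration):

```python
# Each trial is (offer_amount, accepted): rejection coded 0, acceptance 1,
# as described in the excerpt. Offers of 20/30 are "Unfair", 40/50 "Fair".
trials = [(20, 0), (30, 0), (30, 1), (40, 1), (50, 1), (40, 1)]

def acceptance_rate(trials, offer_amounts):
    """Mean of the 0/1 acceptance codes for the given offer class."""
    codes = [accepted for amount, accepted in trials if amount in offer_amounts]
    return sum(codes) / len(codes)

unfair_rate = acceptance_rate(trials, {20, 30})  # Unfair offers
fair_rate = acceptance_rate(trials, {40, 50})    # Fair offers
```

With this coding, comparing `unfair_rate` and `fair_rate` across the posture conditions reproduces the fairness-versus-posture contrast the study reports.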
... Specific facial expressions (e.g., smiling and frowning) have been linked to various emotions (e.g., happiness and anger) [19,41]. Information receivers frequently respond to momentary facial changes associated with emotions [24] and are more likely to react to facial expressions of emotion. ...
Article
Full-text available
People’s shopping habits are changing because of live streaming as a new business context. In this context, this study explores the effect of influencer happiness intensity on audience purchasing using archival data from TikTok, the world’s top live-streaming app by downloads. The influencers’ facial happiness intensity, attractiveness score, and visual attributes of the live-streaming room were extracted from the videos using a machine learning method, and an estimation model was built. The findings indicate an inverted U-shaped relationship between influencer happiness intensity and audience purchases, with accessories having the highest sales volume at an influencer happiness intensity of 0.43. An additional propensity-score-matching test provides causal evidence for the negative impact of influencers’ happiness intensity on sales volume beyond the peak intensity, which is the subject of this paper. Furthermore, we investigate the probable underlying mechanism of the influencer-audience interaction, demonstrating that danmaku quantity and quality partially mediate the effect of influencer happiness intensity on sales volume. These findings can help live-streaming influencers and platforms optimize their marketing strategies.
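An inverted U-shape like the one in the abstract above is commonly modeled as a quadratic in happiness intensity, whose peak falls at x* = -b / (2a). The coefficients below are invented purely to place the peak at the reported 0.43; this sketches the functional form, not the paper's estimation model.

```python
# Inverted-U relationship: model sales volume y as a quadratic in
# happiness intensity x, y = a*x**2 + b*x + c with a < 0, so the
# vertex (peak) lies at x* = -b / (2*a).
# Coefficients are hypothetical, chosen so the peak lands at 0.43.
a, b, c = -100.0, 86.0, 10.0

def sales(x):
    """Predicted sales volume at happiness intensity x (illustrative)."""
    return a * x ** 2 + b * x + c

peak_intensity = -b / (2 * a)  # 0.43
```

Past the vertex, the fitted curve slopes downward, which is the "negative impact beyond the peak intensity" the propensity-score-matching test probes.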
... Basic emotions (disgust, happiness, sadness, anger, surprise, fear) are a special set of emotions that are the most distinct and universal across cultures and species and the most related to survival [5]. Disgust is thought to be one of the earliest emotions to have developed evolutionarily, given how critical it is for avoiding contaminants and increasing survival [7][8][9]. Furthermore, feelings of disgust may have evolved to serve as the basis of feelings of contempt and socio-moral evaluations [10]. Autistic children may show differences in disgust processing, potentially leading to pica behaviors, gastrointestinal (GI) disturbances, and socio-emotional issues [11,12]. ...
Article
Full-text available
Purpose of Review: While there are reports of differences in emotion processing in autism, it is less understood whether the emotion of disgust, in particular, plays a significant role in these effects. Here, we review the literature on potential disgust-processing differences in autism and their possible associations with autistic traits. Recent Findings: In autism, there is evidence for differences in physical disgust processing, pica behaviors, attention away from others' disgusted facial expressions, and differences in neural activity related to disgust processing. In typically developing individuals, disgust processing is related to moral processing but modulated by individual differences in interoception and alexithymia. Summary: Autistic individuals may experience atypical disgust, which may lead to difficulty avoiding contaminants and affect socio-emotional processing. In autism, such outcomes may lead to increased occurrences of illness, contribute to gastrointestinal issues, diminish vicarious learning of disgust expressions and behaviors, and potentially contribute to differences in processes related to moral reasoning, though further research is needed.
... Anxiety is an evolved response that is similar across a wide range of species, where it plays a key role in threat avoidance (LeDoux, 2012b). Darwin's initial work on the expression of emotions across species remains inspirational (Darwin, 1998; Ekman, 2003). It also provides an important foundation for a great deal of work in contemporary psychobiology, for example, the use of standardized facial expression analysis in neuroimaging work (Ekman & Friesen, 1986). ...
... In this work, we have developed an agent that utilizes a set of bodily movements and behaviors expressing emotions [Darwin and Prodger 1998] to persuade the player. It is crucial that these emotional expressions are well designed, clearly defined, and easily comprehensible so that individuals can understand the conveyed sentiment [Johnston and Thomas 1981]. ...
Conference Paper
This study investigated the impact of incorrect advice given by virtual assistants on people’s trust. An interactive memory game using a virtual robotic agent was developed, and a study was conducted with 25 participants. The study was divided into conditions: in some, the agent gave wrong clues about the cards; in others, it gave the right ones. The results suggest that the players’ perceptions of the agent were affected by its behaviour. The participants' trust in the agent was affected first by the task itself and secondly by whether the agent's behaviour was obstructive. The study contributes to a better understanding of how virtual assistants affect human decision-making and reliance, and to the development of more engaging and interactive virtual assistants. Future research could use these findings to develop more effective virtual assistants that foster greater user trust and engagement.
... Human voice is capable of conveying emotions through speech (Banse & Scherer, 1996;Darwin & Prodger, 1998;Scherer et al., 1991;Wallbott & Scherer, 1986) and non-speech vocalisations (Cowen et al., 2019). Furthermore, analysis of non-speech vocalisations (voice) and speech has received much recognition during the past decade to detect various emotional/affective states (El Ayadi et al., 2011;Schuller et al., 2011). ...
Chapter
Full-text available
Stress during public speaking is common and adversely affects the performance and self-confidence of individuals in various professional contexts. Virtual reality (VR) has been used in psychology and human-computer interaction (HCI) to induce and measure barriers to good public speaking skills, such as anxiety and fear. We reviewed the most significant papers published in major indexed journals across the past 20 years. We outline how minimal research has been conducted to detect stress via auditory means in real time during public speaking. As a measurement criterion, the use of questionnaires and physiological parameters remains prevalent. Furthermore, to induce stress and provide feedback, past experiments relied on simulated audiences that are either scripted or controlled by outside human agents. In this context, we explore the current complexities, limitations, and opportunities for novel systems that can provide more engaging and immersive experiences. Hence, we propose a conceptual framework for the development of a voice-analysis-based stress-detection computational algorithmic model that can be integrated into a virtual reality simulation. The implementation of the proposed model would ultimately help users gradually learn how to overcome their stress in real time and improve their public speaking performance. Keywords: Virtual reality; Affect sensing and analysis; Nonverbal signals; Real-time feedback system; Voice analysis; Intelligent virtual agents; Signal processing
... [107,108] Indeed, the idea that physical expression of emotion influences the emotion itself dates back at least to Charles Darwin, who noted, "The free expression, by outward signs, of an emotion intensifies it." [109] Dr. Brin: We are just beginning to understand how botulinum neurotoxin can have direct effects on peripheral motor and sensory nerves and indirectly affect central afferent systems through modifying peripheral input. [110] BoNT has been used as a tool to understand cellular processes, as well as to probe systems biology, for instance, through the use of neuroimaging to assess patterns of brain activation that occur following peripheral modulation. ...
Article
Full-text available
Botulinum neurotoxins (BoNTs) are multi-domain proteins whose potent and selective actions on nerve endings have led to innovations in both basic and clinical science. The various BoNT domains are responsible for binding to gangliosides and proteins associated with nerve cell membranes, internalization into the cell, and cleavage of one or more SNARE (soluble N-ethylmaleimide sensitive factor attachment protein receptor) proteins necessary for vesicle docking and fusion. Novel modifications to BoNT molecules, such as the creation of chimeras, helped identify the protein domains responsible for various aspects of BoNT action, such as localized effects. Other molecular modifications have been introduced in attempts to increase the specificity of BoNTs for autonomic or sensory neurons, with the ultimate goal of optimizing therapeutic selectivity. This research, in turn, has led to the development of BoNT-based proteins that can target non-SNARE substrates such as phosphatase and tensin homolog (PTEN). Still others are developing different BoNT serotypes, subtypes, or variants that are longer- or shorter-acting or have faster onset for various clinical purposes. New formulations of BoNTs that provide convenience for both patients and physicians are under investigation. Novel clinical uses are being evaluated for onabotulinumtoxinA, including in the prevention of post-operative atrial fibrillation. All these innovations capitalize on the unique properties of BoNTs, which continue to intrigue scientists and clinicians across numerous fields of study.
... [85] The idea that physical expression of emotion influences the emotion itself dates back at least to Charles Darwin, who noted, "The free expression, by outward signs, of an emotion intensifies it." [86] This basic explanation is manifest in the more recent facial feedback hypothesis, which postulates that the action of facial muscles influences the perception of emotion. [87] Indeed, a fundamental principle of the central nervous system is its response to input from the external environment, as in the case of learning to avoid painful stimuli. ...
Article
Full-text available
Studies in the 1920s found that botulinum neurotoxin type A (BoNT/A) inhibited the activity of motor and parasympathetic nerve endings, confirmed several decades later to be due to decreased acetylcholine release. The 1970s were marked by studies of cellular mechanisms aided by use of neutralizing antibodies as pharmacologic tools: BoNT/A disappeared from accessibility to neutralizing antibodies within minutes, although it took several hours for onset of muscle weakness. The multi-step mechanism was experimentally confirmed and is now recognized to consist broadly of binding to nerve terminals, internalization, and lysis or cleavage of a protein (SNAP-25: synaptosomal associated protein-25 kDa) that is part of the SNARE (Soluble NSF Attachment protein REceptor) complex needed for synaptic vesicle docking and fusion. Clinical use of the BoNT/A product onabotulinumtoxinA was based on its ability to reduce muscle contractions via inhibition of acetylcholine from motor terminals. Sensory mechanisms of onabotulinumtoxinA have now been identified, supporting its successful treatment of chronic migraine and urgency in overactive bladder. Exploration into migraine mechanisms led to anatomical studies documenting pain fibers that send axons through sutures of the skull to outside the head, a potential route by which extracranial injections could affect intracranial processes. Several clinical studies have also identified benefits of onabotulinumtoxinA in major depression, which have been attributed to central responses induced by feedback from facial muscle and skin movement. Overall, the history of BoNT/A is distinguished by basic science studies that stimulated clinical use and, conversely, clinical observations that spurred basic research into novel mechanisms of action.
... The origin of EI is found in Darwin's studies on the value of emotional expressiveness for survival and adaptation (C. Darwin, 1998). Although traditional definitions of intelligence place an emphasis on cognitive traits like memory and problem-solving, quite a few well-known researchers in the field of intelligence started to recognise the importance of non-cognitive components of intelligence in the 1900s. ...
Article
Full-text available
Emotional intelligence helps to resolve conflict in private and public institutions. Studies have found that increased emotional intelligence enhances performance and increases professionalism in the workforce. Given the massive growth of private sector banks in India, investment in the emotional intelligence of the workforce may improve service delivery and customer satisfaction. Using an empirical dataset, this study investigates how EI influences the work performance of private sector bank employees in India. The correlation between increased workplace performance and emotional intelligence is supported by regression analysis. The findings suggest that, at scale, Indian banks can minimise conflicts, reduce costs, and improve service delivery by training their staff in EI.
... Facial expressions are one of the most common human behaviors and play an important role in interpersonal communication (Darwin and Prodger 1998). Facial expression recognition is a fundamental task in the intervention of mental disorders (Bisogni et al. 2022), human-computer interaction (Liu et al. 2017b), and driver-safety monitoring (Wilhelm 2019). ...
Article
Compared with the image-based static facial expression recognition (SFER) task, the dynamic facial expression recognition (DFER) task based on video sequences is closer to the natural expression recognition scene. However, DFER is often more challenging. One of the main reasons is that video sequences often contain frames with different expression intensities, especially for the facial expressions in the real-world scenarios, while the images in SFER frequently present uniform and high expression intensities. Nevertheless, if the expressions with different intensities are treated equally, the features learned by the networks will have large intra-class and small inter-class differences, which are harmful to DFER. To tackle this problem, we propose the global convolution-attention block (GCA) to rescale the channels of the feature maps. In addition, we introduce the intensity-aware loss (IAL) in the training process to help the network distinguish the samples with relatively low expression intensities. Experiments on two in-the-wild dynamic facial expression datasets (i.e., DFEW and FERV39k) indicate that our method outperforms the state-of-the-art DFER approaches. The source code will be available at https://github.com/muse1998/IAL-for-Facial-Expression-Recognition.
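The intensity-aware idea described above can be illustrated with a minimal sketch: a cross-entropy term re-weighted so that frames with lower expression intensity contribute a larger penalty. This is purely illustrative and assumes a scalar per-frame intensity in [0, 1]; the paper's exact IAL formulation may differ.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def intensity_aware_loss(logits, label, intensity, lam=1.0):
    """Cross-entropy scaled up as expression intensity falls, so the
    network is pushed to separate subtle expressions (illustrative
    stand-in for an intensity-aware loss)."""
    probs = softmax(logits)
    ce = -math.log(probs[label])
    return ce * (1.0 + lam * (1.0 - intensity))
```

At full intensity the term reduces to plain cross-entropy; at lower intensities the same misclassification costs proportionally more.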
... Extensive studies on emotions began to be conducted in the 19th century, with a well-known study conducted by Darwin [4], who reported that emotion is a product of evolution and that emotions induce actions favorable to survival [5]. Numerous emotional modalities and their respective physiological responses have been studied [6,7]. Emotional states are known to affect cognitive and athletic abilities, and they are reported to affect both human-human and human-machine interactions. ...
Article
The ability to estimate the current mood states of web users has considerable potential for realizing user-centric opportune services in pervasive computing. However, it is difficult to determine the data type used for such estimation and to collect the ground truth of such mood states. Therefore, we built a model to estimate mood states from search-query data in an easy-to-collect and non-invasive manner. Then, we built a second model that estimates mood states from mobile sensor data and used its output to supplement the ground-truth labels of the model estimated from search queries. This novel two-step model building contributed to boosting the performance of estimating the mood states of web users. Our system was also deployed in the commercial stack, and large-scale data analysis with >11 million users was conducted. We proposed a nationwide mood score, which bundles the mood values of users across the country. It shows the daily and weekly rhythm of people's moods and explains the ups and downs of moods during the COVID-19 pandemic, which is inversely synchronized to the number of new COVID-19 cases. It detects big news that simultaneously affects the mood states of many users, even under fine-grained time resolution, such as the order of hours. In addition, we identified a certain class of advertisements that indicated a clear tendency in the mood of the users who clicked such advertisements.
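The two-step model building described above amounts to a pseudo-labeling scheme: one model's predictions supplement the training labels of another. A minimal sketch, in which `mood_from_sensors` is a hypothetical stand-in for the sensor-based estimator (here just a feature mean, not the paper's actual model):

```python
def mood_from_sensors(features):
    """Hypothetical sensor-based mood estimator: the mean of the
    session's normalized sensor features (illustrative only)."""
    return sum(features) / len(features) if features else 0.0

def build_query_training_set(sessions):
    """Pair each user's search query with a pseudo ground-truth mood
    label supplied by the sensor-based model, yielding supplementary
    training data for the query-based estimator."""
    return [(s["query"], mood_from_sensors(s["sensor_features"]))
            for s in sessions]
```

The query model can then be trained on these (query, pseudo-label) pairs alongside whatever directly labeled data is available.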
... It encompasses a wide range of behaviours as all movements can be considered expressive to some degree including facial, vocal and postural expressions, as well as touch, proxemics and gaze (Wiener, Devoe, Rubinow & Geller, 1972). Several theoretical accounts predict that humans are highly efficient at communicating internal states (Bogdan, 1997) like emotions (Darwin, 1872; Ekman, 1992) or social intentions (Fridlund, 1994) non-verbally. Pertinent to the present research, the Behavioural Ecology Theory of NVB (Crivelli & Fridlund, 2018) was developed to account for audience effects on a person's NVB. ...
Article
Full-text available
The present research tested the impact of score-line, spectator presence, and game location on athletes’ nonverbal behavior (NVB). Two studies (N = 240; N = 160) tested whether the NVB of soccer players differed as a function of score-line, spectator presence, and game location by asking participants to guess the current score and the dominance, pride, and confidence of soccer players in videos. Results demonstrated that NVB ratings were more negative when the athletes were trailing (less dominant, less proud, and less confident) compared to leading. Study 1 showed more nonverbal expressiveness (social facilitation) of pleasant feelings (being in a high lead) and lower expressiveness (social inhibition) of negative feelings (trailing) in the presence of spectators. Study 2 indicated that leading at away games led to higher nonverbal expressiveness, whereas leading at home games led to less nonverbal expressiveness in the presence of a supportive audience.
... Facial expression is a basis of communication in human interactions [18] and is the most powerful, natural, and universal signal to understand human emotion [19,20]. The usage of FER is prevalent in various fields, especially in HCI systems such as feedback for e-learning enhancement, driver fatigue surveillance, and robotics. ...
Article
Full-text available
Facial expression recognition (FER) is utilized in various fields that analyze facial expressions and is attracting increasing attention for its role in improving convenience in human life. It is widely applied in human–computer interaction tasks. However, FER tasks have recently encountered certain data and training issues. To address these, few-shot learning (FSL) has been researched as a new approach, as it can mitigate the problems of training with few datasets and of generalizing to in-the-wild conditions. In this paper, we focus on analyzing FER techniques based on FSL and consider the computational complexity and processing time of these models. Based on our analysis, we describe certain existing challenges in the use of FSL in FER systems and suggest research directions to resolve these issues. FER using FSL can be time efficient, can reduce complexity in many other real-time processing tasks, and is an important area for further research.
... Charles Darwin's book "The Expression of the Emotions in Man and Animals" in the late 1800s first highlighted the connection between emotional states and facial expressions [22]. Later studies in the 1900s and early 2000s provided more evidence for this connection, particularly through measures such as HPA-axis and cardiovascular responses to stress. ...
Preprint
Stress and anxiety can contribute to the development of major health problems such as heart disease, depression, and obesity. Due to its subjective nature, it is challenging to precisely measure human affect by relying on automated approaches. We therefore propose a personalized prediction framework fine-tuned for each participant in lieu of the traditional "one-size-fits-all" machine learning approach. We aim to collect such individualized data via two distinct procedures: 1) a smartphone-based ecological momentary assessment of stress, and 2) Zoom calls. The data collected from these periodic self-reports will include selfie photographs and ecological momentary assessments of affect. To enhance user engagement during the data collection process, we propose the use of gamification, an emerging trend which involves influencing user behavior and lifestyle by incorporating fun and engaging game elements into non-game contexts (e.g., health-related tasks). In addition to developing a standardized platform to achieve more accurate recognition of stress and anxiety, we plan to conduct a concurrent study in which we will capture videos of our subjects undertaking the Stroop Color Word and Amygdala Test and analyze the footage to identify additional significant characteristics that relate to anxiety. This could include features such as head and mouth movements, lip and cheek deformations, eye gaze, and blinking rates. The final results will provide a comparative evaluation of both objective measures of stress. This research project was approved by the University of Hawaii Institutional Review Board.
... Emotions are complex states of feeling caused by neurophysiological changes, which influence many aspects of human life. Charles Darwin hypothesised that emotion evolves along with natural selection and develops universal characteristics across races and cultures [38]. Accordingly, unified definitions of emotion have been proposed and are widely studied in psychology [39][40][41]. ...
Preprint
Full-text available
Mental illnesses are one of the most prevalent public health problems worldwide, which negatively influence people's lives and society's health. With the increasing popularity of social media, there has been a growing research interest in the early detection of mental illness by analysing user-generated posts on social media. According to the correlation between emotions and mental illness, leveraging and fusing emotion information has developed into a valuable research topic. In this article, we provide a comprehensive survey of approaches to mental illness detection in social media that incorporate emotion fusion. We begin by reviewing different fusion strategies, along with their advantages and disadvantages. Subsequently, we discuss the major challenges faced by researchers working in this area, including issues surrounding the availability and quality of datasets, the performance of algorithms and interpretability. We additionally suggest some potential directions for future research.
... Based on the socio-emotional intelligence framework initially developed by Thorndike in 1920 and the work of Darwin (1872/1965), Reuven Bar-On conceived the Bar-On model of emotional-social intelligence. Bar-On's model defined emotional-social intelligence as "a cross-section of interrelated emotional and social competencies, skills and facilitators that determine how effectively we understand and express ourselves, understand others and relate with them, and cope with daily demands" (p. ...
Thesis
Full-text available
Individuals’ career development represents a continuous adaptation process toward professional development and life goals. In this context, an increasing number of studies have recognized emotional intelligence as an important resource to cope with career-related challenges. This doctoral dissertation aims to address three objectives: questioning, extending, and integrating what we know about emotional intelligence in career development. This manuscript first proposes an original review of the limitations in the study of emotional intelligence in the career context. Second, the present findings develop a more nuanced picture of emotional intelligence, examine unexplored career-related outcomes, and present the results of a meta-analysis. This doctoral dissertation conveys significant theoretical and practical implications with regard to the concept of emotional intelligence but also in order to help individuals throughout their career development.
... For a neurotypical human, the process of facial expression recognition (FER) can be an easy and automatic transfer of information about the person being observed [14] [15]. There are three main categories of models in the literature to quantify affective facial behaviours. In the categorical model, following Ekman and Friesen [16], six prototypical facial emotions (happy, sad, anger, fear, surprise, disgust), alongside a neutral composition of facial muscles, are prominent non-verbal communication tools. ...
... Manual signals consist of signs and fingerspelling, while non-manual signals include, among others, body language and facial expressions (Liwicki & Everingham, 2009). The term "sign language" has been applied to a broad range of semiotic systems, from the emotional expression of men and women to the transmission and reception of truly linguistic structures (Darwin & Prodger, 1998). ...
Article
Full-text available
The ability to recognize our own feelings and those of others, and to manage emotions well in ourselves, is a major contributing factor to employees’ performance. Emotional intelligence is relevant for predicting employees’ work performance because most jobs require the ability to manage emotions. The objective of the study is to examine the impact of emotional intelligence on employees’ performance in the commercial banks in Dhangadhi City. This study adopted a quantitative research design and used descriptive as well as inferential statistical tools to analyze the data. 100 questionnaires were randomly administered to respondents, of which 82 were returned and used for analysis. Regression analysis was used to analyze the collected data. The study revealed a significant relationship between employees’ emotional intelligence and job performance, with a coefficient of determination between EI and job performance of 82 percent. The regression analysis also showed that the emotional intelligence factors positively impact the performance of employees in the organization. Hence, it is recommended that organizations introduce EI-related training programs and developmental activities in the workplace for employees at every level in order to create better management of talent and performance on the one hand, and more effective team, service, and relationship outcomes on the other.
Article
Full-text available
The present study used microdissection, histology, and microcomputed tomography (micro-CT) with the aims of determining the prevalence and patterns of the depressor septi nasi (DSN) and orbicularis oris (OOr) muscles attached to the footplate of the medial crus (fMC) of the major alar cartilage, focusing on their crossing fibers. The DSN and OOr attached to the fMC of the major alar cartilage were investigated in 76 samples from 38 embalmed Korean adult cadavers (20 males, 18 females; mean age 70 years). The DSN, OOr, or both were attached to the fMC. When the DSN ran unilaterally or was absent, some OOr fibers ascended to attach to the fMC instead of the DSN in 20.6% of the samples. Crossing fibers of the DSN or OOr attached to the fMC were found in 82.4% of the samples. Bilateral and unilateral crossing fibers were found in 32.4% and 50.0%, respectively, and no crossing fibers were found in 17.6%. The DSN and OOr that attached to the fMC could be categorized into six types according to presence of the DSN and the crossing patterns of the DSN and OOr. Anatomical findings of the DSN and OOr that attached to the fMC were confirmed in histology and micro-CT images. These findings offer insights on anatomical mechanisms that may underlie the dynamic pulling forces generated by muscles that attach to the fMCs and on evolutionary variation observed in human facial expressions. They can also provide useful information for guiding rhinoplasty of the nasal tip.
Article
Full-text available
Posits that comparisons of the aggressiveness of animals and man have exaggerated the relative nonviolence of animals. Through territorial or dominance behavior, animals not only do bodily harm to one another, but also condemn one another to poor reproductive prospects, starvation and death. A review of situations provoking aggression identifies the strongest stimulus as (a) a member of one's own kind and sex, (b) close by, (c) behaving aggressively, and (d) above all, a stranger. The avoidance of proximity, the evocation of actions incompatible with fighting, and the lowering of arousal tend to produce situations which ameliorate aggression. The speculation is offered that familiarity is the most important single factor in reducing the probability of animal aggression. (56 ref)
Preprint
Full-text available
Preclinical biomedical research depends on accurately detecting and quantifying rodent behavior. Current data-driven approaches, which segment free exploratory behavior into clusters, suffer from low statistical power due to multiple-testing, exhibit poor transferability across experiments, and fail to exploit the rich behavioral profiles of individual animals. We introduce a pipeline to capture each animal’s behavioral flow, yielding a single metric based on all observed transitions between clusters. By stabilizing these clusters through machine learning, we ensure data transferability, while dimensionality reduction techniques facilitate detailed analysis of individual animals. We provide a unique dataset of 443 behavior recordings of freely moving mice - including stress exposures, pharmacological and brain circuit interventions - to identify hidden treatment effects, reveal subtle variations on the level of individual animals, and detect brain processes underlying specific interventions. Our pipeline, compatible with popular clustering methods, significantly enhances statistical power and enables predictions of an animal’s future behavior.
Chapter
Human perception is shaped by the interplay and competition of the senses: by synaesthesia. This is bound to the body in a double sense. It (1) binds individual perception to the body of the perceiver(s) and is (2) the reason why we ‘primordially’ ‘apprehend’ the body of our counterpart in our own perception (Husserl/Schütz). In my paper, I attempt to extend and rework Plessner’s analysis of synaesthesia ‘protosociologically’ (Luckmann) and interaction-theoretically: into a preliminary theory of the pre-linguistic sense- and body/body-based constitution (not: construction!) of intersubjectivity as well as the resulting – predominantly pre-reflexive – subjectively shaped reciprocity and communication imaginaries. With this attempt I fulfill – at least to some extent – what Anne Honer once wished of me: the translation of the ‘philosophical’ into a hermeneutically sociologically grounded anthropology of knowledge. Keywords: Senses, Synesthesia, Perception, Anthropology, Communication, Sensations, Transcendence, Boundaries
Article
Full-text available
Classifying the correct emotion from different data sources such as text, images, videos, and speech has been an inspiring research area for researchers from various disciplines. Automatic emotion detection from videos and images is one of the most challenging tasks and has been analyzed using supervised and unsupervised machine learning methods. Deep learning has also been employed, with models trained on facial and body features using pose and landmark detectors and trackers. In this paper, facial and body features extracted by the OpenPose tool have been used for detecting the basic 6, 7, and 9 emotions from videos and images by a novel deep neural network framework which combines the Gaussian mixture model with CNN, LSTM, and Transformer to generate CNN-LSTM and CNN-Transformer models with and without Gaussian centers. The experiments, conducted using two benchmark datasets, namely FABO and CK+, showed that the proposed transformer model with 9 and 12 Gaussian centers and the video generation approach was able to achieve close to 100% classification accuracy on the FABO dataset, outperforming the other DNN frameworks for emotion detection. It reported over 90% accuracy for most combinations of features on both datasets, yielding a competitive framework for video emotion classification.
Chapter
The stress reaction system consists of two different levels, the soma, and the mind. In response to critical life events, so-called stressors, there is a coordinated and energizing response, which maintains physical integrity and psychic well-being. Stressful adverse experiences in early childhood are associated with decreased stress reactivity and increased impulsive behavior. In young children, regardless of the actual stressors, three different types of stress reactions are distinguished, based on their potential to cause lasting physiological disorders: a positive, a tolerable and a toxic stress. Prenatal stress, stress in infancy and in separation situations are discussed. Pain contains both sensory and affective dimensions. Pain and stress are processed in the same brain regions. Prefrontal cortical areas as well as subcortical areas contribute to the conscious awareness and the affective and cognitive evaluation of pain. Infants and toddlers have not yet acquired the ability to reflectively perceive and express their feelings in a stressful situation. An emotionally too violent or painful experience may be expressed by the body in behavioral manifestations, psychosomatic disorders, or developmental dysfunctions. Unrecognized, or poorly treated pain in childhood leads to serious and long-lasting negative consequences that persist into adulthood. Keywords: Stress response, Different types of stress, Separation stress, Pain, Pain expressions
Chapter
Full-text available
Multi-channel time-series datasets are popular in the context of human activity recognition (HAR). On-body device (OBD) recordings of human movements are often preferred for HAR applications not only for their reliability but as an approach for identity protection, e.g., in industrial settings. Contradictorily, gait activity is a biometric, as the cyclic movement is distinctive and collectable. In addition, the gait cycle has proven to contain soft-biometric information about human groups, such as age and height. Though general human movements have not been considered a biometric, they might contain identity information. This work investigates person and soft-biometric identification from OBD recordings of humans performing different activities using deep architectures. Furthermore, we propose the use of attribute representation for soft-biometric identification. We evaluate the method on four datasets of multi-channel time-series HAR, measuring the performance of person and soft-biometric identification and its relation to the performed activities. We find that person identification is not limited to gait activity. The impact of activities on identification performance was found to be training- and dataset-specific. Soft-biometric-based attribute representation shows promising results and emphasizes the necessity of larger datasets. Keywords: Person Identification, Human Activity Recognition, Soft-Biometrics, Explainable Artificial Intelligence, Deep Neural Network
Preprint
Full-text available
The nuanced detection of rodent behavior in preclinical biomedical research is essential for understanding disease conditions, genetic phenotypes, and internal states. Recent advances in machine vision and artificial intelligence have popularized data-driven methods that segment complex animal behavior into clusters of behavioral motifs. However, despite the rapid progress, several challenges remain: Statistical power typically decreases due to multiple testing correction, poor transferability of clustering approaches across experiments limits practical applications, and individual differences in behavior are not considered. Here, we introduce "behavioral flow analysis" (BFA), which creates a single metric for all observed transitions between behavioral motifs. Then, we establish a "classifier-in-the-middle" approach to stabilize clusters and enable transferability of our analyses across datasets. Finally, we combine these approaches with dimensionality reduction techniques, enabling "behavioral flow fingerprinting" (BFF) for individual animal assessment. We validate our approaches across large behavioral datasets with a total of 443 open field recordings that we make publicly available, comparing various stress protocols with pharmacologic and brain-circuit interventions. Our analysis pipeline is compatible with a range of established clustering approaches, it increases statistical power compared to conventional techniques, and has strong reproducibility across experiments within and across laboratories. The efficient individual phenotyping allows us to classify stress-responsiveness and predict future behavior. This approach aligns with animal welfare regulations by reducing animal numbers and enhancing the information extracted from experimental animals.
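The core of the single-metric idea above can be sketched as a transition-frequency vector over cluster labels. This is a simplified reading of "behavioral flow": it omits the classifier-stabilized clustering and dimensionality-reduction steps the pipeline adds on top.

```python
from collections import Counter

def behavioral_flow(cluster_labels, n_clusters):
    """Summarize one recording as the normalized frequency of every
    cluster-to-cluster transition, flattened into a single vector.
    cluster_labels is the per-frame sequence of behavioral motif IDs."""
    counts = Counter(zip(cluster_labels, cluster_labels[1:]))
    total = max(len(cluster_labels) - 1, 1)
    return [counts[(i, j)] / total
            for i in range(n_clusters)
            for j in range(n_clusters)]
```

Comparing animals then reduces to comparing these vectors (e.g., by distance or after projection to two dimensions), rather than testing each motif separately.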
Book
How many times have we come across claims such as "the left cerebral hemisphere is the logical one, the right the creative one", or sensationalist newspaper headlines about the discovery of the brain area for love, for language, or for who knows what other mental function? Nowadays, most neuroscientists would distance themselves from such simplistic statements, hastening to point out that the brain is more complex than that. And yet the practice of 'mapping' certain psychological categories onto particular neural areas has played, and perhaps still plays, an important role in organizing and reorganizing our knowledge of the mind and of its physical realization. But what does it mean to say that a cognitive function is "mapped" onto a neural area? And what are these cognitive functions to be mapped? And why map them onto areas (defined how, exactly?) rather than cells, networks of areas, or other sets of neurons? Philosophers addressing such questions have given rise to the lively debate on Cognitive Ontology. The aim of this volume is to present that debate through three examples: the history of Broca's area, and the more recent histories of research on the neural correlates of basic emotions and of the so-called "fusiform face area".
Article
Sentiment analysis, also known as sentiment research or sentiment mining, is the process of analyzing consumer-generated online content (be it text, images, or video) to determine mood. Images are a powerful medium through which people share their thoughts and experiences via social media. With the increase of social media users who regularly share memes, videos, and pictures, visual content analysis can help better understand users' views on a topic, problem, or thing. Real-time emotion recognition has been an active field of research in the last few years. Facial expressions convey non-verbal information in face-to-face interactions. The human-machine interface plays an important role in automatic face recognition, which has attracted more and more researchers since the early 1990s. Inspired by this challenge, this paper introduces the Light Emotion Recognition (LWER) system using facial images from the FER (Facial Emotion Recognition) dataset provided in the Kaggle Facial Expression Recognition competition. The emotions depicted in the images were divided into five categories: anger, fear, love, sadness, and happiness. Using a deep convolutional neural network (CNN), the system recognizes and interprets the semantic information contained in the facial images. The LWER is based on a two-part convolutional neural network: the first part removes the background from the picture, and the second part concentrates on facial feature vector extraction. Experimental results show that CNNs can be used effectively for image recognition and that the proposed method is efficient for emotion recognition.
Article
Full-text available
Ekman famously contended that there are different channels of emotional expression (face, voice, body), and that emotion recognition ability confers an adaptive advantage on the individual. Yet, still today, much emotion perception research is focussed on emotion recognition from the face, and few validated emotionally expressive full-body stimuli sets are available. Based on research on emotional speech perception, we created a new, highly controlled full-body stimuli set. We used the same-sequence approach rather than emotional actions (e.g., jumping for joy, recoiling in fear): one professional dancer danced 30 sequences of (dance) movements five times each, expressing joy, anger, fear, sadness, or a neutral state, one at each repetition. We outline the creation of a total of 150 such 6-s-long video stimuli that show the dancer as a white silhouette on a black background. Ratings from 90 participants (emotion recognition, aesthetic judgment) showed that the intended emotion was recognized above chance (chance: 20%; joy: 45%, anger: 48%, fear: 37%, sadness: 50%, neutral state: 51%), and that aesthetic judgment was sensitive to the intended emotion (beauty ratings: joy > anger > fear > neutral state, and sad > fear > neutral state). The stimuli set, normative values, and code are available for download.
Chapter
In today’s world, it is necessary to detect people’s emotions, as emotion recognition from facial expressions can be used for public safety. In addition to understanding human behavior, human emotion detection also covers identifying mental illnesses and artificially created human emotions. Accurately recognizing human emotions is still challenging, so we use Deep Learning or Machine Learning techniques. Pre-processing, face identification, feature extraction, and emotion classification are typically the four phases of the two standard techniques, which are based on geometry and appearance. The existing method suggested a Deep Learning architecture based on Convolutional Neural Networks (CNN) for emotion identification from images. The dataset used for testing is the Facial Emotion Recognition Challenge dataset (FER-2013), for which the accuracy rates of different architectures are 53.58%, 60.52%, and 64.4%, respectively. A Deep Convolutional Neural Network is used in this work to enhance emotion recognition. The performance of the proposed technique is evaluated with FER2013 and CK+. To get the best accuracy, we experimented with different depths, max-pooling layers, and alternative topologies, eventually reaching 89.42% accuracy. We use this high-accuracy architecture to demonstrate how real-time emotion identification works and obtain consistent results using a webcam. Keywords: Deep Convolutional Neural Network, FER, CK+, Facial expression, Emotion recognition
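The convolution and pooling operations underlying CNN-based feature extraction, as in the architecture described above, can be sketched minimally on a tiny grayscale image. This is a generic illustration of the building blocks, not the paper's architecture; input sizes and the kernel are made up for the example.

```python
def conv2d(img, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in most CNN
    libraries) of a small grayscale image with a kernel, both given
    as lists of lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(img) - kh + 1
    out_w = len(img[0]) - kw + 1
    return [[sum(img[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

def max_pool(feat, size=2):
    """Non-overlapping max pooling, shrinking each spatial dimension
    by the pool size."""
    return [[max(feat[i + a][j + b]
                 for a in range(size) for b in range(size))
             for j in range(0, len(feat[0]) - size + 1, size)]
            for i in range(0, len(feat) - size + 1, size)]
```

A real FER network stacks many such layers (with learned kernels and nonlinearities) before a classifier maps the pooled features to emotion labels.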
Article
Full-text available
In the past decades, it has been increasingly recognized that some areas of science, such as anthropology, have been plagued by racist, Western-centric, and/or sexist biases. Unfortunately, an acculturation process to racism and sexism has been occurring for generations leading to systemic inequities that will take a long time to disappear. Here, we highlight the existence of current examples of how racism, Western-centrism and sexism within: (1) the most popular anatomical atlases used in biological, anthropological and medical education; (2) prominent natural history museums and World Heritage Sites; (3) biological and anthropological scientific research publications; and (4) popular culture and influential children's books and educational materials concerning human biology and evolution.
Article
Perceiving facial expressions is an essential ability for infants. Although previous studies indicated that infants could perceive emotion from expressive facial movements, the developmental change of this ability remains largely unknown. To exclusively examine infants' processing of facial movements, we used point-light displays (PLDs) to present emotionally expressive facial movements. Specifically, we used a habituation and visual paired comparison (VPC) paradigm to investigate whether 3-, 6-, and 9-month-olds could discriminate between happy and fear PLDs after being habituated with a happy PLD (happy-habituation condition) or a fear PLD (fear-habituation condition). The 3-month-olds discriminated between the happy and fear PLDs in both the happy- and fear-habituation conditions. The 6- and 9-month-olds showed discrimination only in the happy-habituation condition but not in the fear-habituation condition. These results indicated a developmental change in processing expressive facial movements. Younger infants tended to process low-level motion signals regardless of the depicted emotions, and older infants tended to process expressions, which emerged in familiar facial expressions (e.g., happy). Additional analyses of individual difference and eye movement patterns supported this conclusion. In Experiment 2, we concluded that the findings of Experiment 1 were not due to a spontaneous preference for fear PLDs. Using inverted PLDs, Experiment 3 further suggested that 3-month-olds have already perceived PLDs as face-like stimuli.
Article
Full-text available
The correct identification of facial expressions is critical for understanding the intentions of others during social communication in the daily life of all primates. Here we used ultra-high-field fMRI at 9.4 T to investigate the neural network activated by facial expressions in awake New World common marmosets of both sexes, and to determine the effect of facial motion on this network. We further explored how the face-patch network is involved in the processing of facial expressions. Our results show that dynamic and static facial expressions activate face patches in temporal and frontal areas (O, PV, PD, MD, AD, and PL) as well as the amygdala, with stronger responses for negative faces, which were also associated with an increase in the monkeys' respiration rate. Processing of dynamic facial expressions involves an extended network recruiting additional regions not known to be part of the face-processing network, suggesting that facial motion may facilitate the recognition of facial expressions. We report for the first time in New World marmosets that the perception and identification of changeable facial expressions, vital for social communication, recruit face-selective brain patches also involved in face detection and are associated with increased arousal. SIGNIFICANCE STATEMENT: Recent research in humans and non-human primates has highlighted the importance of correctly recognizing and processing facial expressions to understand others' emotions in social interactions. The current study focuses on fMRI responses to emotional facial expressions in the common marmoset (Callithrix jacchus), a New World primate species whose social behavior shares several similarities with that of humans. Our results reveal that temporal and frontal face patches are involved in both basic face detection and facial expression processing. The specific recruitment of these patches by negative faces, associated with an increase in arousal, shows that marmosets process the facial expressions of their congeners, which is vital for social communication.
Article
Full-text available
This article describes the intonational patterns of five emotions (angry, sad, neutral, calm, and excited) in declarative utterances from an acted oral corpus of three female speakers from Querétaro, Mexico. The study is motivated by a brief review of research on the emotional prosody of Spanish and its relation to bioinformational theory. The analysis of the data yields a characterization of the tonal curve for each emotion, with values above perceptual thresholds, which can serve as a reference for future descriptive or perception studies.
Article
Full-text available
A large portion of formulaic expressions in Turkish consists of idioms. Idioms, which have a wide range of use, can be inflected in discourse and function as verbs. A significant portion of these language units, called idiomatic compound verbs, describe a mental process rather than a physical action. One of the most important subcategories of these elements, which are regarded as mental verbs, is emotion verbs. In this study, designed to identify and classify idiomatic emotion verbs in Turkey Turkish, the concepts of formulaic expression, idiom, idiomatic compound verb, emotion, and emotion verb are first discussed; the emotion verbs obtained from a survey of various dictionaries are then listed alphabetically by predicate. The identified emotion verbs were evaluated in two stages. In the first stage, the emotional expression carried by each verb was rated as positive (+) or negative (-); in the second stage, the emotion dominant in the verb was determined with reference to the basic emotions. The study identified 707 idiomatic compound verbs carrying emotional expression. Of these verbs, some of which reflect more than one emotion, 201 (27.6%) are positive, 507 (69.6%) are negative, and 20 (2.7%) carry either positive or negative emotional expression depending on the context in which they are used. In addition, 70 distinct emotions were identified among the verbs included in the study. The resulting picture is notable for the predominance of negative emotion verbs and of negative emotions such as sadness, anger, and fear. This suggests that in Turkish culture positive emotions are expressed relatively openly, whereas negative emotions are expressed implicitly through metaphors.