Conference Paper

Toward Machines With Emotional Intelligence

Authors:
Rosalind W. Picard

Abstract

For half a century, artificial-intelligence researchers have focused on giving machines linguistic and mathematical-logical reasoning abilities, modelled after the classic linguistic and mathematical-logical intelligences. This chapter describes new research that is giving machines skills of emotional intelligence. Machines have long been able to appear as if they have emotional feelings, but they are now being programmed to also learn when and how to display emotion in ways that enable them to appear empathetic or otherwise emotionally intelligent. They are now being given the ability to sense and recognize expressions of human emotion such as interest, distress, and pleasure, with the recognition that such communication is vital for helping them choose more helpful and less-aggravating behaviour. This chapter presents several examples illustrating new and forthcoming forms of machine emotional intelligence, highlighting applications together with challenges to their development.


... In this case, the comparison is between computers and a person with autism, referring to the lack of social skills and engagement. Emotional intelligence, also referred to as affective computing, implies two things: 1) the computer's comprehension of users' emotions and 2) the computer's display of emotions comprehensible by the user [3]. According to [4], affective computing is "computing that relates to, arises from, or deliberately influences emotions." ...
... Incorporating human emotion into co-adaptive systems is essential for a better usability experience. Emotional intelligence will enable machines to be more considerate of their users' feelings, leading to less frustration and higher productivity [3]. Recognizing human emotion is a preliminary step towards achieving co-adaptation. ...
... Emotion recognition can also be achieved through speech [23], eye tracking [26], and EEG [5]. For machines to perceive and comprehend human emotions is only one of the two implications of emotional intelligence [3]. The second is for the machine to physically display emotions relatable to the user and relevant to the current environment and situation. ...
Conference Paper
The ability of software systems to adapt to human input is a key element in the symbiosis of human-system co-adaptation, where humans and software-based systems work together in a close partnership to achieve synergetic goals. This seamless integration will eliminate the barriers between human and machine. A critical requirement for co-adaptive systems is the software system's ability to recognize human emotion, so that the system can detect and interpret users' emotions and adapt accordingly. There are numerous solutions that provide the technologies for emotion recognition. However, selecting an appropriate solution for a given task within a specific application domain can be challenging, and the vast variation between these solutions makes the selection even more difficult. This paper compares cloud-based emotion recognition services offered by Amazon, Google, and Microsoft. These services detect human emotion through facial expression recognition using computer vision. The focus of this paper is to measure the detection accuracy of these services; accuracy is calculated based on the highest confidence rating returned by each service. All three services were tested with the same dataset. The paper concludes with findings and recommendations based on our comparative analysis of these services.
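The scoring rule described above (taking each service's single highest-confidence label) is simple to reproduce. Below is a minimal Python sketch under assumed data shapes; the response format and the `accuracy_from_top_confidence` helper are illustrative, not any vendor's actual API.

```python
# A hypothetical, normalized view of what a cloud face-emotion API returns:
# for each image, a list of (emotion_label, confidence) pairs.
def accuracy_from_top_confidence(responses, ground_truth):
    """Count an image as correct when the single highest-confidence
    label matches the annotated emotion."""
    correct = 0
    for image_id, detections in responses.items():
        top_label, _ = max(detections, key=lambda d: d[1])
        if top_label == ground_truth[image_id]:
            correct += 1
    return correct / len(ground_truth)

ground_truth = {"img1": "happy", "img2": "sad"}
service_a = {"img1": [("happy", 0.91), ("neutral", 0.07)],
             "img2": [("neutral", 0.55), ("sad", 0.40)]}
print(accuracy_from_top_confidence(service_a, ground_truth))  # 0.5
```

Running the same function over each service's normalized responses on one shared dataset yields directly comparable accuracy figures.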
... The idea of computers that can anticipate our emotions and react accordingly, first envisioned in the early 2000s [17,20], is potentially attractive in many domains, such as entertainment, education, health care, finance, and services. However, few such applications have emerged, due to various challenges [17]. Sensors of physiological metrics are complicated and difficult to deploy or wear outside of the lab. ...
Conference Paper
We explore the concept of interactive technology that is implicitly controlled by emotions, via wearable physiology sensors. We describe a proof-of-concept emotion-reactive stock trading software that interrupts trades that appear to be entered under an unusual amount of stress. We describe a galvanic skin response (GSR) sensor armband that broadcasts stress events via Bluetooth to a prisoner's dilemma game application on an Android phone, which alters its behavior when the user is stressed. We carried out a pilot evaluation of this system's effects on the consistency of decisions made under stress.
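The interruption logic such a system needs can be stated compactly. The following sketch is illustrative only, assuming a stream of skin-conductance readings and a hypothetical confirm callback; the threshold and baseline handling in the actual prototype may differ.

```python
STRESS_THRESHOLD = 1.5  # assumed multiple of the user's resting GSR level

def is_stressed(recent_gsr, baseline):
    """Flag stress when recent skin conductance exceeds the baseline."""
    return (sum(recent_gsr) / len(recent_gsr)) > STRESS_THRESHOLD * baseline

def place_order(order, recent_gsr, baseline, confirm):
    """Hold a trade entered under apparent stress until it is confirmed."""
    if is_stressed(recent_gsr, baseline):
        if not confirm(f"High stress detected. Submit {order!r} anyway?"):
            return "held"
    return "submitted"

print(place_order("SELL 100 ACME", [2.4, 2.6, 2.5], baseline=1.2,
                  confirm=lambda msg: False))  # -> "held"
```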
... Despite its name, it does not build on the philosophical and cultural-studies traditions of the concept of affect, but is situated in the engineering sciences and combines psychology with information-technology research. The popularity of this research field in the 2000s shows that emotions have, at least in some respects, become part of technology research. The field's pioneer Rosalind Picard (2004) has used Clippy, the animated paper-clip-shaped assistant that appeared in Microsoft's Office suite in 1996, as an example of a machine's failed emotional communication. Clippy could neither recognize the computer user's emotions nor react to them. ...
... Instead, it behaved in an attention-grabbing, playful manner in every situation, for example dancing about and winking, which only added to the irritation it caused its users. As one solution to the emotional problems between machines and users, researchers have studied and developed machines' ability to acquire emotional information from their users' speech (Picard 2004). A listening capability, whether combined with a speech capability or separate from it, in turn has a considerable effect on the machine-user relationship and on the user's emotional reactions. ...
Article
Talk of machines: Theoretical Perspectives to Speech Technologies. Machine-produced speech is a special technological feature, both technically and socially. This article proposes treating talking machines as a sociotechnical category of their own, in order to pay due attention to the many questions that human-like speech raises for the relationship between machine and user. Inanimate objects have been made to talk since prehistoric times by diverse techniques of acoustic transmission, from the 18th century onwards by means of mechanical speech synthesis, and from the late 19th century on also by making use of voice-recording technologies. During the 20th century, talking machines became part of the modern, technological soundscape, and talking robots are still being showcased as promises of future machine intelligence. However, talking machines easily induce negative feelings, such as annoyance. Feelings of horror and uncanniness have often been brought up as problems in human-robot interaction, but the role of speech in these encounters has not attracted much attention. Speech as a form of agency places the human-machine relationship under negotiation in specific ways, which cultural studies and historical research can help to understand.
... This shift began most notably with Rosalind Picard's (1997) work in the mid-1990s on 'affective computing.' Since then, the development of increasingly interactive computer systems, including social robots, has created artificial agents able to process and react to a variety of emotional and social scenarios (Picard, 2008). As Atanasoski and Vora (2019, p. 112) explain, '[p]rogramming emotions into robots is a final frontier in robotics because emotion is increasingly viewed as a sign of intelligence more complex than that displayed by most computers and commonly used robots in industry.' ...
Article
This article examines Artificial Emotion Intelligence (AEI) and its application in social robots. It argues that AEI and social robotics intensify practices of data-capture and algorithmic governance by extending the spatial reach of digital surveillance deeper into intimate spaces and individual psyches, with the goal of manipulating human emotional and behavioural responses. The analysis demonstrates the need to more thoroughly engage the multiplicity of theoretical and applied approaches to building artificial intelligence, to question assumptions as to the kinds of intelligence being created, and to consider how a diversity of AI systems infiltrate and reshape the spaces of everyday life.
... Recently, machine emotional intelligence has become a challenge for researchers in artificial intelligence and other fields [19], [20]. A previous study [7] is related to predicting bug severity by using emotion analysis of a bug report. ...
Article
Many software development teams tend to focus on maintenance activities in general. Recently, many studies on bug severity prediction have been proposed to help a bug reporter determine severity. However, they do not consider the reporter's expression of emotion appearing in the bug report when they predict the bug severity level. In this paper, we propose a novel approach to severity prediction for reported bugs by using emotion similarity. First, we not only compute an emotion-word probability vector using a smoothed unigram model (UM), but also use the new bug report to find bug reports with similar emotion using Kullback-Leibler divergence (KL-divergence). Then, we introduce a new algorithm, Emotion Similarity (ES)-Multinomial, which modifies the original Naïve Bayes Multinomial algorithm. We train the model on emotion bug reports using ES-Multinomial. Finally, we can predict the bug severity level of the new bug report. To compare performance in bug severity prediction, we select related studies, including Emotion Words-based Dictionary (EWD)-Multinomial, Naïve Bayes Multinomial, and another prior approach, as baselines on open source projects (e.g., Eclipse, GNU, JBoss, Mozilla, and WireShark). The results show that our approach outperforms the baselines and can reflect reporters' emotional expressions during bug reporting.
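To make the similarity step concrete, here is a small sketch of KL-divergence between smoothed unigram emotion-word distributions, in the spirit of the approach described; the five-word lexicon and tokenization are placeholders, not the paper's dictionary.

```python
import math
from collections import Counter

EMOTION_WORDS = ["crash", "annoying", "terrible", "blocker", "fails"]  # illustrative

def emotion_distribution(tokens, alpha=1.0):
    """Laplace-smoothed unigram probabilities over the emotion lexicon."""
    counts = Counter(t for t in tokens if t in EMOTION_WORDS)
    total = sum(counts.values()) + alpha * len(EMOTION_WORDS)
    return [(counts[w] + alpha) / total for w in EMOTION_WORDS]

def kl_divergence(p, q):
    """Smaller value = the two reports have more similar emotion profiles."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

new_report = emotion_distribution("the app fails and the crash is terrible".split())
old_report = emotion_distribution("minor annoying typo".split())
print(kl_divergence(new_report, old_report))
```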
... Providing computers with emotional understanding alongside their current mathematical-logical capabilities is considered a breakthrough in creating more intelligent and less aggravating behaviors in Human-Computer Interaction (HCI) applications [1]. An example of an intelligent HCI is exploiting "feeling computers" to enhance the distance-education experience. ...
Article
There is an increasing consensus among researchers that making a computer emotionally intelligent with the ability to decode human affective states would allow a more meaningful and natural way of human-computer interactions (HCIs). One unobtrusive and non-invasive way of recognizing human affective states entails the exploration of how physiological signals vary under different emotional experiences. In particular, this paper explores the correlation between autonomically-mediated changes in multimodal body signals and discrete emotional states. In order to fully exploit the information in each modality, we have provided an innovative classification approach for three specific physiological signals including Electromyogram (EMG), Blood Volume Pressure (BVP) and Galvanic Skin Response (GSR). These signals are analyzed as inputs to an emotion recognition paradigm based on fusion of a series of weak learners. Our proposed classification approach showed 88.1% recognition accuracy, which outperformed the conventional Support Vector Machine (SVM) classifier with 17% accuracy improvement. Furthermore, in order to avoid information redundancy and the resultant over-fitting, a feature reduction method is proposed based on a correlation analysis to optimize the number of features required for training and validating each weak learner. Results showed that despite the feature space dimensionality reduction from 27 to 18 features, our methodology preserved the recognition accuracy of about 85.0%. This reduction in complexity will get us one step closer towards embedding this human emotion encoder in the wireless and wearable HCI platforms.
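The correlation-based reduction step lends itself to a short illustration. The sketch below is a generic greedy variant (drop one feature of any highly correlated pair), offered under the assumption that it captures the spirit, not the paper's exact procedure.

```python
import numpy as np

def prune_correlated(X, threshold=0.9):
    """X: (n_samples, n_features) array. Keep a feature only if its
    absolute correlation with every already-kept feature is below
    the threshold, removing redundant inputs to the weak learners."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep

rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
X = np.hstack([base,                                            # feature 0
               base * 1.01 + 0.001 * rng.normal(size=(200, 1)),  # near-copy
               rng.normal(size=(200, 1))])                      # independent
print(prune_correlated(X))  # -> [0, 2]: the near-duplicate is dropped
```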
... Technology has achieved much in terms of recognizing emotions and measuring physiological changes (e.g., due to stress or frustration), and this information can be used to provide personalized feedback or adjust a machine's performance (Picard, 2002a; Picard, Vyzas, & Healey, 2001). The development of this computational account of emotion suggests that emotions can be understood by machines somewhat reliably and thus can be reduced to algorithms to some degree (Picard, 2002b, 2007). This ability to understand human emotion has clear implications for product development and marketing (e.g., Ahn & Picard, 2014a), but also for making human-computer interaction more relevant and meaningful. ...
Article
Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems—these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.
... It is fair to say that automated emotion detection is still unusual, with the exception of sentiment analysis, which is an established practice for brands and organisations seeking to understand public feeling about products, brands, policies, competitors and current affairs (Andrejevic, 2013). Nevertheless, the premise of empathic media has roots in the mid-1990s with the academic work of Rosalind Picard on affective computing, work that continues today (Picard, 2007). In advertising, data collection about emotions is used in two separate ways. ...
Article
Full-text available
Drawing on interviews with people from the advertising and technology industry, legal experts and policy makers, this paper assesses the rise of emotion detection in digital out-of-home advertising, a practice that often involves facial coding of emotional expressions in public spaces. Having briefly outlined how bodies contribute to targeting processes and the optimisation of the ads themselves, it progresses to detail industrial perspectives, intentions and attitudes to data ethics. Although the paper explores possibilities of this sector, it pays careful attention to existing practices that claim not to use personal data. Centrally, it argues that scholars and regulators need to pay attention to the principle of intimacy. This is developed to counter weaknesses in privacy, which is typically based on identification. Having defined technologies, use cases, industrial perspectives, legal views and arguments about jurisprudence, the paper discusses this ensemble of perspectives in light of a nationwide survey of how UK citizens feel about the potential for emotion detection in out-of-home advertising.
... This will require not only advancing the development of IATs for cognitive and physical assistance but for emotional support as well. As our results show, intelligent emotional assistants represent a minor proportion of IATs currently developed; however, developments in artificial emotional intelligence could dramatically accelerate the successful integration of IATs into standard care. As people with dementia often present emotional disturbances such as anxiety, depression, agitation, and distress [47], IATs programmed to learn "when and how to display emotion in ways that enable the machine to appear empathetic or otherwise emotionally intelligent" will be crucial for the future of care [48]. As the list of current applications shows, IATs are not only increasing in number but also in variety. While the first generation of IATs was primarily focused on promoting safety through tracking, alarm prompting, and remote monitoring (e.g., fall detectors and GPS trackers), current IAT applications are designed to support a number of activities including communication, telecare, and entertainment. ...
Article
Full-text available
Intelligent assistive technologies (IATs) have the potential of offering innovative solutions to mitigate the global burden of dementia and provide new tools for dementia care. While technological opportunities multiply rapidly, clinical applications are rare, as the technological potential of IATs remains inadequately translated into dementia care. In this article, the authors present the results of a systematic review and the resulting comprehensive technology index of IATs with application in dementia care. Computer science, engineering, and medical databases were extensively searched and the retrieved items were systematically reviewed. For each IAT, the authors examined its technological type, application, target population, model of development, and evidence of clinical validation. The findings reveal that the IAT spectrum is expanding rapidly in volume and variety over time, and encompasses intelligent systems supporting various assistive tasks and clinical uses. At the same time, the results confirm the persistence of structural limitations to successful adoption, including a partial lack of clinical validation and insufficient focus on patients' needs. This index is designed to orient clinicians and relevant stakeholders involved in the implementation and management of dementia care to the current capabilities, applications, and limitations of IATs and to facilitate the translation of medical engineering research into clinical practice. In addition, a discussion of the major methodological challenges and policy implications for the successful and ethically responsible implementation of IATs into dementia care is provided.
... In one project, the building of a computerized learning companion, two of the key affective states found in the data are interest and boredom, two states that are not on most theorists' "basic emotions" lists. However, discriminating these states is vital to the machine's ability to adapt the pedagogy to help keep the learning experience going, and our model addresses them [12]. Emotional intelligence can be affected by geography, society, people, culture, science, etc. Human emotion can also be predicted by examining various natural phenomena such as heartbeat, facial expression, skin characteristics, eyebrow contraction and relaxation, blood pressure, and various speech and voice patterns. There has been substantial research success in sensing and analyzing these types of human behavior, which can provide data for machine learning. ...
Conference Paper
Full-text available
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end on tasks such as object recognition and computer games, achieving performance nearly equal to humans. Object recognition systems such as those of Facebook and Google are quite good, but they do not recognize the way humans do: machines are trained thousands and thousands of times yet cannot recognize and predict new or other possible instances of an object. A human can recognize an object from its back or side view alone, which is difficult for a machine. Humans can recognize and understand a situation from the speech or voices they hear, can predict and sense various poses and actions of an object, and can read a situation from people's faces: a bright face is taken to indicate a happy situation, and weeping a sad one. This paper focuses on how a machine can likewise take the right decision based on image, pose or action, voice, and sound, as humans and other animals do.
... As explained in "Affective Computing" [3], emotions can be considered a factor in designing a usable system that provides a more positive and comfortable experience to the user [4]. Emotional intelligence has been used when designing intelligent robots [5], user interfaces [6] and household appliances [7]. Also, the emotions of a person depend on several factors such as relationships, mental health, personality, and context [4]. ...
Thesis
In the field of Human-Computer Interaction (HCI), improving the User Experience (UX) of mobile devices has become a necessity due to the emergence of smart technologies and the popularity of using mobile devices in day-to-day life rather than traditional desktop systems. The main aim of this research is to develop a model for a mobile device that can suggest adaptive functionalities based on the current user emotion and the context. To the best of our knowledge, there are no systems that provide not only adaptive interfaces but also adaptive functionalities within a mobile device, which would enhance the acceptability and usability of that particular system. As a proof of concept, a keyboard named "Emotional Keyboard" was developed iteratively through five prototypes using Evolutionary Prototyping. As the methodology, Action Research together with User-Centered Design (UCD) was followed, which also included two user surveys. Initial decisions were taken after conducting the first survey; Prototypes 1, 2 and 3 were then developed and evaluated with the participation of 40 users. Prototype 3 incorporated an Artificial Neural Network (ANN), trained using the data collected during the evaluations of Prototypes 1 and 2, which can decide the optimal emotion by combining the emotions detected from facial expressions and text. Consequently, Prototypes 4 and 5 were developed, which can suggest the most affective function based on the emotion and the context (location, time and user activity), by incorporating the data collected in the second survey and building a "Preference Tree" which consists of the probabilities of choosing functions while also considering frequently used functions. The evaluation of Prototypes 4 and 5 was carried out with the participation of 18 users, where individual and general analyses were performed and showed that over time the model was able to correctly suggest adaptive functions. This evaluation yields the conclusion of the research, paving the way to an "Adaptive System with User Control" to improve the acceptability and usability of a mobile device, in line with the research aim.
... Thus, one can have an approximate idea of the type of emotion a person is feeling, especially whether it is a positive or negative one. Since Ekman et al. [61], we know that there are specific arousal patterns for each of the six basic emotions they described and considered universal across all cultures (anger, disgust, fear, happiness, sadness and surprise), and with the proper sensors one can distinguish between them reliably [62][63][64]. Facial expressions are also fundamental for the correct interpretation of emotions. ...
Chapter
Full-text available
This chapter analyzes the ethical challenges in healthcare when introducing medical machines able to understand and mimic human emotions. Artificial emotions is still an emergent field in artificial intelligence, so we devote some space in this paper to explaining what they are and how we can have a machine able to recognize and mimic basic emotions. We argue that empathy is the key emotion in healthcare contexts. We discuss what empathy is and how it can be modeled for inclusion in a medical machine. We consider types of medical machines (telemedicine, care robots and mobile apps), describe the main machines that are in use, and offer some predictions about what the near future may bring. The main ethical problems we consider in machine medical ethics are: privacy violations (due to online patient databases), how to deal with error and responsibility concerning machine decisions and actions, social inequality (as a result of people being removed from an e-healthcare system), and how to build trust between machines, patients, and medical professionals.
... Humans can assess emotional state with varying degrees of accuracy, and researchers are making progress in giving computers similar abilities [15][16][17][18]. ...
... The current rapid development of the neurocognitive sciences and new discoveries about the mechanisms of natural intelligence trigger new insights and opportunities in the field of biologically inspired cognitive systems. It is now more evident that emotions play a significant role in natural intelligence and adaptive behavior [2,8]. We can still point to the lack of effective universal models of emotional mechanisms that could be implemented as artificial cognitive architectures. ...
Chapter
In this paper we present the following hypothesis: the neuromodulatory mechanisms that control the emotional states of mammals could be translated and re-implemented in a computer by means of controlling the computational performance of a hosted computational system. In our specific implementation we simulate a fear-like state based on the three-dimensional neuromodulatory model of affects (here, the basic inborn emotional states) inherited from the work of Hugo Lövheim. We have managed to simulate 1000 ms of activity of the dopamine system using the NEST Neural Simulation Tool, with the rat brain as the model. We also present the results of that simulation and evaluate them to validate the overall correctness of our hypothesis.
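For readers unfamiliar with the underlying idea, a toy sketch may help: an affective state as a point in a three-dimensional neuromodulator space, classified by its nearest labelled corner. The coordinates below are invented placeholders for illustration; they are not Lövheim's published assignments, and the paper's actual implementation is a spiking simulation in NEST, not a lookup.

```python
import numpy as np

# Axes: (serotonin, dopamine, noradrenaline). All values hypothetical.
STATES = {
    "calm": np.array([1.0, 0.5, 0.0]),  # placeholder coordinates
    "fear": np.array([0.0, 0.5, 1.0]),  # placeholder coordinates
    "joy":  np.array([1.0, 1.0, 0.0]),  # placeholder coordinates
}

def classify(levels):
    """Map current neuromodulator estimates to the nearest named state."""
    p = np.asarray(levels, dtype=float)
    return min(STATES, key=lambda s: np.linalg.norm(STATES[s] - p))

print(classify([0.1, 0.6, 0.9]))  # -> "fear" under these placeholders
```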
... With measurement tools such as cameras [13,14] and physiological sensors (brain, heart, muscle, etc.) [15], we will be able to more accurately classify emotions. Investigation into this can help us advance how these measurements obtained on mobile devices (i.e., phones, tablets, watches, etc.) can be used for user feedback for self-regulation. ...
Conference Paper
In 1997 Rosalind Picard introduced fundamental concepts of affect recognition [1]. Since that time, multimodal interfaces and data sources such as brain-computer interfaces (BCIs), RGB and depth cameras, physiological wearables, multimodal facial data and physiological data have been used to study human emotion. Much of the work in this field focuses on a single modality to recognize emotion. However, there is a wealth of information available for recognizing emotions when incorporating multimodal data. Considering this, the aim of this workshop is to look at current and future research activities and trends for ubiquitous emotion recognition through the fusion of data from various multimodal, mobile devices.
... Yet prototypes of this kind could be useful both for applied purposes and for basic research. Much information could be communicated from machines to humans by adopting an "emotional" code (an attempt in robotics was made by Picard et al. 2002). On the other hand, building machines that communicate an emotional state could help researchers refine their theoretical/explanatory models of the genesis and production of facial expressions. ...
... handling an upset customer). However, this capability is in the early stages of development (Picard, 2007). ...
Article
Full-text available
Purpose: Service robotics, a branch of robotics that entails the development of robots able to assist humans in their environment, is of growing interest in the hospitality industry. Designing effective autonomous service robots, however, requires an understanding of Human-Robot Interaction (HRI), a relatively young discipline dedicated to understanding, designing, and evaluating robotic systems for use by or with humans. HRI has not yet received sufficient attention in hospitality robotic design, much like Human-Computer Interaction (HCI) in property management system design in the 1980s. This article proposes a set of introductory HRI guidelines with implementation standards for autonomous hospitality service robots. Design/methodology/approach: A set of key user-centered HRI guidelines for hospitality service robots was extracted from 52 research articles. These are organized into service performance categories to provide more context for their application in hospitality settings. Findings: Based on an extensive literature review, this article presents some HRI guidelines that may drive higher levels of acceptance of service robots in customer-facing situations. Deriving meaningful HRI guidelines requires an understanding of how customers evaluate service interactions with humans in hospitality settings and to what degree those evaluations will differ with service robots. Originality/value: Robots are challenging assumptions about how hospitality businesses operate. They are being increasingly deployed by hotels and restaurants to boost productivity and maintain service levels. Effective HRI guidelines incorporate user requirements and expectations in the design specifications. Compilation of such information for designers of hospitality service robots will offer a clearer roadmap for them to follow.
... Affective computing is an imperative topic for Human-Computer Interaction, improving the quality of human-computer communication and advancing the intelligence of the computer [25]. User emotions and emotional communication between the system and the user [19] can be used when designing intelligent robots [20], user interfaces [4] and household appliances [26]. It has been shown that emotions can be incorporated when designing more usable and affective systems [28]. ...
Conference Paper
Affective computing is an imperative topic for Human-Computer Interaction, where user emotions and emotional communication can be utilized to improve the usability of a system. Several strategies are available to detect user emotions, but identifying the most suitable and compatible strategy for detecting emotions on mobile devices remains an open question. Multimodal emotion recognition paves the path to detecting emotions by combining two or more strategies in order to identify the most meaningful emotion. Emotion identification through facial expressions and through text analytics has each achieved high accuracy, but combining them and applying them practically in the context of a mobile environment remains to be done. Three prototypes were developed using evolutionary prototyping, which can detect emotions from facial expressions and text data using state-of-the-art APIs and SDKs; the base of the prototypes is a keyboard known as the "Emotional Keyboard," which is compatible with Android devices. Evaluations of Prototypes 1 and 2 have been performed based on participatory design and reviewed the compatibility of emotion identification through facial expressions and text data in the mobile context. Evaluation of Prototype 3 should be done in the future, and a confusion matrix should be built to verify the accuracies by cross-checking against the training and validation accuracies obtained when developing the neural network.
... The process of dividing the speech signal into frames and extracting a large number of features from each frame takes time. Therefore, several studies have reported that utterance-level features outperform frame-level features in terms of classifier time, accuracy and efficiency [147,176,202]. However, a study by Nwe et al. [139] concluded that utterance-level features are not suitable for distinguishing emotions that have similar arousal. ...
Article
Full-text available
Speech emotion recognition (SER) systems identify emotions from the human voice in areas such as smart healthcare, driving, call centers, automatic translation systems, and human-machine interaction. In the classical SER process, discriminative acoustic feature extraction is the most important and challenging step, because discriminative features influence classifier performance and decrease computational time. Nonetheless, current handcrafted acoustic features suffer from limited capability and accuracy when constructing a SER system for real-time implementation. Therefore, to overcome the limitations of handcrafted features, a variety of deep learning techniques have in recent years been proposed and employed for automatic feature extraction in the field of emotion prediction from speech signals. However, to the best of our knowledge, no in-depth review study is available that critically appraises and summarizes the existing deep learning techniques with their strengths and weaknesses for SER. Hence, this study aims to present a comprehensive review of deep learning techniques, their uniqueness, benefits and limitations for SER. Moreover, this review also presents speech processing techniques, performance measures and publicly available emotional speech databases, and discusses the significance of the findings of the primary studies. Finally, it presents open research issues and challenges that need significant research efforts and enhancements in the field of SER systems.
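The frame-level versus utterance-level distinction raised in the citing text above is easy to show in code. A minimal sketch, assuming librosa is available and using a synthetic tone in place of a real recording: frame-level MFCCs are collapsed into one utterance-level vector via summary statistics.

```python
import numpy as np
import librosa

sr = 16000
# One second of a 220 Hz tone stands in for an utterance.
y = 0.1 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr).astype(np.float32)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # frame-level: (13, n_frames)
utterance_vec = np.concatenate([mfcc.mean(axis=1),  # utterance-level: (26,)
                                mfcc.std(axis=1)])

print(mfcc.shape, utterance_vec.shape)
```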
... The abilities of social robots to sense and respond to their environments "in a socially acceptable fashion"-including through active forms of communication with humans-raise questions about reductive understandings of AI as a purely calculative form of intelligence. AI and robotics engineers increasingly discuss questions of social and emotional intelligences (Barchard et al., 2020;Picard, 2008), while behavior-based robotics has long understood intelligence as emergent through affective, embodied encounters in the material world (Brooks, 1999). ...
Article
Full-text available
This short commentary calls for further geographic engagement with emerging trends in social robotics and human-robot interaction. While the proliferation of social robots in the spaces of everyday life raises numerous empirical, ethical, and political questions, this paper argues that it also presents an opportunity to prompt theoretical debate around questions of space, intelligence, affect and emotion, and the ‘human.’
... In parallel, trust is also relevant if we want to build social artificial agents that interact alongside people (e.g. robo-advisors, co-working robots, assistive robots, etc.) and take responsible roles in our society [4,5]. A lesson learned from previous research (e.g. ...
Article
Full-text available
Understanding human trust in machine partners has become imperative due to the widespread use of intelligent machines in a variety of applications and contexts. The aim of this paper is to investigate whether human beings trust a social robot, i.e. a human-like robot that embodies emotional states, empathy, and non-verbal communication, differently than other types of agents. To do so, we adapt the well-known economic trust game proposed by Charness and Dufwenberg (2006) to assess whether receiving a promise from a robot increases human trust in it. We find that receiving a promise from the robot increases the trust of the human in it, but only for individuals who perceive the robot as very similar to a human being. Importantly, we observe a similar pattern in choices when we replace the humanoid counterpart with a real human, but not when it is replaced by a computer-box. Additionally, we investigate participants' psychophysiological reactions in terms of cardiovascular and electrodermal activity. Our results highlight an increased psychophysiological arousal when the game is played with the social robot compared to the computer-box. Taken together, these results strongly support the development of technologies enhancing the humanity of robots.
... Emotionally intelligent computers have clear advantages and applications in AmI. Picard [2000, 2007] describes the use of physiological signals and explicit user self-reports to help computers recognize emotion. The same papers also report how different types of human-computer interaction may affect human activity; for example, people may interact for longer periods with the computer if it appears to understand and empathize with their emotions. ...
Article
Full-text available
In this article we survey ambient intelligence (AmI), including its applications, some of the technologies it uses, and its social and ethical implications. The applications include AmI at home, care of the elderly, healthcare, commerce, and business, recommender systems, museums and tourist scenarios, and group decision making. Among technologies, we focus on ambient data management and artificial intelligence; for example planning, learning, event-condition-action rules, temporal reasoning, and agent-oriented technologies. The survey is not intended to be exhaustive, but to convey a broad range of applications, technologies, and technical, social, and ethical challenges.
... The instrument must be easy for the subject to wear without limiting normal activity or causing additional distress [96]. An instrument with good usability can bring a positive experience to the subject [97]. Portable instruments open a new path to the non-intrusive assessment of emotions [98]. ...
Article
Full-text available
Recognition of dichotomous emotional states such as happy and sad plays an important role in many aspects of human life. Existing literature has recorded diverse attempts at extracting physiological and non-physiological traits to record these emotional states. Selection of the right instrumental approach for measuring these traits plays a critical role in emotion recognition. Moreover, various stimuli have been used to induce emotions. Therefore, there is a current need for a comprehensive overview of instrumental approaches and their outcomes for the new generation of researchers. In this direction, this study surveys the instrumental approaches to discriminating happy and sad emotional states elicited using audio-visual stimuli. A comprehensive literature review is performed using the PubMed, Scopus, and ACM digital library repositories. The reviewed articles are classified with respect to i) stimulation modality, ii) acquisition protocol, iii) instrumentation approaches, iv) feature extraction, and v) classification methods. In total, 39 research articles were published on the selected topic of instrumental approaches to differentiating dichotomous emotional states using audio-visual stimuli between January 2011 and April 2021. The majority of the papers used physiological traits, namely electrocardiogram, electrodermal activity, heart rate variability, photoplethysmogram, and electroencephalogram based instrumental approaches for recognizing the emotional states. The results show that only a few articles have focused on audio-visual stimuli for the elicitation of happy and sad emotional states. This review is expected to seed research in the areas of standardization of protocols, enhancement of the diagnostic relevance of these instruments, and extraction of more reliable biomarkers.
Chapter
Historically, the metaphor of the iron cage, as a key component of Weber's (1978) sociological imagination, has played a central role in organization studies. It did so both in its initial role in the sociology of bureaucracy and in its reinterpretation in institutional terms by subsequent theorists such as DiMaggio and Powell (1983). More recently, iron bars have given way to transparent liquidity as a dominant metaphor. The implications of this shift for the analysis of organization are the subject of this chapter. We argue that a key technology of the liquidly modern organizational self is that of emotional intelligence and that, while this subject has been much written about, it has not been addressed in terms of its organizational effects on subjects. Technologies of the self are increasingly being developed that represent the possibility of a fusion of affective computing and emotional intelligence, generating new issues for research.
Chapter
Emotion classification based on electroencephalogram (EEG) signals is a relatively new area of research in the development of brain-computer interface (BCI) systems, with challenging issues such as the induction of emotional states and the extraction of features in order to obtain optimum classification of human emotions. An emotion classification system based on BCI can be useful in many areas, such as entertainment, education, and health care. This chapter presents a new method for human emotion classification using the multiwavelet transform of EEG signals. The EEG signal contains useful information related to different emotional states, which helps us to understand the psychology and neurology of the human brain. The features, namely a ratio-of-norms measure, a Shannon entropy measure, and a normalized Renyi entropy measure, are computed from the sub-signals generated by multiwavelet decomposition of the EEG signals. These features have been used as input to a multiclass least squares support vector machine (MC-LS-SVM), together with radial basis function (RBF), Mexican hat wavelet, and Morlet wavelet kernel functions, for classification of human emotions from EEG signals. The classification performance of the proposed method was determined by computing the classification accuracy, ten-fold cross-validation, and the confusion matrix. The proposed method provided a classification accuracy of 84.79% for classification of the human emotions happy, neutral, sadness, and fear from EEG signals with the Morlet wavelet kernel function of the MC-LS-SVM. An audio-video stimulus was used to induce the emotions in the EEG signals. Experimental results are presented to show the effectiveness of the proposed method for classification of human emotions from EEG signals.
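Two of the named features are compact enough to sketch directly. The code below computes a Shannon entropy and a Renyi entropy over a signal's normalized energy distribution; in the actual method these are applied to sub-signals from the multiwavelet decomposition, and the exact normalization may differ.

```python
import numpy as np

def shannon_entropy(x):
    """Shannon entropy of the normalized energy distribution of x."""
    e = np.asarray(x, dtype=float) ** 2
    p = e / e.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def renyi_entropy(x, alpha=2.0):
    """Renyi entropy of order alpha over the same distribution."""
    e = np.asarray(x, dtype=float) ** 2
    p = e / e.sum()
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

rng = np.random.default_rng(1)
sub_signal = rng.normal(size=256)  # stand-in for one decomposed sub-signal
print(shannon_entropy(sub_signal), renyi_entropy(sub_signal))
```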
Chapter
Human teachers have capabilities that are still not completely uncovered and reproduced into artificial tutoring systems. Researchers have nevertheless developed many ingenious decision mechanisms which obtain valuable results. Some inroads into natural artificial intelligence have even been made, then abandoned for tutoring systems because of the complexity involved and the computational cost. These efforts toward naturalistic systems are noteworthy and still in general use. In this chapter, we describe how some of this AI is put to work in artificial tutoring systems to reach decisions on when and how to intervene. We then take a particular interest in pursuing the path of “natural” AI for tutoring systems, using human cognition as a model for artificial general intelligence. One tutoring agent built over a cognitive architecture, CTS, illustrates this direction. The chapter concludes on a brief look into what might be the future for artificial tutoring systems, biologically-inspired cognitive architectures.
Conference Paper
Teachers and educators have the mission of transmitting the best of their knowledge, making the most of available resources and following established programmatic guidelines. The continuous evolution of technology, proposing new tools and apparatus for knowledge representation and transmission, has offered innumerable options for the mission of teaching. However, more than providing a wide set of experimental setups or multimedia contents, it would be important to determine the best content for each student. Hypothetically, the best content would be defined as the one most suited to promote a seamless transmission of knowledge, according to the student's status and readiness to receive those concepts. Human-computer interfaces can promote better interoperability between those who teach and those who learn, and can better adapt contents and transmission methods to the needs and abilities of each student in class. The present paper proposes a framework for adapting knowledge transmission, either locally or remotely, to the needs and circumstances of each teaching act.
Article
After brain researchers recognized that emotions are crucial for human and animal intelligence, Artificial Intelligence researchers also began to acknowledge the importance of emotions in the design of intelligent machines. In this article, a review of research work performed in the field of "affective computation" is given, philosophical questions are addressed, and first implementation examples are presented and discussed.
Conference Paper
An artificial emotion model is considered a key component in achieving more effective human-computer interaction, and its foundation is essentially the understanding and expression of natural emotion. This paper presents an emotion computing model based on a random process. The model simulates the dynamic process of emotional self-regulation and the dynamic process of emotional transference under the influence of external stimuli. The simulation results show that the change process of emotions simulated by the model is consistent with the change rule of human emotion, and it provides a new mechanism for the emotional decision-making of emotion robots.
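As an illustration of the general idea (not the paper's exact model), emotion as a random process can be sketched as a stochastic transition over discrete states, with self-regulation as a drift back toward neutral and an external stimulus reshaping the transition odds.

```python
import numpy as np

STATES = ["sad", "neutral", "happy"]
# Row i: illustrative per-tick probabilities of moving from state i.
TRANSITION = np.array([[0.60, 0.35, 0.05],   # sad drifts toward neutral
                       [0.10, 0.80, 0.10],   # neutral is sticky
                       [0.05, 0.35, 0.60]])  # happy drifts toward neutral

def step(state, rng, stimulus=None):
    p = TRANSITION[state].copy()
    if stimulus == "praise":  # an external stimulus biases the transition
        p[2] += 0.3
    p /= p.sum()
    return rng.choice(len(STATES), p=p)

rng = np.random.default_rng(42)
state, trace = 1, []
for t in range(6):
    state = step(state, rng, stimulus="praise" if t == 3 else None)
    trace.append(STATES[state])
print(trace)
```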
Chapter
With the advent of affective computing and physical computing, technological artefacts are increasingly mediating human emotional relations, and becoming social entities themselves. These technologies on one hand prompt a critical reflection on human-machine relations, and on the other hand offer a fertile ground for imagining new dynamics of emotional relations mediated by technology and materiality. This chapter describes design research drawing on theories of technology, materiality and making. Carried out through fashion and experience design, the practice amplifies the processes of mediation. By creating material playgrounds for technological and human agency, the experiments described here aim to generate knowledge about the emotional self, critical reflection on human-machine relationships, and new imagined emotional relations resulting from the hybridity of humans and technology.
Conference Paper
Humans are very apt at reading emotional signals in other humans and even in artificial agents, which raises the question of whether artificial agents need to be emotionally intelligent to ensure effective social interactions. Artificial agents without emotional intelligence might generate behavior that is misinterpreted, unexpected, and confusing to humans, violating human expectations and possibly causing emotional harm. Surprisingly, there is a dearth of investigations aimed at understanding the extent to which artificial agents need emotional intelligence for successful interactions. Here, we present the first study of the perception of emotional intelligence (EI) in robots vs. humans. The objective was to determine whether people viewed robots as more or less emotionally intelligent when exhibiting similar behaviors as humans, and to investigate which verbal and nonverbal communication methods were most crucial for human observational judgments. Study participants were shown a scene in which either a robot or a human behaved with either high or low empathy, and then they were asked to evaluate the agent's emotional intelligence and trustworthiness. The results showed that participants could consistently distinguish the high-EI condition from the low-EI condition regardless of variations in which communication methods were observed, and that whether the agent was a robot or a human had no effect on the perception. We also found that, relative to low-EI conditions, high-EI conditions led to greater trust in the agent, which implies that we must design robots to be emotionally intelligent if we wish users to trust them.
Article
Free access to this publication here: http://www.tandfonline.com/eprint/33y334FZXIBtCXaZFj8P/full Affective computing technologies are designed to sense and respond based on human emotions. This technology allows a computer system to process the information gathered from various sensors to assess the emotional state of an individual. The system then offers a distinct response based on what it “felt.” While this is completely unlike how most people interact with electronics today, this technology is likely to trickle into future everyday life. This column will explain what affective computing is, some of its benefits, and concerns with its adoption. It will also provide an overview of its implication in the library setting and offer selected examples of how and where it is currently being used.
Article
Full-text available
The paper engages with what we refer to as "sensitive media," a concept associated with developments in the overall media environment, our relationships with media devices, and the quality of the media themselves. Those developments point to the increasing emotionality of the media world and its infrastructures. Mapping the trajectories of technological development and the impact that the newer media exert on the human condition, our analysis touches upon various forms of emergent affect, emotion, and feeling in order to trace the histories and motivations of the sensitization of "the media things" as well as the redefinition of our affective and emotional experiences through technologies that themselves "feel."
Chapter
In this chapter, the author has reviewed the human workforce of the previous generations by taking into account the features and characteristics of the workforce, which is getting older. The purpose of this chapter is to evaluate the future of the current workforce. The future generation is still unexplored, but it is clear that the coming generation will be a blend of advanced technology and ultra-advanced simulations. The coming years will introduce more advanced artificial intelligence into the workforce that will not just be cognitively intelligent in a retrospective way but also emotionally intelligent. The future human workforce will face a challenge to maintain the requisite skill sets to cope with the constant change.
Conference Paper
In the past two decades, ambient intelligence (AmI) has been a focus of research in different fields and from different points of view. It can be defined as an electronic environment consisting of devices capable of recognising people's presence and responding in a certain way. Security and privacy in these kinds of environments are still a challenge. By employing biometrics for person recognition in ambient intelligence, devices could distinguish between different people in a non-intrusive way. With this, the privacy issue in ambient intelligence is even more pronounced when combined with biometric recognition. This paper presents a privacy improvement model for biometric person recognition in ambient intelligence using perceptual hashing.
Article
Automatic generation of texts with different sentiment labels has wide use in artificial intelligence applications such as conversational agents. It is an important problem to be addressed for achieving emotional intelligence. In this paper, we propose two novel models, SentiGAN and C-SentiGAN, which have multiple generators and one multi-class discriminator, to address this problem. In our models, multiple generators are trained simultaneously, aiming at generating texts of different sentiment labels without supervision. We propose a penalty-based objective in generators to force each of them to generate diversified examples of a specific sentiment label. Moreover, the use of multiple generators and one multi-class discriminator can make each generator focus on generating its own texts of a specific sentiment label accurately. Experimental results on a variety of datasets demonstrate that our SentiGAN model consistently outperforms several state-of-the-art text generation models in the sentiment accuracy and quality of generated texts. In addition, experiments on conditional text generation tasks show that our C-SentiGAN model has good prospects for specific text generation tasks.
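The multi-generator / multi-class-discriminator layout can be sketched schematically. The toy below, written for PyTorch, works on continuous vectors for brevity; the real SentiGAN generates discrete token sequences and trains its generators with a penalty-based objective, which this sketch does not reproduce.

```python
import torch
import torch.nn as nn

k, noise_dim, data_dim = 2, 16, 32  # k sentiment classes (illustrative sizes)
# One generator per sentiment label.
gens = [nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(),
                      nn.Linear(64, data_dim)) for _ in range(k)]
# One discriminator with k real-sentiment classes plus one "generated" class.
disc = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(),
                     nn.Linear(64, k + 1))
ce = nn.CrossEntropyLoss()

z = torch.randn(8, noise_dim)
for i, g in enumerate(gens):
    fake = g(z)
    # Generator i improves when the discriminator assigns its samples
    # to real sentiment class i rather than the "generated" class k.
    g_loss = ce(disc(fake), torch.full((8,), i, dtype=torch.long))
    g_loss.backward()
print("computed one update signal for each of", k, "generators")
```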
Conference Paper
Full-text available
How can we define and understand the nature of understanding itself? This paper discusses cognitive processes for understanding the world in general and for understanding natural language. The discussion considers whether and how an artificial cognitive system could use a ‘natural language of thought’, and whether the ambiguities of natural language would be a theoretical barrier or could be a theoretical advantage for such a system, in a research approach toward human-level artificial intelligence.
Chapter
This paper reports on several studies in the context of implementing the humanoid social robot Pepper in a financial institution. The results show that the robot can affect the boundary relations between the roles of customer and service worker differently from common-sense expectations. While employees initially feared being automated away by the robot, the results suggest that the relationship is more likely to change through an emotional bond with the robot being projected onto the company deploying it. Therefore, the robot might, at least partially, assume the role of the service worker as an ambassador of the company, which could recede more into the background in this regard. We discuss the implications of our findings in the context of current literature on changing boundary relations through robot innovations.
Preprint
To provide consistent emotional interaction with users, dialog systems should be capable of automatically selecting appropriate emotions for responses, as humans do. However, most existing works focus on rendering specified emotions in responses or responding empathetically to the emotion of users; the individual difference in emotion expression is overlooked. This may lead to inconsistent emotional expressions and user disinterest. To tackle this issue, we propose to equip the dialog system with a personality and enable it to automatically select emotions in responses by simulating the emotion transitions of humans in conversation. In detail, the emotion of the dialog system is transitioned from its preceding emotion in context. The transition is triggered by the preceding dialog context and affected by the specified personality trait. To achieve this, we first model the emotion transition in the dialog system as the variation between the preceding emotion and the response emotion in the Valence-Arousal-Dominance (VAD) emotion space. Then, we design neural networks to encode the preceding dialog context and the specified personality traits to compose the variation. Finally, the emotion for the response is selected from the sum of the preceding emotion and the variation. We construct a dialog dataset with emotion and personality labels and conduct emotion prediction tasks for evaluation. Experimental results validate the effectiveness of the personality-affected emotion transition.
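The final selection step described above reduces to vector arithmetic. A minimal sketch with invented VAD coordinates and a hand-picked variation (in the paper, the variation comes from the context/personality networks):

```python
import numpy as np

# Illustrative Valence-Arousal-Dominance coordinates, not the paper's.
VAD = {"sad": np.array([-0.6, -0.3, -0.3]),
       "neutral": np.array([0.0, 0.0, 0.0]),
       "joy": np.array([0.8, 0.5, 0.4])}

def next_emotion(preceding, variation):
    """Response emotion = preceding emotion + predicted VAD variation,
    snapped to the nearest discrete label."""
    target = VAD[preceding] + np.asarray(variation, dtype=float)
    return min(VAD, key=lambda e: np.linalg.norm(VAD[e] - target))

# A strongly positive predicted variation moves "sad" to "joy".
print(next_emotion("sad", [1.2, 0.7, 0.6]))  # -> "joy"
```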
Article
This paper presents methods for collecting and analyzing physiological data during real-world driving tasks to determine a driver's relative stress level. Electrocardiogram, electromyogram, skin conductance, and respiration were recorded continuously while drivers followed a set route through open roads in the greater Boston area. Data from 24 drives of at least 50-min duration were collected for analysis. The data were analyzed in two ways. Analysis I used features from 5-min intervals of data during the rest, highway, and city driving conditions to distinguish three levels of driver stress with an accuracy of over 97% across multiple drivers and driving days. Analysis II compared continuous features, calculated at 1-s intervals throughout the entire drive, with a metric of observable stressors created by independent coders from videotapes. The results show that for most drivers studied, skin conductivity and heart rate metrics are most closely correlated with driver stress level. These findings indicate that physiological signals can provide a metric of driver stress in future cars capable of physiological monitoring. Such a metric could be used to help manage noncritical in-vehicle information systems and could also provide a continuous measure of how different road and traffic conditions affect drivers.
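As a rough illustration of the windowed analysis (not the authors' pipeline; the sampling rate, features, and centroid values below are invented), one can summarize each 5-minute interval by mean heart rate and mean skin conductance and assign a stress level with a nearest-centroid rule:

```python
# Rough sketch, not the authors' pipeline: summarize physiological
# channels over 5-minute windows and assign one of three stress
# levels (rest / highway / city) with a nearest-centroid rule.
import numpy as np

FS = 15.5                  # Hz, hypothetical sensor sampling rate
WIN = int(5 * 60 * FS)     # samples per 5-minute window

def window_features(heart_rate, skin_cond):
    """Mean HR (bpm) and mean SC (uS) per non-overlapping window."""
    n = min(len(heart_rate), len(skin_cond)) // WIN
    return np.array([(heart_rate[i*WIN:(i+1)*WIN].mean(),
                      skin_cond[i*WIN:(i+1)*WIN].mean())
                     for i in range(n)])

# Invented centroids for the three stress levels.
CENTROIDS = np.array([[65.0, 2.0],    # 0: rest
                      [75.0, 5.0],    # 1: highway
                      [85.0, 9.0]])   # 2: city

def stress_level(feat):
    return int(np.argmin(np.linalg.norm(CENTROIDS - feat, axis=1)))

# Toy usage on 15 minutes of synthetic data.
rng = np.random.default_rng(3)
hr = rng.normal(78.0, 5.0, size=WIN * 3)
sc = rng.normal(6.0, 1.0, size=WIN * 3)
print([stress_level(f) for f in window_features(hr, sc)])
```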
Article
Full-text available
Since failure, over and over and over again, is a prerequisite to becoming an expert, so too is the ability to persevere and remain motivated through failure. Many researchers creating intelligent tutoring systems (ITSs) have taken the approach of manipulating the task in terms of difficulty, focus, and other parameters in an effort to sustain users' motivation. There are numerous circumstances where this approach is impractical, undesirable, or simply impossible, and it misses the important opportunity to help users develop skills for dealing with failure and frustration. We propose instead an approach that uses affective agents (agents that can sense users' affect and respond with displays of their own affect) to help users develop metacognitive skills, such as affective self-awareness, for dealing with failure and frustration. An important element of our approach is the use of one or more affective agents as peer learning companions that facilitate the development of empathetic relationships with learners. This paper describes work in progress exploring how characteristics of affective agents can influence perseverance in the face of failure. Introduction: We choose to focus on motivation through failure because of its importance in the learning process. At Stanford's Department of Mechanical Engineering there is a saying that "spectacular failure is better than moderate success" (Faste, 1996). This is not an overtly masochistic agenda; rather, the message is that if you do not strive for spectacular success you will never achieve it, and if you achieve only moderate success you have not strived far enough. Kay's version of this sentiment is that "difficulty should be sought out, as a spur to delving more deeply into an interesting area. An education system that tries to make everything easy and pleasurable will prevent much important learning from happening" (Kay, 1991). In Csikszentmihalyi's terms, this is the notion of matching adequate challenge with skill in service of Flow, or optimal experience (Csikszentmihalyi, 1990). In this vein, in their chapter on motivation and failure in educational systems design, Roger Schank and Adam Neaman describe the utility of simulated learning-by-doing environments for accelerating the pace of learning through exposure to difficult circumstances that arise only infrequently in real-world situations. This will inevitably accelerate the rate of failure and, if motivation is sustained, the rate of learning, as "novices are exposed to rare, but critical, experiences" (Schank & Neaman, 2001). Schank and Neaman acknowledge that fear of failure is a significant barrier to learning and believe it can be addressed in several ways: minimizing discouragement by lessening humiliation; developing the understanding that the consequences of failure will be minimal; and providing motivation that outweighs or distracts from the unpleasant aspects of failure. They show that they have been able to sustain the motivation of learners who care about what they are doing by providing them access to experts at the time of failure. Through questions, stories, anecdotes, and additional experiences, learners are given the opportunity to "expend the effort to explain their failures"; they are given the opportunity to achieve and become expert (Schank & Neaman, 2001). Many have taken the approach of tailoring the task to the individual user in an effort to maintain motivation, an affective state, Flow, or optimal challenge (Malone, 1981; Monk, 2000; Hill et al., 2001). Hill et al., in their paper Toward the Holodeck, discuss the merits of creating a Holodeck-like setting in terms of its immersive, believable, and motivating qualities; this terminology is remarkably similar to descriptions of psychological Flow.
Article
Full-text available
This paper develops a framework of the role of empathy in patient care and explicitly links the framework to important outcomes. Following a definition of empathy and clinical examples, evidence is reviewed on the relevance of empathy to increasing patient satisfaction, increasing adherence with physician recommendations, and decreasing the frequency of medical malpractice suits.
Article
Full-text available
The present paper explores the validity of 16 facial movements (e.g., eyelid widening, lips part) and two psychophysiological responses (e.g., heart rate) as interest-associated behaviors. In a pilot study we selected interesting and uninteresting stimuli, and in two experiments we asked undergraduate volunteers to watch and listen to a series of 4-min film clips and self-report their level of interest. As each participant viewed the films, we videotaped, coded, and scored his or her facial movements and recorded the autonomic responses. Using repeated-measures ANOVAs and correlational analyses, we found support for five upper facial behaviors (eyes closed, number of eye glances, duration of eye glances, eyelid widening, exposed eyeball surface), one lower facial behavior (lips part), and two general head movements (head turns, head stillness) as interest-associated facial movements. The discussion focuses on how these findings confirm, contradict, and clarify the observations of others (e.g., Darwin, Tomkins, Izard).
Conference Paper
Full-text available
This paper investigates the performance and relevance of a set of acoustic features for the task of automatic recognition of affect from speech using machine learning techniques. Eighty-seven novel and classical features related to loudness, intonation, and voice quality are examined. Using feature selection, the results yield a performance level of 49.4% recognition rate (compared to a human performance rate of 60.4% and a chance level of 20%), while the relevance results show that the more exploratory and novel subset of these features outranks the more classical features in the recognition task. In the active research area of recognition of affect from speech, it is of particular interest to obtain acoustic features that provide results closer to those of human recognition abilities. While many now-"classic" features have been proposed in the literature, their performance has still fallen short of human recognition, suggesting the need to continue the search for novel features and methods. This paper briefly highlights results from an extensive investigation developing new features and comparing them side-by-side with classical ones using machine learning techniques. (See (1) for many details omitted in this paper.) Algorithms and features associated with modeling loudness, intonation, and voice quality are highlighted in §2, §3, and §4 respectively, with results of the experiments in §5 and some concluding remarks in §6.
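The feature-selection step can be sketched generically. The wrapper-style greedy forward selection below is a common scheme assumed here for illustration; it does not reproduce the paper's 87-feature setup or its classifier:

```python
# Generic wrapper-style greedy forward feature selection; `score` is
# any callable mapping a feature subset to validation accuracy.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 10))
y = (X[:, 2] + 0.5 * X[:, 7] > 0).astype(int)  # only features 2 and 7 matter

def score(feats):
    """Toy accuracy: threshold the sum of the chosen features."""
    pred = (X[:, feats].sum(axis=1) > 0).astype(int)
    return (pred == y).mean()

def forward_select(n_features, score, budget):
    chosen = []
    for _ in range(budget):
        candidates = [f for f in range(n_features) if f not in chosen]
        best = max(candidates, key=lambda f: score(chosen + [f]))
        chosen.append(best)
    return chosen

print(forward_select(10, score, budget=3))  # should pick 2 and 7 early
```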
Conference Paper
Full-text available
This short paper contains a preliminary description of a novel type of chat system that aims at realizing natural and social communication between distant communication partners. The system is based on an Emotion Estimation module that assesses the affective content of textual chat messages, and on avatars associated with the chat partners that act out the assessed emotions of messages through multiple modalities, including synthetic speech and associated affective gestures.
Article
Full-text available
This research investigates the meaning of “human-computer relationship” and presents techniques for constructing, maintaining, and evaluating such relationships, based on research in social psychology, sociolinguistics, communication and other social sciences. Contexts in which relationships are particularly important are described, together with specific benefits (like trust) and task outcomes (like improved learning) known to be associated with relationship quality. We especially consider the problem of designing for long-term interaction, and define relational agents as computational artifacts designed to establish and maintain long-term social-emotional relationships with their users. We construct the first such agent, and evaluate it in a controlled experiment with 101 users who were asked to interact daily with an exercise adoption system for a month. Compared to an equivalent task-oriented agent without any deliberate social-emotional or relationship-building skills, the relational agent was respected more, liked more, and trusted more, even after four weeks of interaction. Additionally, users expressed a significantly greater desire to continue working with the relational agent after the termination of the study. We conclude by discussing future directions for this research together with ethical and other ramifications of this work for HCI designers.
Article
Full-text available
Accounting for a patient's emotional state is integral in medical care. Tele-health research attests to the challenge clinicians must overcome in assessing patient emotional state when modalities are limited (J. Adv. Nurs. 36(5) 668). The extra effort involved in addressing this challenge requires attention, skill, and time. Large caseloads may not afford tele-home health-care (tele-HHC) clinicians the time and focus necessary to accurately assess emotional states and trends. Unstructured interviews with experienced tele-HHC providers support the introduction of objective indicators of patients’ emotional status in a useful form to enhance patient care. We discuss our contribution to addressing this challenge, which involves building user models not only of the physical characteristics of users—in our case patients—but also models of their emotions. We explain our research in progress on Affective Computing for tele-HHC applications, which includes: developing a system architecture for monitoring and responding to human multimodal affect and emotions via multimedia and empathetic avatars; mapping of physiological signals to emotions and synthesizing the patient's affective information for the health-care provider. Our results using a wireless non-invasive wearable computer to collect physiological signals and mapping these to emotional states show the feasibility of our approach, for which we lastly discuss the future research issues that we have identified.
Article
Full-text available
The ability to recognize affective states of a person we are communicating with is the core of emotional intelligence. Emotional intelligence is a facet of human intelligence that has been argued to be indispensable, and perhaps the most important, for successful interpersonal social interaction. This paper argues that next-generation human-computer interaction (HCI) designs need to include the essence of emotional intelligence, the ability to recognize a user's affective states, in order to become more human-like, more effective, and more efficient. Affective arousal modulates all nonverbal communicative cues (facial expressions, body movements, and vocal and physiological reactions). In a face-to-face interaction, humans detect and interpret those interactive signals of their communicator with little or no effort. Yet the design and development of an automated system that accomplishes these tasks is rather difficult. This paper surveys the past work in solving these problems by a computer and provides a set of recommendations for developing the first part of an intelligent multimodal HCI: an automatic personalized analyzer of a user's nonverbal affective feedback.
Article
Full-text available
This paper presents a novel way for assessing the affective qualities of natural language and a scenario for its use. Previous approaches to textual affect sensing have employed keyword spotting, lexical affinity, statistical methods, and hand-crafted models. This paper demonstrates a new approach, using large-scale real-world knowledge about the inherent affective nature of everyday situations (such as "getting into a car accident") to classify sentences into "basic" emotion categories. This commonsense approach has new robustness implications.
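A toy sketch of the lookup-and-classify step follows; the paper uses a large commonsense knowledge base, whereas the hand-made dictionary, phrases, and labels here are invented purely for illustration:

```python
# Toy illustration of commonsense textual affect sensing: map known
# everyday situations to emotions, with a neutral fallback.
AFFECTIVE_KNOWLEDGE = {
    "getting into a car accident": "fear",
    "winning an award": "happiness",
    "losing a wallet": "sadness",
}

def classify_sentence(sentence, default="neutral"):
    s = sentence.lower()
    for situation, emotion in AFFECTIVE_KNOWLEDGE.items():
        if situation in s:
            return emotion
    return default

print(classify_sentence("He was getting into a car accident on I-95."))
```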
Article
Full-text available
The ability to recognize emotion is one of the hallmarks of emotional intelligence, an aspect of human intelligence that has been argued to be even more important than mathematical and verbal intelligences. This paper proposes that machine intelligence needs to include emotional intelligence and demonstrates results toward this goal: developing a machine's ability to recognize human affective state given four physiological signals. We describe difficult issues unique to obtaining reliable affective data, and collect a large set of data from a subject trying to elicit and experience each of eight emotional states, daily, over multiple weeks. This paper presents and compares multiple algorithms for feature-based recognition of emotional state from this data. We analyze four physiological signals that exhibit problematic day-to-day variations: the features of different emotions on the same day tend to cluster more tightly than do the features of the same emotion on different days. To handle the daily variations, we propose new features and algorithms and compare their performance. We find that the technique of seeding a Fisher Projection with the results of Sequential Floating Forward Search improves the performance of the Fisher Projection, and provides the highest recognition rates reported to date for classification of affect from physiology: 81% recognition accuracy on eight classes of emotion, including neutral.
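The Fisher Projection at the heart of this approach can be sketched in a few lines of numpy. The two-class version below illustrates the projection step only; the SFFS seeding and the eight-class setting reported in the paper are omitted:

```python
# Two-class Fisher discriminant direction; a sketch of the projection
# idea, not the paper's eight-class, SFFS-seeded implementation.
import numpy as np

def fisher_direction(X0, X1):
    """Direction maximizing between-class over within-class scatter."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    Sw += 1e-6 * np.eye(Sw.shape[0])      # regularize for invertibility
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(2)
X0 = rng.normal(loc=0.0, size=(40, 4))    # synthetic features, class 0
X1 = rng.normal(loc=1.0, size=(40, 4))    # synthetic features, class 1
w = fisher_direction(X0, X1)
print((X0 @ w).mean(), (X1 @ w).mean())   # classes separate along w
```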
Article
Prior empirical research has documented the existence of significant individual differences in the quality of the comforting strategies people produce: some people typically produce relatively insensitive, unresponsive comforting strategies while other people produce highly empathic and sensitive comforting strategies. This paper describes the nature and significance of comforting behaviour, discusses methods employed in the study of comforting, and reviews literature regarding the role of social-cognitive processes in the production of sensitive comforting strategies. Two distinct explanations for the relationship between social cognition and sensitive comforting are discussed: the role-taking account and the goal complexity account. These accounts are compared critically and directions for future research are outlined.
This paper presents a system for recognizing naturally occurring postures and associated affective states related to a child's interest level while performing a learning task on a computer. Postures are gathered using two matrices of pressure sensors mounted on the seat and back of a chair. Posture features are then extracted using a mixture of four Gaussians and input to a three-layer feed-forward neural network. The neural network classifies nine postures in real time and achieves an overall accuracy of 87.6% when tested with postures coming from new subjects. A set of independent Hidden Markov Models (HMMs) is used to analyze temporal patterns among these posture sequences in order to determine three categories related to a child's level of interest, as rated by human observers. The system reaches an overall performance of 82.3% with posture sequences coming from known subjects and 76.5% with unknown subjects.
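The temporal stage can be illustrated with discrete HMMs that score a sequence of posture labels, one model per interest category (two are shown for brevity); the transition and emission matrices below are invented placeholders, not the paper's trained parameters:

```python
# Sketch of HMM-based sequence scoring; all matrices are invented.
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Forward algorithm in the log domain for a discrete HMM."""
    alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        alpha = (np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0)
                 + np.log(B[:, o]))
    return np.logaddexp.reduce(alpha)

# Two hidden states, nine observable postures (as in the paper).
pi = np.full(2, 0.5)
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B_high = np.full((2, 9), 1 / 9)                       # 'high interest' model
B_low = np.vstack([np.full(9, 1 / 9),
                   np.r_[0.5, np.full(8, 0.5 / 8)]])  # 'low interest' model

seq = [0, 0, 3, 3, 1]                                 # posture-label indices
scores = {"high": log_likelihood(seq, pi, A, B_high),
          "low": log_likelihood(seq, pi, A, B_low)}
print(max(scores, key=scores.get))
```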
Article
Presents the stages of development and preliminary validation of a self-report instrument for measuring the quality of the alliance, the Working Alliance Inventory (WAI). The measure is based on Bordin's (1980) pantheoretical, tripartite (bonds, goals, and tasks) conceptualization of the alliance. Results from three studies were used to investigate the instrument's reliability and validity and the relations among the WAI scales. The data suggest that the WAI has adequate reliability. The instrument is reliably correlated with a variety of counselor and client self-reported outcome measures, and nontrivial relations were also observed between the WAI and other relationship indicators. The results are interpreted as preliminary support for the validity of the instrument. Although the results obtained in the reviewed studies are encouraging, the high correlations between the three subscales of the inventory bring into question the distinctness of the alliance components.
Article
As an increasing number of new technologies focus on health-assessment applications, new engineering and design challenges emerge, such as inference, modeling, data mining, and feedback for long-term usage. This paper argues that embedding empathy into the design of these interactive systems can be vital to the acceptance and success of such technologies. It discusses three pieces of work illustrating that systems designed to be intentionally empathetic can play a significant role in creating a better user experience in human-computer interaction.
Article
Prototypes of interactive computer systems have been built that can begin to detect and label aspects of human emotional expression, and that respond to users experiencing frustration and other negative emotions with emotionally supportive interactions, demonstrating components of human skills such as active listening, empathy, and sympathy. These working systems support the prediction that a computer can begin to undo some of the negative feelings it causes by helping a user manage his or her emotional state. This paper clarifies the philosophy of this new approach to human–computer interaction: deliberately recognising and responding to an individual user's emotions in ways that help users meet their needs. We define user needs in a broader perspective than has hitherto been discussed in the HCI community, to include emotional and social needs, and examine technology's emerging capability to address and support such needs. We raise and discuss potential concerns and objections regarding this technology, and describe several opportunities for future work.
Article
Embodied computer agents are becoming an increasingly popular human–computer interaction technique. Often, these agents are programmed with the capacity for emotional expression. This paper investigates the psychological effects of emotion in agents upon users. In particular, two types of emotion were evaluated: self-oriented emotion and other-oriented, empathic emotion. In a 2 (self-oriented emotion: absent vs. present) by 2 (empathic emotion: absent vs. present) by 2 (gender dyad: male vs. female) between-subjects experiment (N=96), empathic emotion was found to lead to more positive ratings of the agent by users, including greater likeability and trustworthiness, as well as greater perceived caring and felt support. No such effect was found for the presence of self-oriented emotion. Implications for the design of embodied computer agents are discussed and directions for future research suggested.
Article
Use of technology often has unpleasant side effects, which may include strong, negative emotional states that arise during interaction with computers. Frustration, confusion, anger, anxiety and similar emotional states can affect not only the interaction itself, but also productivity, learning, social relationships, and overall well-being. This paper suggests a new solution to this problem: designing human–computer interaction systems to actively support users in their ability to manage and recover from negative emotional states. An interactive affect–support agent was designed and built to test the proposed solution in a situation where users were feeling frustration. The agent, which used only text and buttons in a graphical user interface for its interaction, demonstrated components of active listening, empathy, and sympathy in an effort to support users in their ability to recover from frustration. The agent's effectiveness was evaluated against two control conditions, which were also text-based interactions: (1) users’ emotions were ignored, and (2) users were able to report problems and ‘vent’ their feelings and concerns to the computer. Behavioral results showed that users chose to continue to interact with the system that had caused their frustration significantly longer after interacting with the affect–support agent, in comparison with the two controls. These results support the prediction that the computer can undo some of the negative feelings it causes by helping a user manage his or her emotional state.
Conference Paper
This paper describes a unified approach, based on Gaussian Processes, for achieving sensor fusion under the problematic conditions of missing channels and noisy labels. Under the proposed approach, Gaussian Processes generate separate class labels corresponding to each individual modality. The final classification is based upon a hidden random variable, which probabilistically combines the sensors. Given both labeled and test data, the inference on unknown variables, parameters and class labels for the test data is performed using the variational bound and Expectation Propagation. We apply this method to the challenge of classifying a student's interest level using observations from the face and postures, together with information from the task the students are performing. Classification with the proposed new approach achieves accuracy of over 83%, significantly outperforming the classification using individual modalities and other common classifier combination schemes.
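Only the combination step lends itself to a short sketch: in the toy code below, per-modality class probabilities are mixed by a switch variable, and a missing channel is handled by renormalizing over the sensors that are present. The GP classifiers and the Expectation Propagation inference are not reproduced, and the modality names and weights are illustrative:

```python
# Sketch of probabilistic sensor fusion with missing channels.
import numpy as np

def fuse(per_modality_probs, sensor_weights):
    """per_modality_probs: dict name -> (n_classes,) array, or None
    when that channel is missing for this sample."""
    present = {m: p for m, p in per_modality_probs.items() if p is not None}
    w = np.array([sensor_weights[m] for m in present])
    w = w / w.sum()                 # renormalize over available sensors
    mix = sum(wi * pi for wi, pi in zip(w, present.values()))
    return mix / mix.sum()

probs = {"face": np.array([0.7, 0.2, 0.1]),
         "posture": None,           # missing channel for this sample
         "task": np.array([0.4, 0.4, 0.2])}
print(fuse(probs, {"face": 0.5, "posture": 0.3, "task": 0.2}))
```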
Conference Paper
By integrating sensors and algorithms into systems that are adapted to the task of interpreting emotional states, it is possible to enhance our limited ability to perceive and communicate signals related to emotion. Such an augmentation would have many potential beneficial uses in settings such as education, hazardous environments, or social contexts. There are also a number of important ethical considerations that arise with the computer's increasing ability to recognize emotions. This paper surveys existing approaches to computer ethics relevant to affective computing, categorizing them by relating them to different metaethical positions. The goal of this paper is to situate our approach among other approaches in the computer-ethics literature and to describe its methodology in a manner that practitioners can readily apply. The result of this paper is thus a process for critiquing and improving affective computing systems. 1. Undesirable Scenarios: The film Hotel Rwanda describes historical horrors of a sort that have happened more than once, and thus may happen again, although with new technology and new individuals involved. At several times throughout history, one group has tried to perform "ethnic cleansing," and during many scenes of this film people are asked whether they are Hutu or Tutsi. In the film, those who admit to being (or are exposed as being) Tutsi are carted off, and eventually a million Tutsis are brutally murdered. Imagine how much more "efficient" this kind of interrogation process could be if the perpetrators could point a non-contact "lie detector" at each person while questioning them about their race (or other unwelcome beliefs). Lie detectors typically sense physiological changes associated with increased stress and cognitive load (presuming it is harder to lie than to tell the truth). While honesty is a virtue, and we would like to see it practised more, it is also possible to imagine cases where a greater virtue might be in conflict with it. If such devices became easy to use in a widespread, reliable fashion, they would become easier to misuse as well. It is not hard to imagine an evil dictatorship using such a device routinely, perhaps showing up at your home and pointing it at you, while asking whether you agree with the new regime's policies, and then proceeding to treat people differently on the basis of their affective responses to such questioning.
Article
Emotion-specific activity in the autonomic nervous system was generated by constructing facial prototypes of emotion muscle by muscle and by reliving past emotional experiences. The autonomic activity produced distinguished not only between positive and negative emotions, but also among negative emotions. This finding challenges emotion theories that have proposed autonomic activity to be undifferentiated or that have failed to address the implications of autonomic differentiation in emotion.
Article
The Self-Assessment Manikin (SAM) is a non-verbal pictorial assessment technique that directly measures the pleasure, arousal, and dominance associated with a person's affective reaction to a wide variety of stimuli. In this experiment, we compare reports of affective experience obtained using SAM, which requires only three simple judgments, to the Semantic Differential scale devised by Mehrabian and Russell (An approach to environmental psychology, 1974) which requires 18 different ratings. Subjective reports were measured to a series of pictures that varied in both affective valence and intensity. Correlations across the two rating methods were high both for reports of experienced pleasure and felt arousal. Differences obtained in the dominance dimension of the two instruments suggest that SAM may better track the personal response to an affective stimulus. SAM is an inexpensive, easy method for quickly assessing reports of affective response in many contexts.
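The comparison methodology reduces to correlating ratings of the same stimuli across the two instruments. A toy sketch with invented numbers follows (the paper reports high correlations for experienced pleasure and felt arousal):

```python
# Invented ratings for six stimuli; illustrates the correlation step only.
import numpy as np

sam_valence = np.array([8.0, 2.0, 5.0, 7.0, 3.0, 9.0])
semdiff_valence = np.array([7.5, 2.5, 5.5, 6.5, 3.0, 8.5])
r = np.corrcoef(sam_valence, semdiff_valence)[0, 1]
print(f"r = {r:.2f}")
```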