Article

Evaluating affective interactions: Alternatives to asking what users feel

Abstract

In this paper, we advocate the use of behavior-based methods for evaluating affective interactions. We consider behavior-based measures to include both measures of bodily movements or physiological signals and task-based performance measures.
... Thus, in each of these cases emotion identification is based on a posteriori user reports, observation by the evaluators, or invasive methods. Even though the detection of affective states with these methods can be precise, emotions experienced by users may have been produced or altered by the very measurement instruments or conditioned by the evaluation context (Picard & Daily, 2005). ...
Article
In this paper, we introduce a model that aims to provide guidelines to strengthen the evaluation of interactive systems by assisting in the identification and analysis of emotions experienced by users during system usage. We briefly discuss related projects that have included emotions in the evaluation of interactive systems. The research presented here is preliminary work towards the inclusion of emotions in the evaluation of interactive systems. Our model is presented with details of each of its phases, and we discuss preliminary results of applying the model to the evaluation of a Virtual Learning Environment. Our approach comprises four major phases: selection of relevant emotions; analysis of relationships between emotions and interactive systems; selection of detection mechanisms; and application of evaluation methods.
... For example, studies using smartwatches that detect heartbeats and light exposure have found that happiness has an important association with these parameters [19]. Against this background, rapidly evolving new technologies that enable passive monitoring of these additional variables represent a promising avenue for more effective SWB monitoring tools in the future [20][21][22]. ...
Article
It is widely assumed that the longer we spend in happier activities, the happier we will be. In an intensive study of momentary happiness, we show that, in fact, longer time spent in happier activities does not lead to higher levels of reported happiness overall. This finding is replicated with different samples (a student sample and a diverse, multi-national panel), measures, and methods of analysis. We explore different explanations for this seemingly paradoxical finding, providing fresh insight into the factors that do and do not affect the relationship between how happy we report feeling and how long an activity lasts. This work calls into question the assumption that spending more time doing what we like will make us happier, presenting a fundamental challenge to the validity of current tools used to measure happiness.
... This leads to several fundamental problems. For instance, with videos that are supposed to evoke fear or anxiety-related emotions, participants are reluctant to show their real feelings [13]. ...
Article
In this paper, we propose an efficient, accurate and user-friendly brain-computer interface (BCI) system for recognizing and distinguishing different emotional states. For this, we used a multimodal dataset entitled “MAHNOB-HCI”, which can be freely obtained by email request. This research is based on electroencephalogram (EEG) signals carrying emotions and excludes other physiological features, since we find EEG signals more reliable for extracting deep and genuine emotions than other physiological features. EEG signals have low information content and signal-to-noise ratios (SNRs), so proposing a robust and dependable emotion recognition algorithm is a considerable challenge. To address this limitation, we applied a new method based on the matching pursuit (MP) algorithm, using MP to increase the quality and SNR of the original signals. To obtain a high-quality signal, we created a new dictionary of 5-scale Gabor atoms containing 5000 atoms. For feature extraction, we used a 9-scale wavelet algorithm. A 32-electrode configuration was used for signal collection, but we used only eight of those electrodes; therefore, our method is highly user-friendly and convenient for users. To evaluate the results, we compared our algorithm with other similar works. In average accuracy, the proposed algorithm is superior to the same algorithm without MP by 2.8%, and in terms of F-score by 0.03. Compared with related works, the accuracy and F-score of the proposed algorithm are better by 10.15% and 0.1, respectively. Thus, our method improves on past work in terms of accuracy, F-score and user-friendliness despite using just eight electrodes.
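To make the matching pursuit step concrete, here is a minimal, hedged sketch of MP over a Gabor atom dictionary used to re-synthesize a cleaner EEG epoch before feature extraction. This is not the authors' implementation: atom parameters, dictionary size and iteration count are illustrative, not the 5-scale / 5000-atom dictionary described in the abstract.

```python
# Sketch: matching pursuit (MP) over a Gabor dictionary for EEG denoising.
import numpy as np

def gabor_atom(n, center, scale, freq):
    t = np.arange(n)
    g = np.exp(-0.5 * ((t - center) / scale) ** 2) * np.cos(2 * np.pi * freq * t)
    norm = np.linalg.norm(g)
    return g / norm if norm > 0 else g

def build_dictionary(n, scales=(4, 8, 16, 32, 64), n_centers=16, n_freqs=8):
    atoms = []
    for s in scales:
        for c in np.linspace(0, n - 1, n_centers):
            for f in np.linspace(0.01, 0.45, n_freqs):
                atoms.append(gabor_atom(n, c, s, f))
    return np.array(atoms)                      # shape: (n_atoms, n)

def matching_pursuit(signal, dictionary, n_iter=30):
    residual = signal.astype(float).copy()
    approx = np.zeros_like(residual)
    for _ in range(n_iter):
        corr = dictionary @ residual            # inner products with residual
        k = np.argmax(np.abs(corr))             # best-matching atom
        approx += corr[k] * dictionary[k]
        residual -= corr[k] * dictionary[k]
    return approx, residual

# usage: denoise a toy 1-second, 256-sample EEG epoch
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 10 * np.arange(256) / 256) + 0.5 * rng.standard_normal(256)
D = build_dictionary(len(epoch))
clean, noise = matching_pursuit(epoch, D)
```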
... Picard et al. suggested using haptic interaction tools to evaluate affective interactions [30]. This study uses body measures, which can provide additional insight into the emotional state of a user without relying on the individual's cognitive assessment of that state. ...
Article
Intelligent robot companions contribute significantly to improving living standards in modern society. Therefore, human-like decision-making skills are sought after in the design of such robots. On the one hand, such features enable the robot to be easily handled by its non-expert human user. On the other hand, the robot becomes capable of dealing with humans without its behavior causing any disturbance. Mimicking human emotional intelligence is one of the most reasonable ways of laying the foundation for robotic emotional intelligence. As robots are widely deployed in social environments, a proactive robot must perceive the situation or the intentions of a user prior to an interaction. Proactive robots are required to understand what is communicated by human body language before approaching a human, and this assessment can help remove social constraints in an interaction. In this review, we first incorporate findings from human-robot interaction, social robotics and psychophysiology to assess intelligent systems capable of evaluating the emotional state of humans prior to an interaction. Second, we identify the cues and evaluation techniques that such intelligent agents use to simulate and evaluate the suitability of a proactive interaction. The available literature is evaluated to identify limitations of existing methods and suggest possible improvements. These limitations, guiding principles to adhere to, and suggested improvements are presented as outcomes of the review.
Chapter
In this chapter, the evaluation of learner experience in serious games is discussed with respect to four dimensions: gaming, learning, using and context, with a special focus on the use of multimodal data. After a review of the relevant research fields, the steps involved in a serious-game evaluation process are investigated, and relevant evaluation studies are reviewed with emphasis on the use of different modalities for recording and assessing in-game interactions. Finally, a theoretical framework (LeGUC) is proposed, defining parameters related to the four dimensions that can be observed during evaluation studies of serious games and describing how they relate to logged in-game interactions. The framework is based on the relevant literature as well as an observational user study conducted by the authors.
Chapter
Nonverbal interaction includes most of what we do: the interaction that results from means other than words and their meaning. In computer-mediated interaction, the richness of face-to-face interaction has not been completely achieved. However, multiuser virtual reality, a computer-generated environment that allows users to share virtual spaces and virtual objects through their graphic representation, is a highly visual technology in which nonverbal interaction is better supported than in other media. Still, as in any technological medium, interaction is accomplished distinctively due to technical and design issues. In collaborative virtual reality, the analysis of nonverbal interaction is a helpful mechanism to support feedback in teaching or training scenarios, to understand collaborative behavior, or to improve the technology itself. This chapter discusses the characteristics of nonverbal interaction in virtual reality and presents advances in the automatic interpretation of users' nonverbal interaction while a spatial task is collaboratively executed.
Chapter
This paper presents an AI-based co-creative system in which the interaction model focuses on emotional feedback; that is, decisions about the creative contribution from the AI agent are based on the emotion detected in the human co-creator. In human-human collaboration, gestures, verbal communication, and emotional responses are among the general communication strategies used to shape the interactions between collaborators and negotiate contributions. Emotional feedback allows human collaborators to passively communicate their experience and their perception of the process without disrupting the flow of the task. In human-human co-creative collaboration, participants interact and contribute to the task based on their perception of the collaboration over time. In designing human-AI co-creative collaboration, we address two challenges: (1) perceiving the user's cognitive state to determine the dynamics of collaboration, such as whether the system should lead, follow, or wait, and (2) deciding what the agent should contribute to the artifact. This paper presents a model of an AI agent that addresses these challenges, along with the results of our study of participants interacting with the co-creative agent.
Article
Any viable algorithm to infer the affective states of individuals with autism requires natural and reliable data collected in real time and in an uncontrolled environment. For this purpose, this study provides a new natural, spontaneous affective-cognitive dataset based on facial expressions, eye gaze, and head movements for adult students with and without Asperger syndrome (AS). Data gathering and collection in computer-based learning environments is a significant area that has attracted researchers' attention in affective computing applications. Because of the important impact of emotions on students' learning outcomes and performance, the dataset includes a range of affective-cognitive states that goes beyond basic emotions. This study reports the methodology used in data collection and annotation, summarizes descriptions and comparisons of other available datasets, and presents the resulting findings in more detail. In addition, some challenges inherent to this study are discussed.
Article
We present a new hardware system using biosignals for musical applications. The architecture and design decisions made in its realization are described, as well as initial experiments that have been conducted. The device is modular, wireless, and network-based, permitting a wide range of applications. It is a reasonably priced, open platform for research in performance, musical gesture recognition, and wearable music systems.
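The abstract does not specify the device's transport protocol, so the following is only a hedged sketch of the general pattern such a wireless, network-based biosignal system might use: streaming sensor frames as UDP datagrams. The host, port and packet layout are assumptions for illustration, not the device's actual design.

```python
# Sketch: stream a few biosignal channels over UDP (illustrative only).
import socket
import struct
import time

HOST, PORT = "127.0.0.1", 9000          # hypothetical receiver address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_frame(channel_values, timestamp):
    """Pack a timestamp and a few biosignal channels into one datagram."""
    payload = struct.pack("!d" + "f" * len(channel_values), timestamp, *channel_values)
    sock.sendto(payload, (HOST, PORT))

# usage: stream three channels (e.g. EMG, GSR, heart rate) at ~10 Hz
for _ in range(5):
    send_frame([0.42, 3.1, 72.0], time.time())
    time.sleep(0.1)
```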
Article
We quantified physical measures of upper extremity stress, force, posture, and muscle activity as fourteen subjects completed a five-page web-based survey. After the subject completed one of the pages, the survey would prompt the user indicating they had completed that page incorrectly. Once acknowledged by the user, the system would redisplay the page with all of the user's responses deleted. Responses to completing the page varied across individuals. Based on their responses to a questionnaire, subjects were grouped into a high or low response group, with the high response group expressing more dissatisfaction with the page design. Force applied to the side of the mouse was higher (1.25 N) during the 15 s after the display of the error message than during the 15 s before the error (0.88 N) for the high response group (p=0.02). No difference was observed for the low response group. Similarly, the average wrist extensor muscle activity for both the ECR and ECU was 1 to 2 percent MVC higher during the 15 s after the error message than during the 15 s prior to it for the high response group (p=0.01). Average activity was also 1 to 2 percent MVC higher the second time the page was completed compared to the first time. These results suggest that software design and usability can increase exposure to physical risk factors during computer work, depending on a person's assessment of ease of use.
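As a concrete illustration of the before/after comparison described above, here is a hedged sketch that computes mean mouse grip force in the 15 s windows surrounding an error prompt. The sampling rate, array names and event index are assumptions for the example, not details taken from the study.

```python
# Sketch: mean applied force in the 15 s windows before and after an error event.
import numpy as np

def window_means(force, event_idx, fs=100, window_s=15):
    """Return mean force in the 15 s before and after the error event."""
    n = int(window_s * fs)
    before = force[max(0, event_idx - n):event_idx].mean()
    after = force[event_idx:event_idx + n].mean()
    return before, after

# usage with synthetic data: force in newtons sampled at 100 Hz
rng = np.random.default_rng(1)
force = 0.9 + 0.1 * rng.standard_normal(100 * 60)    # one minute of samples
error_at = 30 * 100                                   # error shown at t = 30 s
pre, post = window_means(force, error_at)
print(f"mean force before: {pre:.2f} N, after: {post:.2f} N")
```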
Conference Paper
We develop an automatic system to analyze subtle changes in upper face expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal image sequence. Our system recognizes fine-grained changes in facial expression based on Facial Action Coding System (FACS) action units (AUs). Multi-state facial component models are proposed for tracking and modeling different facial features, including eyes, brows, cheeks, and furrows. We then convert the results of tracking into detailed parametric descriptions of the facial features. These feature parameters are fed to a neural network that recognizes 7 upper face action units. A recognition rate of 95% is obtained on test data that include both single action units and AU combinations.
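The final classification stage described above (feature parameters in, action units out) can be illustrated with a small feed-forward network. This is a hedged sketch on synthetic data: the feature dimensionality, labels and network size are placeholders, not the paper's exact parameterization.

```python
# Sketch: map tracked facial-feature parameters to 7 upper-face AU classes.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
n_samples, n_features = 200, 15          # e.g. brow/eye/furrow parameters per frame
X = rng.standard_normal((n_samples, n_features))
y = rng.integers(0, 7, n_samples)        # 7 upper-face AU classes (toy labels)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
predicted_au = clf.predict(X[:5])        # AU label per input frame
```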
Article
The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions.
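Since the comparison above found the Gabor wavelet representation among the most effective, here is a hedged sketch of extracting pooled Gabor filter-bank magnitudes from a face image patch. Kernel sizes, wavelengths and orientations are illustrative choices, not the study's configuration.

```python
# Sketch: Gabor filter-bank magnitude features from an image patch.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, wavelength, theta, sigma):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(image, wavelengths=(4, 8), n_orient=4):
    feats = []
    for lam in wavelengths:
        for k in range(n_orient):
            kern = gabor_kernel(15, lam, np.pi * k / n_orient, sigma=lam / 2)
            resp = convolve2d(image, kern, mode='same')
            feats.append(np.abs(resp).mean())    # pooled magnitude per filter
    return np.array(feats)

# usage on a toy 64x64 "face" patch
patch = np.random.default_rng(3).random((64, 64))
print(gabor_features(patch))
```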
Article
The galvactivator is a glove-like wearable device that senses the wearer's skin conductivity and maps its values to a bright LED display, making the skin conductivity level visible. Increases in skin conductivity tend to be good indicators of physiological arousal, causing the galvactivator display to glow brightly. The new form factor of this sensor frees the wearer from the traditional requirement of being tethered to a rack of equipment; thus, the device facilitates study of the skin conductivity response in everyday settings. We recently built and distributed over 1000 galvactivators to audience members at a daylong symposium. To explore the communication potential of this device, we collected and analyzed the aggregate brightness levels emitted by the devices using a video camera focused on the audience. We found that the brightness tended to be higher at the beginning of presentations and during interactive sessions, and lower during segments when a speaker spoke for long periods of time. We also collected anecdotes from participants about their interpersonal uses of the device. This paper describes the construction of the galvactivator, our experiments with the large audience, and several other potentially useful applications, ranging from facilitation of conversation between two people to new ways of aiding autistic children.
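The core mapping the galvactivator performs, skin conductivity in, LED brightness out, is simple enough to illustrate directly. This is a hedged sketch: the conductance range and 8-bit PWM output are assumptions, not the device's actual calibration.

```python
# Sketch: map a skin-conductance reading to an LED brightness level.
def conductance_to_brightness(microsiemens, lo=1.0, hi=20.0):
    """Map skin conductance (in microsiemens) to an 8-bit PWM duty cycle."""
    clipped = min(max(microsiemens, lo), hi)
    return int(255 * (clipped - lo) / (hi - lo))

# usage: a rise in arousal brightens the display
for reading in (2.0, 8.5, 18.0):
    print(reading, "uS ->", conductance_to_brightness(reading))
```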
Article
This paper presents a system for recognizing naturally occurring postures and associated affective states related to a child's interest level while performing a learning task on a computer. Postures are gathered using two matrices of pressure sensors mounted on the seat and back of a chair. Posture features are then extracted using a mixture of four Gaussians and input to a 3-layer feed-forward neural network. The neural network classifies nine postures in real time and achieves an overall accuracy of 87.6% when tested with postures coming from new subjects. A set of independent Hidden Markov Models (HMMs) is used to analyze temporal patterns among these posture sequences in order to determine three categories related to a child's level of interest, as rated by human observers. The system reaches an overall performance of 82.3% with posture sequences coming from known subjects and 76.5% with unknown subjects.
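To make the feature step concrete, here is a hedged sketch that summarizes a seat pressure map with a four-component Gaussian mixture and uses its parameters as a posture feature vector. The map size, threshold and downstream classifier are placeholders, not the system's actual setup.

```python
# Sketch: mixture-of-Gaussians features from a chair pressure map.
import numpy as np
from sklearn.mixture import GaussianMixture

def posture_features(pressure_map, threshold=0.1):
    ys, xs = np.nonzero(pressure_map > threshold)        # active sensor cells
    points = np.column_stack([xs, ys]).astype(float)
    gmm = GaussianMixture(n_components=4, random_state=0).fit(points)
    # concatenate component means and flattened covariances as features
    return np.concatenate([gmm.means_.ravel(), gmm.covariances_.ravel()])

# usage on a toy 42x48 pressure map
rng = np.random.default_rng(4)
frame = rng.random((42, 48))
features = posture_features(frame)   # would then be fed to a neural network
```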
Article
Human speech provides a natural and intuitive interface both for communicating with humanoid robots and for teaching them. In general, the acoustic pattern of speech contains three kinds of information: who the speaker is, what the speaker said, and how the speaker said it. This paper focuses on the question of recognizing affective communicative intent in robot-directed speech without looking into the linguistic content. We present an approach for recognizing four distinct prosodic patterns that communicate praise, prohibition, attention, and comfort to preverbal infants. These communicative intents are well matched to teaching a robot, since praise, prohibition, and directing the robot's attention to relevant aspects of a task could be used by a human instructor to intuitively facilitate the robot's learning process. We integrate this perceptual ability into our robot's “emotion” system, thereby allowing a human to directly manipulate the robot's affective state. This has a powerful organizing influence on the robot's behavior, and will ultimately be used to socially communicate affective reinforcement. Communicative efficacy has been tested with people very familiar with the robot as well as with naïve subjects.
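Prosody-based intent recognition of the kind described above rests on contour statistics of pitch and energy. The following is only a hedged sketch of that feature step, not the authors' actual system: frame sizes, the pitch search range and the autocorrelation pitch estimate are assumptions, and the resulting statistics would feed a classifier for praise, prohibition, attention and comfort.

```python
# Sketch: simple prosodic statistics (pitch and energy contours) from speech.
import numpy as np

def frame_pitch(frame, fs, fmin=80, fmax=400):
    """Rough F0 estimate from the autocorrelation peak within [fmin, fmax]."""
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

def prosodic_features(signal, fs=16000, frame_len=400):
    frames = [signal[i:i + frame_len] for i in range(0, len(signal) - frame_len, frame_len)]
    energy = np.array([np.mean(f ** 2) for f in frames])
    pitch = np.array([frame_pitch(f, fs) for f in frames])
    # summary statistics of the pitch and energy contours
    return np.array([pitch.mean(), pitch.std(), energy.mean(), energy.std()])

# usage on a synthetic one-second utterance
t = np.arange(16000) / 16000
utterance = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
print(prosodic_features(utterance))
```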