Emotions are now recognized as complex human control systems, crucial to decision making, creativity, play, and learning. Affective technologies may therefore offer both improved interaction and commercial promise. Past research has focused on technical development work, leaving many questions about user preferences unanswered. In this user-centered study, 60 participants played a simple 'word ladder' game under different controlled conditions. Using a 2 × 2 factorial design and a Wizard-of-Oz scenario, half the participants interacted with a system that adapted on the basis of the user's emotional expression, and half were told the system could react to their emotional expressions. We established that when using an apparently affective system, users perform significantly better and report feeling significantly happier. We also discuss behavioral responses to the different conditions. These results are relevant to the design of future affective systems.
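The crossed conditions described above can be illustrated with a minimal sketch, assuming the two factors were (a) whether the system genuinely adapted to emotional expression and (b) whether participants were told it was affective; the factor names, balanced cell sizes, and round-robin assignment here are illustrative assumptions, not details taken from the study:

```python
import itertools

# Two binary factors give the four cells of a 2 x 2 factorial design.
# Factor names are hypothetical labels for the conditions described.
factors = {
    "system_adapts": (True, False),   # system really reacts to emotion
    "told_affective": (True, False),  # participant told it can react
}
conditions = list(itertools.product(*factors.values()))  # 4 cells

# 60 participants assigned round-robin -> 15 per cell (an assumption;
# the study only reports 60 participants across conditions).
participants = range(60)
assignment = {p: conditions[p % len(conditions)] for p in participants}
```

With balanced cells, main effects of each factor and their interaction can then be tested with a standard two-way ANOVA.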
"A system with close to 100% accuracy under laboratory conditions (e.g. by relying on prototypical emotions, as often carried out) will still, in most cases, perform unsatisfactorily in real-world scenarios. Thus, actual use-case studies must be performed to evaluate the performance and the acceptance of such systems, in addition to objective measures such as accuracy."
ABSTRACT: Automatic detection of the level of human interest is of high relevance for many technical applications, such as automatic customer care or tutoring systems. However, the recognition of spontaneous interest in natural conversations, independently of the subject, remains a challenge. Identification of human affective states relying on single modalities only is often impossible, even for humans, since different modalities contain partially disjunctive cues. Multimodal approaches to human affect recognition are generally shown to boost recognition performance, yet are evaluated in restrictive laboratory settings only. Herein we introduce a fully automatic processing combination of Active–Appearance–Model-based facial expression, vision-based eye-activity estimation, acoustic features, linguistic analysis, non-linguistic vocalisations, and temporal context information in an early feature fusion process. We provide detailed subject-independent results for classification and regression of the Level of Interest using Support-Vector Machines on an audiovisual interest corpus (AVIC) consisting of spontaneous, conversational speech, demonstrating "theoretical" effectiveness of the approach. Further, to evaluate the approach with regard to real-life usability, a user study is conducted for proof of "practical" effectiveness.
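The early feature fusion step mentioned above can be sketched as follows: per-modality feature vectors for one analysis window are concatenated into a single vector before one classifier or regressor (here, the SVM mentioned in the abstract) is trained on it. The modality names and dimensions below are illustrative assumptions, not the actual AVIC feature sets:

```python
def fuse_early(*modalities):
    """Concatenate per-modality feature vectors into one fused vector."""
    fused = []
    for features in modalities:
        fused.extend(features)
    return fused

# Hypothetical per-window features for each modality:
facial     = [0.12, 0.80]        # e.g. Active-Appearance-Model parameters
eye        = [0.30]              # e.g. blink rate
acoustic   = [0.55, 0.10, 0.91]  # e.g. pitch, energy, spectral summary
linguistic = [1.0, 0.0]          # e.g. keyword indicators

x = fuse_early(facial, eye, acoustic, linguistic)
# A single SVM would then be trained on such fused vectors to
# classify or regress the Level of Interest.
```

Early fusion lets the model exploit cross-modal correlations within a window, at the cost of requiring all modalities to be synchronised before training.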
ABSTRACT: Affective technologies have the potential to enhance human-computer interaction (HCI). The problem is that much development is technically rather than user driven, raising many unanswered questions about user preferences and opening new areas for research. People naturally incorporate emotional messages during interpersonal communication with other people, but their use of holistic communication, including emotional displays, during HCI has not been widely reported. Using Wizard-of-Oz (WOZ) methods, experimental design, and methods of sequential analysis from the social sciences, we have recorded, analyzed, and compared emotional displays of participants during interaction with an apparently affective system and a standard, non-affective version. During interaction, participants portray extremely varied, sometimes intense, ever-changing displays of emotion, and these are rated as significantly more positive in the affective-computer condition and significantly more intense in the told-affective condition. We also discuss behavioural responses to the different conditions. These results are relevant to the design of future affective systems.
Affective Computing and Intelligent Interaction, First International Conference, ACII 2005, Beijing, China, October 22-24, 2005, Proceedings; 01/2005
ABSTRACT: In this paper, we propose that, in order to improve customer satisfaction, communication modes (e.g., speech acts) should be incorporated into the current standards of web services specifications. We show that, with these communication modes, we can estimate various affective states of service consumers during their interactions with web services. With this information, a web-service management system can automatically prevent and compensate for potential negative affect, and even take advantage of positive affect.
PRICAI 2006: Trends in Artificial Intelligence, 9th Pacific Rim International Conference on Artificial Intelligence, Guilin, China, August 7-11, 2006, Proceedings; 01/2006