DESIGN SKETCH

Petra Fagerberg · Anna Ståhl · Kristina Höök

eMoto: emotionally engaging interaction

Received: 30 October 2003 / Accepted: 27 April 2004 / Published online: 5 August 2004
© Springer-Verlag London Limited 2004
Through the eMoto design, we intend to emotionally
engage users both cognitively and physically using a
tangible interface, allowing for affective gestures that are
mirrored in the expressions produced by the system. A questionnaire sent to 66 potential users showed a need for richer emotional expressiveness in mobile text messaging than is available today. Emotions are expressed not only through what is said, but also through body gestures and tone of voice, channels that are not available in this context.
eMoto is an emotional text messaging service built on top of a Sony Ericsson P900 mobile terminal. The goal of this service is to provide users with the means to emotionally enhance their SMS messages. The user first writes the textual content of the message and then adjusts the affective background to fit the emotional expression she wants to achieve. The adjustments are made through affective gestures (Fig. 1) that render an animated background acting as an emotional expression for the user's text message (Figs. 2, 3, 4, 5 and 6). The P900 terminal is operated with a stylus, which we have equipped with two sensors that recognize the affective gestures: an accelerometer and a pressure sensor. In a first prototype, the extended stylus is connected to the serial port of a stationary PC, which in turn communicates with the P900 terminal; in the final prototype, this will be replaced by a direct wireless link between the stylus and the P900 terminal.
In this specific design, our aim is to let users consciously express their emotions. This should not entail a simple one-to-one mapping of emotions to specific expressions. Instead, we build the interaction on the view that emotions are not singular, discrete states, but processes that blend into one another. By basing the interaction model on Russell's circumplex model of affect [3] (Fig. 7), we could create a system that allows users to choose the emotional expressions that best suit their messages. By not explicitly naming each emotion in the interaction, we keep the emotional expressions open to interpretation. In Russell's model, emotions are seen as a combination of arousal and valence. By combining two basic movements that together can render an infinite number of affective gestures (Fig. 8), the user moves around in this circumplex plane. Technically, we have made the plane 100 times larger than the screen of the mobile phone (Fig. 9). This, in combination with the affective gestures, gives the user a kaleidoscopic experience when choosing among the vast number of emotional expressions. We call this the affective gestural plane model. The two basic movements for constructing affective gestures are natural but designed expressions, extracted from an analysis of body movements [1]. The arousal of an emotion is communicated through movement: intense shaking of the stylus increases arousal, while a more swinging movement implies lower arousal (Fig. 8). To navigate to emotions with negative valence, the user increases the pressure on the stylus, while less pressure moves the user towards emotions with positive valence (Fig. 8).
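The gesture-to-plane mapping described above can be sketched in a few lines. This is a minimal illustration under our own assumptions (normalised sensor readings, linear scaling), not the actual eMoto implementation:

```python
def gesture_to_plane(accel_energy: float, pressure: float) -> tuple[float, float]:
    """Map normalised sensor readings (0..1) to a position in the
    circumplex plane, with valence and arousal each in -1..1.

    Illustrative only: intense shaking (high accelerometer energy)
    raises arousal, while harder stylus pressure moves towards
    negative valence.
    """
    arousal = 2.0 * accel_energy - 1.0   # shaking -> high, swinging -> low
    valence = 1.0 - 2.0 * pressure       # hard press -> negative valence
    return (valence, arousal)
```

For example, a gently swinging, lightly held stylus (low energy, low pressure) would land in the calm, positive quadrant of the plane.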
The affective gestures are closely connected to the
affective feedback that the user receives as visual output.
The characteristics of emotional expressions found in
the analysis of body movements are represented through
colours, shapes and animations in the design of the
affective feedback. Colours are used to express arousal: red represents emotions with high arousal, while blue represents calm and peaceful ones [2]. The shapes of the animated objects in the high-arousal areas are small and can therefore render animations and patterns that are energetic, quick and spreading. Moving around the circle towards lower energy and calmer expressions, the shapes get bigger and more connected, rendering slower
and more billowing animations. Shapes placed on the positive side of the circle are softer and more rounded, while shapes placed on the negative side are more angular and sharp. The emotional expressions are stronger along the outer border of the circle and weaker towards the middle; this is represented through less depth in the colours and fewer animated elements (Fig. 10).

P. Fagerberg · A. Ståhl · K. Höök
Stockholm University/KTH, DSV,
Forum 100, 164 40 Kista, Sweden
E-mail: annas@dsv.su.se
Pers Ubiquit Comput (2004) 8: 377–381
DOI 10.1007/s00779-004-0301-z

Fig. 1 The tangible interface; interacting through affective gestures using the stylus
Fig. 2 One way of expressing quite relaxed, through a green/yellow colour and animated objects that are quite big and connected in their shapes
Fig. 3 One way of expressing more relaxed than in Fig. 2, through deeper green colours that are closer to one another and larger animated shapes
Fig. 4 One way of expressing a little excited, through a red/orange background and a few, small, round objects with fast movements in the background
Fig. 5 One way of expressing more excited than in Fig. 4, through a deeper red colour in the background and even larger and more animated objects
Fig. 6 One way of expressing tired/bored, through dark blue colours, big, connected shapes and slow animations
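The variation of colour, shape and intensity over the circle lends itself to a small sketch. The parameter names and value ranges below are our own illustrative assumptions, not taken from the eMoto implementation:

```python
import math

def visual_params(valence: float, arousal: float) -> dict:
    """Derive rough rendering hints for the affective background from a
    position in the circumplex plane (valence and arousal in -1..1)."""
    # Expressions are stronger at the outer border, weaker near the centre.
    strength = min(1.0, math.hypot(valence, arousal))
    return {
        # High arousal -> warm/red colours, low arousal -> cool/blue.
        "hue": "red" if arousal > 0 else "blue",
        # High arousal -> small, quick shapes; low arousal -> big, slow ones.
        "shape_size": 1.0 - (arousal + 1.0) / 2.0,
        # Positive valence -> soft, rounded shapes; negative -> angular.
        "roundness": (valence + 1.0) / 2.0,
        # Depth of colour fades towards the neutral centre of the circle.
        "colour_depth": strength,
    }
```

At the outer border `colour_depth` reaches 1.0, matching the stronger expressions there, and it falls to 0.0 at the neutral centre.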
A user study of the affective output has just been completed. A few expressions need to be redesigned: for example, negative emotions with high arousal were rendered in too-bright colours, and some of the shapes were too depictive, hindering users from reading their own interpretations into them. Overall, however, the study showed great interest in this new way of communicating emotions, and users perceived most expressions as intended.
Fig. 7 Russell's circumplex model of affect [3]
Fig. 8 The affective gestural plane model
Fig. 9 The kaleidoscopic effect of the interactive feedback when navigating the affective background circle
Fig. 10 The affective background circle, showing how the colours, shapes and sizes of objects vary together with Russell's circumplex model of affect
References

1. Fagerberg P, Ståhl A, Höök K (2003) Designing gestures for affective input: an analysis of shape, effort and valence. In: Proceedings of the 2nd international conference on mobile and ubiquitous multimedia (MUM 2003), Norrköping, Sweden, December 2003
2. Itten J (1971) Kunst der Farbe. Otto Maier Verlag, Ravensburg, Germany
3. Russell JA (1980) A circumplex model of affect. J Pers Soc Psychol 39(6):1161–1178
Factor-analytic evidence has led most psychologists to describe affect as a set of dimensions, such as displeasure, distress, depression, excitement, and so on, with each dimension varying independently of the others. However, there is other evidence that rather than being independent, these affective dimensions are interrelated in a highly systematic fashion. The evidence suggests that these interrelationships can be represented by a spatial model in which affective concepts fall in a circle in the following order: pleasure (0), excitement (45), arousal (90), distress (135), displeasure (180), depression (225), sleepiness (270), and relaxation (315). This model was offered both as a way psychologists can represent the structure of affective experience, as assessed through self-report, and as a representation of the cognitive structure that laymen utilize in conceptualizing affect. Supportive evidence was obtained by scaling 28 emotion-denoting adjectives in 4 different ways: R. T. Ross's (1938) technique for a circular ordering of variables, a multidimensional scaling procedure based on perceived similarity among the terms, a unidimensional scaling on hypothesized pleasure–displeasure and degree-of-arousal dimensions, and a principal-components analysis of 343 Ss' self-reports of their current affective states. (70 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)