
Ten Emotion Heuristics: Guidelines for assessing the user’s affective dimension easily and cost-effectively

Eva de Lera
Universitat Oberta de Catalunya
Avinguda Tibidabo 39-43
08003 Barcelona
+34 3 253 23 00
Muriel Garreta-Domingo
Universitat Oberta de Catalunya
Avinguda Tibidabo 39-43
08003 Barcelona
+34 3 253 23 00
Abstract
Emotional appeal is a key dimension of user experience, one that
goes unmeasured in most user-centered design projects.
This paper presents preliminary work for developing a set of
guidelines for efficiently, easily and cost-effectively assessing
the users’ affective state by evaluating their expressive reactions
during an interface evaluation process. The evaluation of this
dimension complements the analysis of the objective and
quantitative data gathered through usability tests and the
subjective feedback provided through post-test questionnaires.
Categories and Subject Descriptors
H.5.2 [Information interfaces and presentation] User Interfaces
General Terms
Measurement, Documentation, Design, Experimentation
Keywords
Affective computing, human-computer interaction, evaluation,
usability testing, emotions, heuristics, user-centered design.
Introduction
Emotion is a key aspect of user experience, since measuring it
helps us understand the user’s level of engagement and
motivation. As Spillers [20] writes, “emotions govern the
quality of interactions with a product in the user’s environment
and relate directly to appraisal of the user experience. Users
generate emotion as a way to minimize errors, interpret
functionality, or obtain relief from the complexity of a task.”
Therefore, accounting for emotional cues during an interface
evaluation process provides usability practitioners, researchers
and interactive designers with valuable information. Nowadays,
there are several software applications that automatically
capture facial expressions and eye gaze which provide key
information to the practitioner. This paper, in contrast, defines
an observational system that helps evaluate interactions in an
easier, more time- and cost-effective manner simply by observing users.
Our tool allows researchers and practitioners to take into
account emotional measures such as gestures and oral
expressions without requiring extra software or
hardware. Measuring users' emotions is both difficult and costly
[3, 11]; therefore, most interface evaluation efforts focus on
cognitive and subjective aspects and neglect the affective
dimension. To date, usability practitioners have mainly relied
on performance user test data and on the subjective information
from the post-test questionnaire to measure user satisfaction
and emotions. Errors, time and other objective
measures provide key but partial information. Feedback
surveys or questionnaires provide only partial and often
unreliable data, especially considering that users tend to give a
positive evaluation to avoid blaming the person who developed
the application or simply to minimize the time spent on the
evaluation. Moreover, questionnaire responses are themselves a
subjective measure of the user's feelings and emotions, and
therefore not a dependable way to measure affect. As a result, most
common evaluation methods (not considering facial recognition
software and other advanced techniques as common methods)
have some limitations, as objective data is mostly cognitive and
a questionnaire's subjective data provides the evaluator with
the user's perception of his/her emotions, not the actual state(s)
during the test.
Human emotions and affect are essential to understanding
users, as these can facilitate the development of persistence and
deep interest in a subject or goal. The analysis of this affective
dimension in empirical user-centered design (UCD) methods
helps us ensure that our users will be engaged and motivated
while using our systems. Therefore, analyzing and evaluating
emotional cues will provide practitioners with a third dimension
of analysis for collecting user data, supplementing the most
common evaluation methods and resulting in a more accurate
understanding of the user’s experience.
Nowadays, there are few techniques and methodologies for
gathering affective data without asking the users what and how
they feel. We can give computers affective perceptual abilities
and measure physiological and behavioral signals through sensors
such as body-worn accelerometers and rubber or fabric electrodes
[17]. We can also evaluate users' eye gaze and collect electro-
physiological signals: galvanic skin response (GSR),
electrocardiography (EKG), electroencephalography (EEG) and
electromyography (EMG) data, blood volume pulse, heart rate
or respiration; more recently, facial recognition software has also been used.
As already noted, most of these methods have
limitations: they can be intrusive for the user and costly, and most
require specific skills and additional evaluation time.
Our aim was to find a non-invasive, cross-cultural, cost-efficient
and easy-to-carry-out method to help gain further understanding
of the affective state of a person during an interface evaluation
with users. This observational technique does not replace the
current and most common methods used during a UCD process,
but complements the objective and subjective data gathered,
thereby adding a third dimension to the evaluation process.
© Eva De Lera & Muriel Garreta-Domingo, 2007. Published by the
British Computer Society. Volume 2 Proceedings of the 21st BCS
HCI Group Conference, HCI 2007, Lancaster University, 3-7
September 2007. Devina Ramduny-Ellis & Dorothy Rachovides (Editors).
Emotions are best treated as multifaceted phenomena consisting
of behavioral reactions, expressive reactions, physiological
reactions and subjective feelings [6]. However, to date, most
instruments measure one component at a time, or a group of
specific components, as with facial recognition software, which
does not yet capture body gestures or vocal data. An
exception is the AMUSE tool which helps practitioners to
conduct interface evaluations by collecting and aggregating
different sources of data, including physiological and navigation
data [2]. This approach is similar to ours in that it mixes
data from more than one source. In summary, our
study aims at providing an observation instrument that helps
account for affective events during common usability
evaluations, providing more data than intuitive, unstructured
observation yields, but without using a more complex and
expensive technique.
Focusing on the expressive-reactions component, facial
expressions are central to research on emotion [1].
The first major scientific study of facial communication was
published by Charles Darwin in 1872 [5], who concluded that
many expressions and their meanings (e.g., for astonishment,
shame, fear, horror, pride, hatred, wrath, love, joy, guilt,
anxiety, shyness, and modesty) are universal. Other studies
indicate that the facial expressions of happiness, sadness, anger,
fear, surprise, disgust, and interest are universal across cultures
[7]. Facial expressions can therefore serve as a cross-cultural
tool for evaluating the emotional dimension.
Several studies on emotions and human-computer interaction
are based on the analysis of facial expressions. Nevertheless,
most focus on the analysis of physiological data or facial
recognition, omitting other non-verbal communication aspects.
Hazlett [10] describes how facial EMG sensors were used to
detect facial emotional responses while the subjects performed
tasks on websites. Partala and Surakka [16] studied the effects
of affective interventions by recording facial EMG responses
from the muscle sites that control smiling and frowning. Branco
et al.'s [1] approach was closer to ours in that it
complements traditional software usability evaluation methods
by monitoring users' spontaneous facial expressions
to identify the moment of occurrence of adverse
events. However, they too used EMG sensors to do the
monitoring. All of the above-mentioned methods are costly,
require specific skills and are time consuming, a limitation for
most usability and human computer interaction practitioners.
Our technique is based on observation and requires no
extra implementation effort, since most interface evaluations are
already conducted by observing and recording the user as he or she
interacts with the interface. In such a scenario, facial and body
expressions are often observed and recorded, but generally not
measured in a structured manner.
Another approach used to evaluate emotion is the instrument
developed by Desmet [6]. PrEmo is a non-verbal self-report
instrument that measures 14 emotions that are often elicited by
product design. This tool requires respondents to report their
emotions with the use of expressive cartoon animations. This
method is closer to the self-assessment questionnaires at the end
of the user test than to the analysis of emotions we had
envisioned. The information gathered through this method is
still subjective and does not provide an accurate understanding
of the user’s emotions.
The Ten Emotion Heuristics
The ten emotion heuristics are based on theories that relate
expressive reactions to distinct emotions. The heuristics are
guidelines to help measure the affective state easily, cost-
effectively and cross-culturally. One of the theories in which
our work is based is the Facial Action Coding System (FACS)
[8], currently a well-known standard to systematically
categorize the physical expression of emotions. The Maximally
Discriminative Facial Movement Coding System (MAX) [12] is
another system that links expression features to specific emotions.
Using these theories and other research as a starting point, we
selected a subset of features that allowed us to partially assess
the emotional reaction of the users as they interacted with an
application. This set of features was identified from previous
user evaluations and helped us build a list of the most common
expressions taking place during user evaluations. We correlated
each identified emotional cue with an emotional state and
ensured that these cues could be easily identified and measured
during a user evaluation. A total of 10 emotional cues were
selected. Better than any other body part, the face reveals emotions,
opinions, and moods. However, we use our whole body to
communicate nonverbally. Thus, our list of heuristics includes
some features that are not directly related to facial expressions.
Our study focused on recording one measure for the 10
emotional cues, as more work and experimentation would be
required to provide a specific measure for each cue. The goal of
this study was to provide a positive, neutral or negative value to
the overall user experience.
Again, it is important to note that this emotional data needs to
be analyzed in conjunction with the user evaluation’s other
objective measures (time, errors, etc.) as well as the subjective
measures gathered through the feedback questionnaire. In
conclusion, the ten heuristics focus on taking into account the
user's instantaneous emotional reactions, while the performance
and navigation logs provide objective data, and the a
posteriori self-assessment provides the user's perception of
his/her emotional state.
The 10 heuristics help measure the affective dimension when
the product designers are looking for a neutral and relaxed
interaction with the application. Therefore, these do not apply
when evaluating a game or music website, for example. The
smile heuristic included in this method represents the goal of
the evaluation: to see a user with relaxed facial features, and
therefore one who is not experiencing negative reactions or frustration.
1. Frowning. Frowning can be a sign of a need to
concentrate, of displeasure, or of a perceived lack of clarity.
Darwin [5] wrote about how frowning is one of the signs
of deep and “perplexed reflection”. In their study, Partala
and Surakka [16] found that frowning activity
attenuated significantly more after positive interventions
than in the no-intervention condition.
2. Brow Raising. Brow raising should also be considered a
negative expressive reaction. To lift the arch of short hairs
above the eye is a sign of uncertainty, disbelief, surprise
and exasperation [9].
3. Gazing Away. Gazing away from the screen may be
interpreted as a sign of disappointment. For example, looking
down tends to convey a defeated attitude but can also
reflect guilt, shame or submissiveness [9].
4. Smiling. A smile, or elevation of the cheeks, is a sign of
satisfaction. The user may have encountered an element
of joy during the evaluation process. Partala and Surakka
[16] found that smiling activity was significantly higher
during the positive condition.
5. Compressing the Lip. Seeing the user compress his or
her lips should be perceived as a sign of frustration and
confusion. Lip and jaw tension clearly reflects anxious
feelings, nervousness, and emotional concerns [9].
6. Moving the Mouth. If the user is seen mouthing words
or speaking to himself or herself, this is associated with
being lost and with uncertainty.
7. Expressing Vocally. Vocal expressions such as sighs,
gasps, coughs, as well as the volume of the expression,
the tone or quality of the expression, may be signs of
frustration or disappointment.
8. Hand Touching the Face. Lifting the hand from the
mouse to the face is a sign of
confusion and uncertainty, and generally a sign of the user
being lost or tired.
9. Drawing Back on the Chair. The user may be
experiencing negative emotions or rejection. By drawing
back the chair, he / she may be showing a desire to get
away from the present situation.
10. Forward Leaning the Trunk. Leaning forward and
showing a sunken chest may be a sign of depression and
frustration with the task at hand. As with the previous
heuristic, the user might be encountering difficulties, but
instead of showing rejection, leaning forward is a sign of
attentiveness, of “getting closer”.
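The paper presents the ten heuristics as an observational checklist, not as software. As a minimal sketch of how an observer might record cue events against this checklist, the following Python snippet assigns each cue a valence (our reading of the list above: smiling is the only positive cue, all others negative); the cue names and the logging helper are illustrative, not the authors' tooling.

```python
# Illustrative sketch only: cue names and valences are our reading of
# the ten heuristics (smiling is the single positive cue).
HEURISTICS = {
    "frowning": -1,
    "brow_raising": -1,
    "gazing_away": -1,
    "smiling": +1,
    "lip_compression": -1,
    "mouth_moving": -1,
    "vocal_expression": -1,
    "hand_to_face": -1,
    "drawing_back": -1,
    "leaning_forward": -1,
}

def log_cue(observations, cue, timestamp):
    """Append one observed cue, with its valence, to the session log."""
    if cue not in HEURISTICS:
        raise ValueError(f"unknown cue: {cue}")
    observations.append((timestamp, cue, HEURISTICS[cue]))

# Example session log built during an evaluation recording.
session = []
log_cue(session, "frowning", "00:02:13")
log_cue(session, "smiling", "00:05:41")
```

Timestamping each cue makes it possible to align the affective events with the click and error logs gathered during the same session.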
The Pilot Study
Our pilot study aimed at demonstrating the validity of the 10
heuristics as a single-measure observational system for evaluating,
alongside the traditional data gathered in user testing, a
set of expressions or emotional cues that a user may
demonstrate while interacting with a system. The 10 heuristics
are easy to identify during a typical user evaluation and can be
quickly analyzed in conjunction with the other gathered data.
Each of these cues was assigned a positive, neutral or negative
value, and they are primarily aimed at detecting negative or
frustrated emotional states [19]. For example, as mentioned,
frowning is related to obstacles while cheek movement is related
to pleasantness [18]. However, since the study did not
experiment with each heuristic individually, the result of our
evaluation is simply whether the user had a positive, neutral or
negative experience. In the study, identifying five negative
heuristics provided a negative experience value that would later
be evaluated in conjunction with the other data gathered.
In order to begin evaluating our methodology, we conducted a
test with eight participants. Four were assigned to carry
out some tasks at an intentionally frustrating online
supermarket, and the other four were asked to carry out
the same exact tasks at a much less frustrating online
supermarket. This helped us identify whether the emotional
cues gathered were in fact related to the difficulty of the task
(frustration event) or another variable. Our study gathered the
objective data (time, errors and number of clicks) and the
values for the emotional cues we noted in the 10 emotional
heuristics guideline (positive, negative and neutral). At the end
of the test, users were also asked to fill in a feedback
questionnaire that included questions about the difficulty of the
accomplished tasks and their overall satisfaction.
Our pilot study had a total of eight participants aged 28 to
47; half were men and half were women. All
participants used computers on a daily basis and had not
previously done their home shopping online. Half of the
participants carried out the tasks in one supermarket while the
other half in another online supermarket. While this is an initial
study, and further research and testing are needed to fully
validate the technique, the results showed that the emotional
cues identified always accompanied moments of error or
difficulty. Most importantly, the emotional cues sometimes
appeared without any relation to a specific negative event,
providing new information about the user experience that we
would not have collected had we gathered only the other,
common data. When this happened, we evaluated the event as
negative, and when five such events occurred during the user
evaluation we gave the evaluation a negative overall measure
for user experience.
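The five-negative-events rule just described can be sketched as a small scoring function. This is our illustration, not the authors' tooling: the study applied the rule by hand, and while the threshold of five negative events is the one reported, the positive/neutral branches below are our own assumption about how the remaining sessions would be labeled.

```python
def overall_experience(cue_valences, negative_threshold=5):
    """Reduce a session's observed cue valences (+1, 0, -1) to one label.

    The >= 5 negative events -> "negative" rule is the one reported in
    the study; the positive/neutral branches are an illustrative
    assumption, not taken from the paper.
    """
    negatives = sum(1 for v in cue_valences if v < 0)
    positives = sum(1 for v in cue_valences if v > 0)
    if negatives >= negative_threshold:
        return "negative"
    if positives > negatives:
        return "positive"
    return "neutral"

# A session with five frustration-related cues is rated negative overall.
print(overall_experience([-1, -1, -1, -1, -1, +1]))  # negative
```

The resulting label would then be read alongside the objective measures (time, errors, clicks) and the questionnaire responses, as the paper emphasizes.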
For our usability laboratory, we used Morae software [15].
Using this software, we captured a video image mixing the PC
screen and the participants’ faces. Additionally, Morae saved all
clicks and keyboard actions in a file. Capturing the user’s video
and audio allowed us to review specific moments as needed,
and to involve other observers that could not be present at the
time of the evaluation.
The interface evaluation analysis considered the three
dimensions. The cognitive dimension was analyzed through the
number of clicks, the time needed to accomplish the task, and
the number of errors. The user’s emotional perception was
gathered through the feedback questionnaire and the affective
dimension was observed through the 10 emotional cues. Users’
expressions and comments were also noted to support the
evaluation results. The tests were observed by a
multidisciplinary team: a user experience director, a
psychologist and a graphic designer.
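The three-dimensional record described above (cognitive, subjective and affective data per participant and task) could be represented as a simple data structure; the sketch below is illustrative, with field names of our own choosing rather than the authors'.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskEvaluation:
    """One participant-task record across the three dimensions.

    Field names are illustrative, not taken from the paper.
    """
    clicks: int                  # cognitive dimension: number of clicks
    seconds: float               # cognitive dimension: time on task
    errors: int                  # cognitive dimension: number of errors
    questionnaire_score: int     # subjective dimension: post-test rating
    cue_valences: List[int] = field(default_factory=list)  # affective: +1/0/-1 per observed cue

# Hypothetical record for one participant completing one task.
record = TaskEvaluation(clicks=42, seconds=310.0, errors=3,
                        questionnaire_score=4,
                        cue_valences=[-1, -1, 0, -1])
```

Keeping the three dimensions in one record makes discrepancies easy to spot, e.g. a high questionnaire score alongside several negative cues, which is exactly the mismatch the study reports.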
The observers' analysis of the recorded interactions and
observations concluded that the emotional cues,
together with the objective data, provided a more accurate
understanding of the user experience and level of satisfaction
than the questionnaires completed at the end of the test.
Participants who encountered errors and took longer
showed signs of frustration (emotional cues) during the
evaluation but did not mention them in the feedback
questionnaire. Participants often want to please the practitioner
and avoid criticism, so they do not want to give an overall
negative evaluation. In addition, they tend to attribute the cause
to their own lack of ability rather than to a design problem.
The analysis of these signals or emotional cues also showed
consistency of emotional expression across participants, as
they all expressed frustration in similar ways. However, these
cues were harder to identify when the participants did not
display many facial or bodily expressions. Some participants
were more expressive than others but the emotion heuristics
allowed us to provide a positive, neutral or negative value for
their overall experience. Identifying several emotional cues
during the user evaluation helped us understand whether the
participant had an overall emotionally positive or negative
experience.
Conclusions
In summary, and considering this as preliminary work that
requires further evaluation, measuring the affective dimension
with our observational system in conjunction with the other
data provided a better understanding of the user’s experience.
At the same time, it is important to note that not all emotions
can be identified through facial or bodily movements, so we
may not be able to evaluate every moment of frustration, anxiety,
or satisfaction, only the overall experience. Further research
should be conducted in evaluating emotions when these occur
without expression [4].
Our pilot study provides a preliminary guideline for
conducting a structured observation to evaluate the emotional
dimension during a user evaluation. The 10 emotion heuristics
provide researchers and practitioners with a set of guidelines
that can help them to begin incorporating the affective
dimension in their user evaluations.
Acknowledgments
This work has been partially supported by the Campus project
promoted by the Generalitat de Catalunya and by the Universitat
Oberta de Catalunya.
References
[1] Branco, P., Firth, P., Encarnação, L. M. and Bonato, P.
Faces of emotion in human-computer interaction. In Ext.
Abstracts CHI 2005 (Portland, OR, April 2-7, 2005), ACM
Press, 1236-1239.
[2] Chateau, N. and Mersiol, M. AMUSE: A tool for
evaluating affective interfaces. Unpublished paper.
Retrieved on 15 February, 2007.
[3] Chin, J.P., Diehl, V.A., & Norman, K. Development of an
instrument measuring user satisfaction of the human-
computer interface, in Proceedings of CHI 1988
(Washington DC, May 1988), ACM Press, 213-218.
[4] Dalgleish, T. and Power, M. (Eds.). Handbook of
Cognition and Emotion. John Wiley & Sons, Ltd., Sussex,
U.K., 1999.
[5] Darwin, C. The expression of the emotions in man and
animals. Oxford University Press, New York, NY.
[6] Desmet, P. M. A. Measuring emotions: Development of an
instrument to measure emotional responses to products. In
Blythe, M.A., Overbeeke, K., Monk, A.F. and Wright,
P.C. (Eds.), Funology: from usability to enjoyment.
Kluwer Academic Publishers, Dordrecht, Boston, London.
[7] Ekman, P. and Friesen, W. V. Constants across cultures in
the face and emotion. Journal of Personality and Social
Psychology, 17(2) (1971) 124-129.
[8] Ekman, P. and Friesen, W. V. Facial Action Coding
System: A technique for the measurement of facial
movement. Consulting Psychologists Press, Palo Alto, CA.
[9] Givens, D. B. The nonverbal dictionary of gestures, signs
and body language cues.
[10] Hazlett, R. Measurement of User Frustration: A Biologic
Approach. In Ext. Abstracts CHI 2003 (Fort Lauderdale, FL,
April 5-10, 2003), ACM Press, 734-735.
[11] Ives, B., Olson, M. H., and Baroudi, J. J. The measurement
of user information satisfaction. Communications of the
ACM, 26 (1983) 785-793.
[12] Izard, C. E. The Maximally Discriminative Facial
Movement Coding System (MAX). Instructional
Resources Center, University of Delaware, Newark, DE.
[13] Mahlke, S. and Minge, M. Emotions and EMG measures
of facial muscles in interactive contexts. Unpublished
paper. Retrieved on 24 February, 2007.
[14] Mandryk, R.L., Atkins, M.S. and Inkpen, K.M.. A
continuous and objective evaluation of emotional
experience with interactive play environments. In
Proceedings CHI 2006 (Montréal, Québec, Canada, April
22-27, 2006), ACM Press, 1027-1036.
[15] Morae, TechSmith: Usability Testing for Software and
[16] Partala, T. and Surakka, V. The effects of affective
interventions in human-computer interaction. Interacting
with Computers, 16 (2004) 295-309.
[17] Picard, R. W., and Daily, S.B. Evaluating affective
interactions: Alternatives to asking what users feel.
Presented at CHI 2005 Workshop 'Evaluating Affective
Interfaces' (Portland, OR, April 2-7,2005).
[18] Pope, L. K. and Smith, C. A. On the distinct meanings of
smiles and frowns. Cognition and Emotion, 8 (1994) 65-
[19] Scherer, K., Wallbott, H.G. and Summerfield, A.
Experiencing Emotion: A cross-cultural study. Cambridge
University Press, Cambridge, 1986.
[20] Spillers, F.: Emotion as a Cognitive Artifact and the
Design Implications for Products That are Perceived As
Pleasurable. Retrieved on 18 February, 2007 from
... They were observed to be excited during their interaction with the system app and many of them were unwilling to disengage from the app after the session. This attitude is known as frowning according to the ten emotion heuristics [57] which means that the respondents experienced perplexed and deep reflection on the application. This is positive feedback reflecting that the respondents do not experience any psychological reactance during their interaction with the app. ...
Full-text available
span>A persuasive agent makes use of persuasion attributions to ensure that its predefined objective(s) is achieved within its immediate environment. This is made possible based on the five unique features namely sociable, persuasive, autonomy, reactive, and proactive natures. However, there are limited successes recorded within the behavioural intervention and psychological reactance is responsible for these failures. Psychological reactance is the stage where rejection, negative response and frustration are felt by the users of the persuasive system. Thus, this study proposes a persuasive agent (PAT) architecture that limits the experience of psychological reactance to achieve an improved behavioural intervention. PAT architecture adopted the combination of the reactance model for behavior change and the persuasive design principle. The architecture is evaluated by conducting an experimental study using a user-centred approach. The evaluation reflected that there is a reduction in the number of users who experienced psychological reactance from 70 per cent to 3 per cent. The result is a better improvement compared with previous outcomes. The contribution made in this study would provide a design model and a steplike approach to software designers on how to limit the effect of psychological reactance on persuasive system applications and interventions.</span
... En otra investigación se ha desarrollado un conjunto de lineamientos para evaluar de forma eficiente, rápida y a un bajo costo el estado afectivo de los usuarios a través de la valoración de sus reacciones expresivas durante el proceso de evaluación de una interfaz. Complementa el análisis de los datos objetivos y cuantitativos obtenidos a través de las pruebas de usabilidad y de las respuestas subjetivas proporcionadas por los cuestionarios realizados tras las pruebas (Lera Garreta-domingo, 2007). ser económica, rápida y fácil de entender e incorporarlo a pruebas de evaluación de usabilidad tradicionales (Agarwal Meyer, 2009). ...
Full-text available
El nivel académico que pretende abordar el libro es en las diferentes licenciaturas e ingenierías que consideren tópicos en el área de interacción humano computadora, así como los posgrados relacionados con la temática. El libro describe diferentes tipos de interacción que pueden ofrecer las aplicaciones interactivas donde a través de la interfaz gráfica es posible guiar y hacer fácil la tarea del usuario. Así pues, el lector puede encontrar diferentes trabajos que especifican desde los requerimientos del usuario hasta el diseño, programación y evaluación de distintas aplicaciones interactivas.
Full-text available
Heuristic evaluation (HE) is widely used to evaluate various software, interfaces, systems, or application domains, but using generic heuristics will fail to detect specific usability issues. Researchers currently rely on traditional heuristics such as Nielsen’s, SMART, SMASH, or other domain-specific heuristics, which are unsuitable for a modern API system. Although several researchers have successfully developed heuristics specifically for modern systems such as web apps or smartphone apps, there is a lack of specific heuristics for Progressive Web Applications (PWA), a hybrid of native and web apps. Therefore, this paper reports the systematic development of specific heuristics for PWA using a standard eight-step methodology: Exploratory, Experimental, Descriptive, Correlation, Selection, Specification, Validation, and Refinement. The PWA heuristics called PWAUH developed in this study consist of 15 heuristics. We have validated our initial PWAUH with five experts from academia and industry, and the initial 14 heuristics are suitable for further expert validation. Subsequently, PWAUH was further validated using three different PWA domains, cultural heritage, stock photo industry, and marketplace, with sixteen experts ranging from academicians, UX researchers, R&D executives, and postgraduate students in HCI. An additional heuristic is included in the list in the refinement stage. Findings show that PWAUH is effective in identifying usability issues.
Analyzing users’ emotional aspects when interacting with computational solutions is a challenge for Computing professionals. In several situations, this kind of evaluation is the responsibility of the domain specialist. This study seeks to bring together different instruments for evaluating emotional responses in a framework named EmoFrame. It is possible to guide computer professionals in choosing the appropriate artifacts for their evaluations, depending on the solution developed and their use context. We developed a medium-fidelity prototype of the framework, and a first validation was carried out by Health and Computer specialists. It is also in the interest of this research to identify possible assessment protocols or instruments from other domains, which can be computerized with the support of these domain professionals and, later, become part of the EmoFrame.
This study aims to design a usability test enhanced with subjective usability evaluation methods (concurrent think-aloud, post-task and posttest questionnaires) and objective usability evaluation methods (eye-tracking, emotion recognition and logging technologies). Furthermore, the relationships between the subjective metrics such as task difficulty level, PSSUQ (Post-Study System Usability Questionnaire) scores and objective metrics such as task completion time, eye tracking metrics (fixation count, fixation duration, average fixation duration, saccade count, saccade duration, scanpath length, blink count), logging metrics (keying and back keying count) and emotion recognition metrics (negative emotions count) are aimed to be analyzed. Therefore, a user testing study with 38 participants was conducted to evaluate the usability level of the TV interface of Digiturk, which is one of the digital TV broadcasting platforms in Turkey. The participants completed ten tasks related to the features of the TV interface, such as VOD (video-on-demand) watching, channel locking, and recording for future watches. Whether there is a significant difference between tasks in terms of subjective and objective metrics is investigated by using one-way ANOVA. The results show that while task completion time and task difficulty increase, values of all objective metrics increase. The relationships between the subjective and objective metrics are measured with Pearson correlation and the results show that there is a significant relationship between every subjective and objective metric except PSSUQ scores, which shows the perceived satisfaction level of the participants. Furthermore, exploratory factor analysis is performed to explore the latent structure of the usability measures. As a result of the factor analysis, a two-factor structure in which objective and subjective metrics load on two separate factors is obtained.
The rise of digital health led to the accelerated emergence of health and wellness apps aimed to alter behaviours. Despite the prominence of these apps, there are no systematic approaches to evaluate their success. This paper introduces 13 heuristics, or guidelines, that facilitate the evaluation of digital solutions aimed to improve health and wellness outcomes. The present heuristic evaluation serves as a tool for teams to assess whether an app may be conducive to meaningful behaviour change. The heuristic evaluation is rooted in a framework centered around the intricate relationship between an app, user, and environment. We refer to it as the ARC framework. The evaluation within the framework is not intended to replace existing ones such as Nielsen and Molich’s usability heuristic evaluation, but to complement them. Specifically, the present evaluation does not concern itself with the user-friendliness of a product; it assesses whether an app is conducive to meaningful behaviour change. Needless to say, the app may be informed by cognitive neuroscience findings and aligned with behaviour change best practices; however, without the usability layer, users would not make it far enough to achieve their behavioural outcomes. The proposed heuristic evaluation fills a gap for evaluating whether a digital solution has the potential to lead to meaningful and long-lasting behaviour change.
Emotion is critical to human experience and is an integral part of perception, cognition, and behavior. While the mechanisms and precise measurement of emotion have been debated in the literature, researchers have theorized that emotion is related to activation across physiological, experiential, and behavioral response systems. This has important implications for human factors research and practice. Just as the cognitive task analysis was proposed to be used in tandem with hierarchical task analyses to account for the cognitive components of a task, the current paper proposes a method to account for the emotional components of a task. The emotion task analysis is proposed for the purpose of explicating emotion from an individual: specifically, it accounts for the emotional load and/or processes elicited during a task, and it is designed to be used in tandem with previously established task analyses by usability/user experience researchers and human factors practitioners.
Video games are the most economically profitable entertainment industry. The nature of their design means that user experience is enriched by emotional, cultural, and other subjective factors that make design and/or evaluation difficult using the traditional methods commonly applied to interactive systems. It is therefore necessary to know how to apply Playability in order to design, analyze, optimize, and adapt a game to a player's preferences. In this chapter, the authors present a way to evaluate UX based on Playability techniques, adding hedonic factors that enrich the development of video games. The aim is to easily and cost-effectively analyze the different degrees of Playability within a game and determine how player experience is affected by different game elements. These results can be applied in the educational field, where pupils' experience with educational video games is a crucial factor in the success of the learning process.
An important issue in the capture of the real user experience while interacting with technology is the ability to assess emotional quality. There are several methods for emotional quality evaluation in the literature. However, when the target users are deaf participants, communication problems arise. A substantial part of the deaf community communicates in sign language. Because user experience researchers are seldom fluent in sign language, they require the assistance of an interpreter whenever users are deaf signers. Evaluation of emotional quality may lose accuracy because of the mediation of an interpreter; consequently, emotional quality evaluation requires a special instrument that can be used in an intuitive and independent way by researchers and their deaf subjects. The authors present the process of creation and improvement of Emotion-Libras, an instrument for assessing the emotional quality of people with hearing disabilities when interacting with technology.
In this position paper, we address three topics regarding the use of users' facial expressions to study emotion in HCI: theories of emotion as a basis for research, EMG measures of facial muscles to study users' emotions, and the application of emotion recognition methods in user experience studies.
Product design that provides aesthetic appeal, pleasure and satisfaction can greatly influence the success of a product. Traditional cognitive approaches to product usability have tended to underestimate emotion or fragment it from an understanding of the user experience. Affect, which is inextricably linked to attitudes, expectations and motivations, plays a significant role in the cognition of product interaction, and can therefore be usefully treated as a design aid. Emotion influences and mediates specific aspects of interaction before, during and after the use of a product. These affective states regularly impact how a user manipulates and explores a user interface in order to support a desired cognitive state. To better understand the specific qualities of user experience that impact desirability and pleasurability, it is necessary to understand how artifacts trigger and mediate affect and how these processes aid user cognition during interaction. The implication for design is that emotion acts as a critical component of artifact sense-making and determines how artifacts are interpreted (Rafaeli and Vilnai-Yavetz, 2003). Designers who understand how cognitive artifacts interchange with affective artifacts will be better able to support actual product use and perceived pleasure.
This study examines the information encoded by the eyebrow frown and the smile. In a conceptual replication of Smith (1989), subjects imagined themselves in pleasant and unpleasant scenarios while muscle activities in the eyebrow and cheek regions were monitored electromyographically. Brow region activity was related to evaluations of motivational incongruence and perceived goal-obstacles, and, after taking these relationships into account, was uncorrelated with subjective pleasantness. In contrast, cheek activity was associated with subjective pleasantness and, after taking this relationship into account, nothing else. These results contribute to the literature indicating that individual components of facial expressions directly encode information about emotional state and clarify the nature of the information encoded by two of these components.
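The "after taking these relationships into account" logic in the abstract above is that of a partial correlation: the brow-pleasantness association disappears once appraised goal-obstacles are controlled for. The sketch below illustrates that logic with NumPy, regressing out the shared covariate and correlating the residuals. All data are synthetic; this is not the study's dataset or exact analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Synthetic appraisal and response variables: brow EMG tracks goal-obstacles,
# and pleasantness is (negatively) driven by the same obstacles.
obstacles = rng.normal(size=n)
brow_emg = 0.8 * obstacles + rng.normal(scale=0.3, size=n)
pleasantness = -0.7 * obstacles + rng.normal(scale=0.5, size=n)

def residualize(y, x):
    """Residuals of y after regressing out x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Zero-order correlation: brow activity appears related to pleasantness...
r_zero = np.corrcoef(brow_emg, pleasantness)[0, 1]

# ...but the partial correlation, controlling for goal-obstacles, shrinks
# toward zero, mirroring the pattern the abstract describes.
r_partial = np.corrcoef(residualize(brow_emg, obstacles),
                        residualize(pleasantness, obstacles))[0, 1]
print(f"zero-order r = {r_zero:.2f}, partial r = {r_partial:.2f}")
```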
The design of affective interfaces introduces new challenges for their assessment. New methods and tools have to be developed that take into account the emotional dimension of human-computer interaction. This paper presents AMUSE, a tool that allows such evaluations to be conducted by collecting and aggregating different sources of data, including the user's eye gaze and physiological data. Some results of an experiment on simulated Embodied Conversational Agents (ECAs) are briefly presented as an illustration of the new possibilities of investigation brought by AMUSE.
The present study investigated the psychophysiological effects of positive and negative affective interventions in human–computer interaction during and after the interventions. Eighteen subjects were exposed to pre-programmed mouse delays in an interactive problem-solving task. Following the mouse delays three types of conditions were used: positive or negative interventions given via speech synthesizer, and no intervention. Facial electromyographic responses were recorded from the zygomaticus major and corrugator supercilii muscle sites. These muscles control smiling and frowning, respectively. Smiling activity was significantly higher during the positive than the other conditions. It was also significantly higher after the positive interventions than the no intervention condition. The frowning activity attenuated significantly more after the positive interventions than the no intervention condition. Following the positive interventions the users' problem solving performance was significantly better than after no intervention. In all, the results suggest that both types of affective intervention had beneficial effects over ignoring the user. The results suggest further that positive intervention may be especially useful.
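The within-subject comparisons reported above (e.g. smiling activity higher during positive interventions than with no intervention) are the kind of contrast a paired t-test captures. A minimal SciPy sketch, assuming synthetic EMG values; the sample size matches the study's eighteen subjects, but the effect sizes and units are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 18  # the study reports eighteen subjects

# Synthetic zygomaticus (smiling) EMG per subject, in arbitrary units:
# a no-intervention baseline and an elevated positive-intervention condition.
baseline = rng.normal(1.0, 0.2, n)
positive = baseline + rng.normal(0.3, 0.1, n)

# Paired t-test: same subjects measured in both conditions.
t, p = stats.ttest_rel(positive, baseline)
print(f"t({n - 1}) = {t:.2f}, p = {p:.4f}")
```

A paired design is what gives such a small sample its power here: each subject serves as their own control, removing between-subject variability from the comparison.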