ORIGINAL ARTICLE
Measuring instant emotions based on facial expressions
during computer-based assessment
Vasileios Terzis · Christos N. Moridis · Anastasios A. Economides

Information Systems Department, University of Macedonia, Egnatia Street 156, Thessaloniki 54006, Greece
e-mail: bterzis@otenet.gr (V. Terzis); papaphilips@gmail.com (C. N. Moridis); economid@uom.gr (A. A. Economides)

Received: 14 February 2011 / Accepted: 17 August 2011
© Springer-Verlag London Limited 2011
Pers Ubiquit Comput, DOI 10.1007/s00779-011-0477-y
Abstract Emotions are very important during learning and assessment procedures. However, measuring emotions is a very demanding task, and several tools have been developed and used for this purpose. In this paper, the efficiency of FaceReader during a computer-based assessment (CBA) was evaluated. Instant measurements of FaceReader were compared with the researchers' estimations regarding students' emotions. The observations took place in real time, in a properly designed room. Statistical analysis showed some differences between FaceReader's and the researchers' estimations regarding the Disgusted and Angry emotions. Results showed that FaceReader is capable of measuring emotions with an efficacy of over 87% during a CBA, and that it could be successfully integrated into a computer-aided learning system for the purpose of emotion recognition. Moreover, this study provides useful results on the emotional states of students during CBA and learning procedures. To our knowledge, this is the first time that students' instant emotions were measured during a CBA based on their facial expressions. Results showed that most of the time students were experiencing the Neutral, Angry, and Sad emotions. Furthermore, gender analysis highlights differences between the genders' instant emotions.
Keywords FaceReader · e-Learning · Computer-based assessment · Emotion recognition
1 Introduction
Measuring emotions could be crucial in fields as varied as
psychology, sociology, marketing, information technology,
and e-learning. Consequently, several researchers have
developed their own instruments to assess emotions [1].
The core channels/methods for measuring emotions are the
following [2]: (1) questionnaire, (2) personal preference
information, (3) speech recognition, (4) physiological data,
and (5) facial expressions. Although this paper evaluates and uses the facial expressions method, the following paragraphs briefly highlight the main points of the aforementioned emotion recognition methods.
Many researchers have used static methods such as
questionnaires and dialogue boxes, in order to infer a user’s
emotions. These methods are easy to administer but have
been criticized for being static and thus not able to
recognize changes in affective states. Moreover, Oatley
recognized that self-reporting of emotions simplifies the
recognition problem [3]. However, Dieterich, Malinowski, Kühme, and Schneider-Hufschmidt stated that this
approach transfers one of the hardest problems in adaptive affective interfaces from the computer to the user [4]. Thus, another advantage of the questionnaire is that it provides feedback from the user's point of view and not an outsider's [1]. Questionnaires can be used to infer users' emotions, either stand-alone or assisting another affect recognition method. On the other hand, the way questions are framed and demonstrated [5], the order in which questions are asked, and the terminology employed in questions are all known to affect the subject's responses
[6, 7]. Similarly, there is evidence that judgments on rating scales are non-linear and that subjects hesitate to use the extreme ends of a rating scale [8]. Hence, when using verbal scales, one should make sure that the terminology employed and the context in which it is presented really reflect the subjective significance of the subject population [9].
Emotional recognition frameworks using personal preference information are based on the assumption that people do not necessarily recognize emotions just by signals seen or heard; they also use a high level of knowledge and reasoning to process the goals, situations, and preferences of the user. A person's emotions could be predictable if their goals and perception of relevant events were known [10]. Implemented in a computational model, this can be achieved by using agents and artificial intelligence techniques, reasoning on goals, situations, and preferences
[11]. For example, if the system can reason about the reactions of a user from the input it receives (assumptions derived from the time of day, reading speed, provided personal information, etc.), appropriate content could be displayed in a way adapted to the emotion or mood of the user.
The modulation of voice intonation is one of the main channels of human emotional expression [12]. Certain emotional states, such as anger, fear, or joy, may produce physiological reactions [13], such as an increase in cardiac vibrations and more rapid breathing. These in turn have quite mechanical and thus predictable effects on speech, particularly on pitch (fundamental frequency F0), timing, and voice quality [14]. Some researchers have investigated the existence of reliable acoustic correlates of emotion in the acoustic characteristics of the signal [12, 15]. Their results agree on the speech correlates that are derived from physiological constraints and correspond with broad classes of basic emotions, but disagree and are unclear concerning the differences between the acoustic correlates of fear and surprise, or boredom and sadness. This is perhaps explained by the fact that fear produces physiological reactions similar to surprise, and boredom produces physiological reactions similar to sadness; consequently, very similar physiological correlates result in very similar acoustic correlates [14]. The task of machine recognition of basic emotions in non-formal everyday speech is extremely challenging.
Another valuable channel for emotional detection derives from the measurement of physiological quantities, such as temperature or blood pressure. This is important not only for the study of physiological processes and the clinical diagnostics of various diseases, but also for the estimation of emotional states. William James was the first to propose that patterns of physiological response could be used to recognize emotion [16]. Psychologists have been using physiological measures as identifiers of human emotions such as anger, grief, and sadness [17]. Usually, changes in emotional state are associated with physiological responses such as changes in heart rate, respiration, temperature, and perspiration [18]. The use of engineering techniques and computers in physiological instrumentation and data analysis is a new, challenging research practice, especially with regard to emotional recognition. For instance, researchers at the MIT Media Laboratory have been using sensors that detect galvanic skin response (GSR), blood volume pulse, respiration rate, and electromyographical activity of muscles [19]. The emotion mouse, an example of recent advances in affective computing, measures the user's skin temperature, galvanic skin response (GSR), and heart rate, and uses these data to categorize the user's emotional state [20]. It has also been suggested that facial electromyography (EMG) could be a potentially useful input signal in HCI [21, 22]. Therefore, there is a need for adequate measures to associate physiological measurements with definite emotional states, in order to assign them to conditions meaningful to a computer [23]. Since the physiological state is so closely associated with the affective state, an accurate model of physiological response could enable computer interactive environments to effectively determine a user's affective state and guide appropriate customized interactions [24]. Nevertheless, subjective and physiological measures do not always agree, which indicates that physiological data may detect responses that users are either unconscious of or cannot recall at a post-session subjective assessment [25]. Moreover, the sensors might often fail and produce missing or unfavorable data, a common problem in many multimodal scenarios, resulting in a considerable reduction in the performance of the pattern recognition system [26].
Research evidence supports the existence of a number of universally recognized facial expressions for emotions such as happiness, surprise, fear, sadness, anger, and disgust [27]. Therefore, estimating emotional experiences from objectively measured facial expressions has become an important research topic. Facial recognition systems employ advanced video-based techniques [28] or measure the electrical activity of muscles with EMG (facial electromyography) [21].

An important issue is that many of the existing facial recognition systems rely on analyzing single facial images instead of tracking the changes in facial expressions continuously [29]. It would be more meaningful if computerized learning environments could analyze the student's facial expressions continuously, so as to react to changes in the student's emotional state at the right time. Relative to this, Essa and Pentland made the point that the lack of temporal information is a significant limitation in many facial expression recognition systems. Consequently,
methods for analyzing facial expressions in human–computer interaction, especially those concerning computer-aided learning systems, should incorporate a real-time analysis [28]. This can be achieved either by using advanced video-based techniques [28] or by measuring the electrical activity of muscles with EMG (facial electromyography) [21].
At present, different machine vision techniques using video cameras are the predominant methods for measuring facial expressions [30–32]. A notable application is the FaceReader, recently developed by Vicar Vision and Noldus Information Technology bv. The FaceReader recognizes facial expressions by distinguishing six basic emotions (happy, angry, sad, surprised, scared, and disgusted) plus a neutral state, with an accuracy of 89% [33]. The system is based on Ekman and Friesen's Facial Action Coding System (FACS), which states that basic emotions correspond with facial models [34]. Several studies have used FaceReader for different purposes [35, 36].
With regard to learning, there have been very few approaches for the purpose of affect recognition. A real-time analysis should be incorporated into human–computer interaction [2], especially concerning computer-aided learning systems. Previous studies in different fields showed that FaceReader is a reliable measuring tool [35, 36]. However, learning and self-assessment are procedures with particular characteristics.
This paper evaluated the effectiveness of FaceReader 2.0 during a computer-based assessment (CBA). Accordingly, FaceReader's efficiency was measured against the observations of two experts. Moreover, the proportions of students' seven basic emotions were estimated during the CBA and were also compared between genders.
2 Methodology
The course was an introductory informatics course in the Department of Economic Sciences of a Greek university. The course contains theory and practice. In the theoretical module, students have to learn general concepts of Information and Communication Technology (ICT). In the practical module, students have to learn how to use word processing and the Internet. The computer-based assessment (CBA) includes questions from both modules.
A total of 208 students enrolled to participate in the computer-based assessment. The next step was the arrangement of appointments. Finally, 172 of the 208 applicants attended their appointments: 60 males (35%) and 112 females (65%). The average age of the students was 18.4 (SD = 1.01). The CBA was voluntary. It consisted of 45 multiple-choice questions, each with 4 possible answers, and its duration was 45 min. The sequence of questions was randomized.
The use of the CBA was very simple. Each student had to choose the right answer and then push the "next" button. Each page included the question, the 4 possible answers, and the "next" button. The text was in Greek. Teachers did not offer any additional instruction at the beginning. Only a few students, who were not very comfortable with the use of the assessment and asked for help, received further information and instructions. The CBA's appearance was also kept simple, in order to avoid any effects of design and esthetics.
During the evaluation stage of a system, the effects of human–computer interaction (HCI) are often examined in what is called the "wizard of Oz" mode, where a researcher hidden behind a curtain controls the system and makes observations [37]. Accordingly, each student took the test alone in a properly designed room. The room had two spaces, separated by a bulkhead. In the first space was the PC on which the CBA took place; the FaceReader camera was hidden in a bookcase. Besides, it is well known that people express themselves more freely when they feel that they are on their own.
The two researchers were in the second space. FaceReader was connected to another PC there, so the researchers were able to watch the facial expressions and emotions of the participants in real time. The two researchers were also able to observe the student's actions during the test through VNC viewer software, which presented the student's screen in a separate window on the researchers' screen (Fig. 1). Each researcher simultaneously recorded the student's emotions as measured by FaceReader and his/her own estimation of the student's emotions, based on the student's facial expressions and actions.

In a live analysis, FaceReader's output is a number of charts and files. Each emotion is expressed as a value between 0 and 1, indicating the intensity of the emotion.
Fig. 1 Researchers’ screen: FaceReader and VNC viewer (student’s
screen)
"0" means that the emotion is not visible in the facial expression, and "1" means that the emotion is fully present. Only emotions with a value ≥ 0.5 were evaluated by the researchers. Changes in FaceReader's measurements, in relation to the student's facial expressions and/or actions observed by the researchers during the test, determined whether a FaceReader measurement was confirmed or not.
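To make the filtering step concrete, the following minimal sketch (in Python) applies the ≥ 0.5 intensity threshold to one frame of per-emotion scores. The record format is hypothetical and for illustration only; it is not FaceReader's actual export format or API.

```python
# Minimal sketch of the >= 0.5 intensity filter described above.
# The per-frame record format is hypothetical, not FaceReader's real export.
THRESHOLD = 0.5
EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised", "scared", "disgusted"]

def visible_emotions(record):
    """Return the emotions whose intensity (0..1) reaches the evaluation threshold."""
    return [e for e in EMOTIONS if record.get(e, 0.0) >= THRESHOLD]

# Example: a frame where Neutral and Angry co-appear, as often observed in this study.
frame = {"neutral": 0.72, "angry": 0.55, "sad": 0.10}
print(visible_emotions(frame))  # ['neutral', 'angry']
```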
The purpose of this study has two dimensions in the context of CBA: the first is to examine FaceReader's efficiency in measuring students' instant emotions, and the second is to provide empirical data concerning those emotions.
3 Results
First, it had to be examined whether the two researchers' estimations were statistically different. It was important to show that these estimations did not depend on the individual researcher, i.e., that another researcher would have a good chance of obtaining the same results if the experiment were repeated. Thus, a contingency table was created for each emotional state and overall. The two groups were the two researchers, and the outcomes were agreement and disagreement with FaceReader (Table 1). The difference between the two researchers is not statistically significant for any emotional state or overall.
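As an illustration, the Disgusted row of Table 1 can be checked with a standard 2 × 2 chi-square test. The sketch below uses SciPy, whose default Yates continuity correction for 2 × 2 tables reproduces the reported values; the paper does not state which statistical software the authors used.

```python
from scipy.stats import chi2_contingency

# Disgusted row of Table 1: agreement/disagreement with FaceReader,
# split by researcher (researcher 1 vs. researcher 2).
table = [[170, 125],   # agreement
         [80,  46]]    # disagreement

chi2, p, dof, expected = chi2_contingency(table)  # Yates correction by default
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")          # approx. chi2 = 1.03, p = 0.31
```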
Secondly, for the 172 students, 7,416 different emotional states were recorded by FaceReader. Table 2 shows the results for each emotional state. The second column shows confirmed records, i.e., FaceReader records that were also confirmed by the researchers. In contrast, the third column shows all FaceReader records during the CBA (confirmed and not confirmed). Researchers and FaceReader had almost the same opinion regarding the Neutral (99%) and Happy (90%) emotions, and high agreement for the Scared (87%), Surprised (82%), and Sad (79%) emotions. However, agreement was lower for the Disgusted (70%) and Angry (71%) emotions. Nevertheless, overall agreement between the emotions measured by FaceReader and the researchers' opinions was high (87%).
Moreover, Table 3 shows the agreement between the researchers and FaceReader on the emotional states observed in each gender. The fourth column of Table 3 presents the proportion of confirmed records to total (confirmed and not confirmed) FaceReader records for each emotion in each gender. The null hypothesis was that the proportions of confirmed records to total FaceReader records for each emotion would not differ statistically between the genders.
Table 1 Contingency table

Emotion      Outcome        Researcher 1   Researcher 2   Total   Chi square   p value
Disgusted    Agreement      170            125            295     1.03         0.31
             Disagreement   80             46             126
Surprised    Agreement      130            85             215     0.29         0.59
             Disagreement   31             16             47
Neutral      Agreement      1,985          1,576          3,561   1.29         0.26
             Disagreement   30             16             46
Happy        Agreement      160            103            263     3.06         0.08
             Disagreement   23             6              29
Angry        Agreement      694            631            1,325   2.73         0.10
             Disagreement   309            236            545
Scared       Agreement      110            85             195     0.34         0.56
             Disagreement   18             10             28
Sad          Agreement      281            305            586     0.28         0.59
             Disagreement   70             85             155
Total        Agreement      3,530          2,910          6,440   2.33         0.12
             Disagreement   561            415            976
The results of the Z test are presented in columns 5 and 6 of Table 3. For the Neutral, Happy, and Angry emotions, FaceReader showed almost the same results for both genders. The Scared emotion was recognized better by FaceReader for males than for females, with a statistically significant difference. Finally, the Sad emotion was recognized better for females than for males, also with a statistically significant difference. Thus, gender differences concerning FaceReader's performance were observed in 2 out of the 7 emotional states.
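The paper does not specify the exact variant of the z-test used. A pooled two-proportion z-test with continuity correction closely reproduces the values reported in Table 3, as the following sketch illustrates for the Scared emotion (62/63 confirmed records for males vs. 133/160 for females).

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test with pooled variance and continuity correction.
    This variant closely matches the z values reported in Table 3."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
    cc = 0.5 * (1 / n1 + 1 / n2)                 # continuity correction
    return (abs(p1 - p2) - cc) / se

z = two_proportion_z(62, 63, 133, 160)  # Scared: male vs. female
print(f"z = {z:.3f}")  # approx. 2.877 (reported: 2.876); |z| > 1.96 => significant
```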
Table 4 demonstrates the confirmed (column 2) and total (column 3) proportions of each instant emotion's records out of the overall records during the CBA. A Z test was also used to compare the proportions of the two groups, determining whether they are significantly different from one another. It was expected that Neutral would be the instant emotion with the highest proportion. During the CBA, students' facial expressions stayed calm; students changed their facial expressions instantly only when they read questions or answers that provoked negative or positive emotions. However, the percentage of Neutral's appearances among the overall emotions observed by FaceReader alone was lower (48%) than the percentage of confirmed Neutral appearances (55%) among the overall confirmed emotions (observed by FaceReader and confirmed by the researchers). The co-appearance, in FaceReader's observations, of Neutral with other emotions such as Angry and Disgusted increased the total records and thus decreased Neutral's percentage. In such cases, the researchers most of the time agreed only on the Neutral observation.
Table 2 FaceReader and researchers' agreement on various emotional states

Emotion     Confirmed records   Total records   Confirmed/total (%)
Disgusted   295                 421             70
Surprised   215                 262             82
Neutral     3,561               3,607           99
Happy       263                 292             90
Angry       1,325               1,870           71
Scared      195                 223             87
Sad         586                 741             79
Total       6,440               7,416           87

(Confirmed records: FaceReader records also confirmed by the researchers. Total records: confirmed and not confirmed FaceReader records.)
Table 3 FaceReader and researchers' agreement on the emotional states observed for each gender

Emotion            Confirmed records   Total records   Confirmed/total (%)   Z test   Significant difference
Disgusted male     131                 198             66                    1.544    No
Disgusted female   164                 223             73
Surprised male     82                  93              88                    1.743    No
Surprised female   133                 169             78
Neutral male       1,196               1,205           99                    1.837    No
Neutral female     2,365               2,402           98
Happy male         68                  73              93                    0.791    No
Happy female       195                 219             89
Angry male         563                 779             72                    1.088    No
Angry female       762                 1,091           70
Scared male        62                  63              98                    2.876    Yes
Scared female      133                 160             83
Sad male           200                 272             74                    2.736    Yes
Sad female         386                 469             82
Total male         2,302               2,683           86                    1.959    No
Total female       4,138               4,733           87
On the other hand, the percentage of confirmed Disgusted and Angry emotions among the overall confirmed observations was lower than among the overall observations of FaceReader alone. However, Surprised, Happy, Scared, and Sad were not statistically different, which indicates that FaceReader's and the researchers' observations agreed on these emotions during the CBA. The results also showed that "negative" emotions (Angry, Sad, and Disgusted) appeared more often than positive emotions such as Happy.
Table 5 demonstrates the confirmed (column 2) and total (column 3) percentages of instant emotions for each gender. Neutral and Angry were statistically different for both genders. However, Disgusted was statistically different only for males, which indicates agreement between FaceReader's and the researchers' observations concerning females' Disgusted emotion. For the Happy, Scared, Surprised, and Sad emotions, FaceReader's and the researchers' observations were statistically indistinguishable in both genders.

Moreover, we compared the two genders' confirmed percentages for each emotion out of the overall confirmed records during the CBA. Table 6 shows whether the differences between the two genders are statistically significant. Results indicated that males were more Disgusted and Angry than females. On the other hand, females showed Neutral and Happy facial expressions significantly more often than males. Surprised, Scared, and Sad showed no significant difference between the two genders regarding confirmed records.
4 Discussion
Measuring instant emotions through facial expressions is a well-known method. However, this knowledge and technology have not yet been extensively used in learning environments. The aim of this study was firstly to examine the effectiveness of FaceReader during a computer-based assessment. In parallel, we reported the percentages of the instant emotions that arose during the CBA; in other words, we presented how the students felt, moment to moment, while taking the CBA. Furthermore, we extended our analysis to gender in order to highlight differences between males and females.

Results showed that FaceReader is capable of measuring emotions with an efficacy of over 87% during CBA (Fig. 2)
Table 4 Confirmed and total record percentages for each emotion out of overall records during CBA

Emotion     Confirmed records (%)   Total records (%)   Z test   Significant difference
Disgusted   4.58                    5.68                2.879    Yes
Surprised   3.34                    3.53                0.565    No
Neutral     55.30                   48.64               7.808    Yes
Happy       4.08                    3.94                0.376    No
Angry       20.57                   25.22               6.461    Yes
Scared      3.03                    3.01                0.019    No
Sad         9.10                    9.99                1.747    No
Table 5 Confirmed and total record percentages for each emotion out of overall records during CBA, for each gender

Emotion            Confirmed records (%)   Total records (%)   Z test   Significant difference
Disgusted male     5.69                    7.38                2.339    Yes
Disgusted female   3.96                    4.71                1.673    No
Surprised male     3.56                    3.47                0.095    No
Surprised female   3.21                    3.57                0.874    No
Neutral male       51.95                   44.91               4.931    Yes
Neutral female     57.15                   50.75               6.01     Yes
Happy male         2.95                    2.72                0.403    No
Happy female       4.71                    4.63                0.128    No
Angry male         24.46                   29.03               3.595    Yes
Angry female       18.41                   23.05               5.337    Yes
Scared male        2.69                    2.35                0.675    No
Scared female      3.21                    3.38                0.387    No
Sad male           8.69                    10.14               1.695    No
Sad female         9.33                    9.91                0.887    No
and that it could be successfully integrated into a computer-aided learning system for the purpose of emotion recognition. Specifically, FaceReader successfully recognized the Surprised, Happy, Scared, and Sad emotions, and it was also successful for Neutral (Fig. 2). Moreover, the results indicated that FaceReader showed no significant differences in emotion recognition between the genders, except for the Sad and Scared emotions (Fig. 3): for Sad, FaceReader was more successful with females, while for Scared it was more effective with males.
Our analysis showed limitations concerning the distinction between Neutral, Angry, and Disgusted for males during the CBA. Practitioners and researchers could improve facial emotion recognition methods so that they distinguish more effectively between Neutral, Angry, and Disgusted in the context of CBA. Specifically, Figs. 4, 5, and 6 show examples of FaceReader's limitations during the CBA. As discussed earlier, in most of the cases where FaceReader simultaneously measured Angry and Disgusted, the researchers agreed only with the presence of an Angry emotion (Fig. 4). Some movements of the jaw, mouth, and nose may have interfered with FaceReader's accuracy.
Additionally, FaceReader often measured an Angry emotion simultaneously with a Neutral one, but Neutral was the only emotion confirmed by the researchers (Fig. 5). This particular disagreement was expected.
Table 6 Statistical significance of the differences between the genders' confirmed percentages for each emotion out of overall confirmed records during CBA

Emotion     Male (%)   Female (%)   Z test   Significant difference
Disgusted   5.69       3.96         3.12     Yes
Surprised   3.56       3.21         0.677    No
Neutral     51.95      57.15        3.996    Yes
Happy       2.95       4.71         3.354    Yes
Angry       24.46      18.41        5.724    Yes
Scared      2.69       3.21         1.091    No
Sad         8.69       9.33         0.811    No
Fig. 2 FaceReader and researchers' agreement on various emotional states (percentage of confirmed/total records: Disgusted 70%, Surprised 82%, Neutral 99%, Happy 90%, Angry 71%, Scared 87%, Sad 79%, Total 87%)
Fig. 3 FaceReader and researchers' agreement on various emotional states observed regarding each gender (percentages of confirmed/total records as in Table 3). *Emotions with significant differences regarding emotion recognition between genders
Fig. 4 Angry and Disgusted emotions co-appearance
When participants read the questions, many of them had a clouded brow, a facial expression people assume when reading something with great concentration; Zaman and Shrimpton-Smith came to the same result [1]. This may be the reason why FaceReader so frequently measured an Angry emotion at the same time as a Neutral one.
Moreover, FaceReader faced limitations with participants who wore glasses or had piercings. Other problems were caused by special characteristics of some persons, such as big noses, bushy brows, small eyes, or chins. Another difficulty was fringes reaching down to the eyebrows (Fig. 6). However, these limitations are being confronted: researchers currently classify features that are located outside the modeled area of the face (e.g., hair) or features that are poorly modeled, such as wrinkles, tattoos, piercings, and birthmarks. Moreover, person identification will be added to the system [33].
Our analysis also included measurements of the different instant emotions that appeared during the CBA. Neutral was the dominant confirmed instant emotion, at 55% (Fig. 7). As noted earlier, most of the time the students' facial expressions stayed calm, changing only when they read something that changed their emotions, such as a very difficult or a very easy question. Besides Neutral, the proportion of confirmed Angry was also very large, at 20% (Fig. 7). This is a crucial result: Angry is a negative emotion that could disorganize a student's effectiveness during a self-assessment or a learning procedure [38]. Another negative confirmed instant emotion with a large percentage during the test was Sad (9.1%); similarly, Sad could have negative effects on a student's attention and motivation [39]. Disgusted (4.6%) and Scared (3%) are two other negative confirmed emotions that were not observed extensively (Fig. 7). However, their measurement is also important, because practitioners and researchers who wish to manage students' instant emotions also have to take Disgusted and Scared into account [40]. During the CBA, Disgusted and Scared are two negative emotions that can influence a student's emotional experience; they were observed most often after a long series of wrong answers. On the other hand, confirmed Happy (4%) also had a small percentage during the CBA (Fig. 7). This result may be justified, since a test is an anxiety-provoking procedure. Happy was observed when students answered a difficult question correctly, or during the last questions if they felt that they had already reached a good score.
Moreover, gender analysis revealed some useful results (Fig. 8). Surprised, Scared, and Sad showed no significant difference between the genders. Males presented significantly larger percentages for Disgusted and Angry, which may indicate that males lose their temper and concentration more easily. On the other hand, females appeared to experience more Neutral and Happy emotions.
Fig. 5 Angry and Neutral emotions co-appearance
Fig. 6 Modeling failed
Fig. 7 Confirmed records percentages for each emotion out of overall confirmed records during CBA (Neutral 55.30%, Angry 20.57%, Sad 9.10%, Disgusted 4.58%, Happy 4.08%, Surprised 3.34%, Scared 3.03%)
Fig. 8 Confirmed records for each emotion out of overall confirmed records during CBA for each gender (percentages as in Table 6). *Emotions with significant differences regarding confirmed records percentages between genders
When the effect of negative emotions (such as Sad, Fear, or Angry) is too intense, the student's performance can be seriously impaired. Frequent errors could create the expectation of more errors, thus increasing negative emotions and leading to even more wrong answers, until the student's performance collapses [41]. Positive emotions may also occasionally necessitate instruction. For instance, providing the correct answer to a hard question could induce positive emotions such as joy and enthusiasm, but also lead to loss of concentration if too much consideration is given to the elicited emotions.
Although Fear was not often observed in this study, it is still an emotion that can have a detrimental effect on students' performance during a test [42, 43]. Neither was Happy often observed, but positive emotions can, for instance, lead students to focus on the excitement and undervalue the effort required to achieve a successful result [44, 45]. On the other hand, the Angry and Sad emotions were observed often enough in this study to be emotions "calling for feedback."
Regarding emotional feedback, Economides proposed an emotional feedback framework, taking CAT (Computer Adaptive Testing) systems as its field of application, in order to manage emotions [44, 46]. Emotional feedback can occur before and after the test, during the test, and before and after a student's answer to a question [46, 47]. In all these cases, it can be provided either automatically, according to the student's emotional state, or upon the student's or the teacher's request. Humor and jokes, amusing games, expressions of sympathy, rewards, pleasant surprises, encouragement, acceptance, and praise, but also criticism, are some of the possible actions that could be practiced by a testing system [44].
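To make the idea concrete, the sketch below maps a detected instant emotion to one of the feedback actions listed above. The concrete pairings are purely illustrative assumptions of ours, not part of Economides' framework, and the function names are hypothetical.

```python
# Hypothetical rule table: detected instant emotion -> feedback action.
# The concrete pairings are illustrative only, not taken from [44, 46].
FEEDBACK_RULES = {
    "angry":     "expression of sympathy and encouragement",
    "sad":       "encouragement and a reward for effort",
    "scared":    "reassurance before the next question",
    "disgusted": "humor or a pleasant surprise",
    "happy":     "praise, with a prompt to stay focused",
    "neutral":   None,  # no intervention needed
}

def emotional_feedback(emotion, intensity, threshold=0.5):
    """Return a feedback action when an emotion is clearly present, else None."""
    if intensity < threshold:
        return None
    return FEEDBACK_RULES.get(emotion)

print(emotional_feedback("angry", 0.8))  # expression of sympathy and encouragement
```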
Finally, gender analysis revealed that females exhibited
significantly higher percentages for Neutral and Happy
emotions. On the other hand, males appeared to experience
more Disgusted and Angry emotions. Therefore, the results
of this study indicate that gender differences should be
seriously taken into account when designing emotional
feedback strategies for computerized tests.
5 Conclusions
An instrument like FaceReader could be crucial for the improvement of computer-aided learning systems. Educators will have the opportunity to better recognize how their students are feeling during learning procedures, and they will also be able to give better and more effective emotional feedback in learning, self-assessment, or CAT (Computer Adaptive Testing) systems [41].
To the best of our knowledge, this is the first study that evaluated an emotional facial recognition instrument during CBA. Our analysis indicates some useful results. Firstly, FaceReader is efficient in measuring emotions, with over 87% agreement during CBA. Specifically, FaceReader successfully recognized the Neutral, Surprised, Happy, Scared, and Sad emotions, while it faces some limitations with Angry and Disgusted. Moreover, our research indicates that FaceReader showed no significant differences in emotion recognition between the genders, except for Sad, for which it was more successful with females, and Scared, for which it was more effective with males.
Besides the evaluation of FaceReader, this study provides empirical data on the emotional states of students during computer-based assessments and learning procedures. Our analysis shows that Neutral (55%) was the dominant instant emotion, followed by Angry (20%) and Sad (9%). Students also experienced, at lower percentages, the other four instant emotions that FaceReader is able to measure: Disgusted at 4.6%, Happy at 4%, Surprised at 3.3%, and Scared at 3%. Finally, gender analysis revealed that females presented significantly larger percentages for Neutral and Happy, while males appeared to experience more Disgusted and Angry emotions.
To conclude, our study provides important results regarding the effectiveness of FaceReader and students' instant emotions during CBA. These results could be useful for tutors, researchers, and practitioners.
References
1. Zaman B, Shrimpton-Smith T (2006) The FaceReader: measuring instant fun of use. In: Proceedings of the fourth Nordic conference on human-computer interaction. ACM Press, Oslo, pp 457–460
2. Moridis C, Economides AA (2008) Towards computer-aided
affective learning systems: a literature review. J Educ Comput
Res 39(4):313–337
3. Oatley K (2004) The bug in the salad: the uses of emotions in
computer interfaces. Interact Comput 16(4):693–696
4. Dieterich H, Malinowski U, Kühme T, Schneider-Hufschmidt M (1993) State of the art in adaptive user interfaces. In: Schneider-Hufschmidt M, Kühme T, Malinowski U (eds) Adaptive user interfaces: principles and practice. Elsevier, North Holland, pp 13–48
5. Lindgaard G, Triggs TJ (1990) Can artificial intelligence out-
perform real people? The potential of computerised decision aids
in medical diagnosis. In: Karwowski W, Genaidy A, Asfour SS
(eds) Computer aided design: applications in ergonomics and
safety. Taylor & Francis, London, pp 416–422
6. Lindgaard G (1995) Human performance in fault diagnosis: can
expert systems help. Interact Comput 7(3):254–272
7. Anderson NH (1982) Methods of information integration theory.
Academic Press, London
8. Slovic P, Lichtenstein S (1971) Comparison of Bayesian and
regression approaches to the study of information processing in
judgment. Organ Behav Human Perform 6:649–744
9. Lindgaard G (2004) Adventurers versus nit-pickers on affective
computing. Interact Comput 16(4):723–728
10. Ortony A, Clore GL, Collins A (1988) The cognitive structure of
emotions. Cambridge University Press, Cambridge, UK
11. Conati C (2002) Probabilistic assessment of user’s emotions in
education games. J Appl Artif Intell 16(7–8):555–575 (special
issue on managing cognition and Affect in HCI)
12. Banse R, Sherer KR (1996) Acoustic profiles in vocal emotion
expression. J Pers Soc Psychol 70(3):614–636
13. Picard R (1997) Affective computing. MIT Press, Cambridge,
MA
14. Oudeyer P-Y (2003) The production and recognition of emotions
in speech: features and algorithms. Int J Human Comput Stud
59(1–2):157–183
15. Burkhardt F, Sendlmeier W (2000). Verification of acoustical
correlates of emotional speech using formant-synthesis. In: Pro-
ceedings of the ISCA workshop on speech and emotion. Belfast,
Northern Ireland
16. James W (1983) What is an emotion? In: James W (ed) Essays in
psychology. Harvard University Press, Cambridge, pp 168–187
(Reprinted from Mind, 1884, 9:188–205)
17. Ekman P, Levenson RW, Friesen WV (1983) Autonomic nervous
system activity distinguishes among emotions. Science
221:1208–1210
18. Frijda N (1986) The emotions. Cambridge University Press,
Cambridge, UK
19. Picard R (1998) Toward agents that recognize emotion. In: Pro-
ceedings of IMAGINA. Monaco, pp 153–165
20. Ark W, Dryer D, Lu D (1999) The emotion mouse. In: Bullinger
HJ, Ziegler J (eds) Human–computer interaction: ergonomics and
user interfaces. Lawrence Erlbaum, London, pp 818–823
21. Partala T, Surakka V (2004) The effects of affective interventions
in human-computer interaction. Interact Comput 16(2):295–309
22. Partala T, Surakka V (2003) Pupil size as an indication of
affective processing. Int J Human Comput Stud 59(1–2):185–198
23. Bamidis PD, Papadelis C, Kourtidou-Papadeli C, Vivas A (2004)
Affective computing in the era of contemporary neurophysiology
and health informatics. Interact Comput 16(4):715–721
24. McQuiggan SW, Lee S, Lester JC (2006) Predicting user physi-
ological response for interactive environments: an inductive
approach. In: Proceedings of the second conference on artificial
intelligence and interactive entertainment. Marina del Rey,
pp 60–65
25. Wilson GM, Sasse MA (2004) From doing to being: getting
closer to the user experience. Interact Comput 16(4):697–705
26. Kapoor A, Picard RW (2005) Multimodal affect recognition in
learning environments. In: Proceedings of the 13th annual ACM
international conference on multimedia. Hilton, Singapore,
pp 677–682
27. Ekman P (1982) Emotion in the human face, 2nd edn. Cambridge
University Press, Cambridge, MA
28. Essa IA, Pentland AP (1997) Coding, analysis, interpretation and
recognition of facial expressions. IEEE Trans Pattern Anal Mach
Intell 19(7):757–763
29. Partala T, Surakka V, Vanhala T (2006) Real-time estimation of
emotional experiences from facial expressions. Interact Comput
18(2):208–226
30. Cohen I, Sebe N, Chen L, Garg A, Huang TS (2003) Facial
expression recognition from video sequences: temporal and static
modelling. Comput Vis Image Understand 91(1–2):160–187
31. Oliver N, Pentland A, Berard F (2000) LAFTER: a real-time face and lips tracker with facial expression recognition. Pattern Recogn 33(8):1369–1382
32. Smith E, Bartlett MS, Movellan J (2001) Computer recognition of
facial actions: a study of co-articulation effects. In: Proceedings
of the eighth annual joint symposium on neural computation
33. Den Uyl MJ, van Kuilenburg H (2005) The FaceReader: online
facial expression recognition. In: Proceedings of measuring
behaviour. Wageningen, The Netherlands, pp 589–590
34. Ekman P, Friesen WV (1977) Manual for the facial action coding
system. Consulting Psychologists Press, Palo Alto, CA
35. Benţa K-I, Cremene M, Todica V (2009) Towards an affective aware home. In: Mokhtari M et al (eds) ICOST 2009, LNCS 5597, pp 74–81
36. Truong KP, Neerincx MA, Van Leeuwen DA (2008) Measuring
spontaneous vocal and facial emotion expressions in real world
environments. In: Proceedings of MB 2008. Maastricht, The
Netherlands, pp 170–171
37. Cassell J, Miller P (2007) Is it self-administration if the computer
gives you encouraging looks? In: Conrad FG, Schober MF (eds)
Envisioning the survey interview of the future. Wiley, New York,
pp 161–178
38. Goleman D (1995) Emotional intelligence. Bantam Books, New
York
39. Bower G (1992) How might emotions affect learning? In: Sve-
nake C, Lawrence E (eds) Handbook of emotion and memory:
research and theory. Erlbaum, Hillsdale, NJ
40. Economides AA, Moridis CN (2008) Adaptive self-assessment trying to reduce fear. In: Proceedings of the first international conference on advances in computer-human interaction, ACHI 2008. IEEE Press
41. Yusoff MZ, Du Boulay B (2009) The integration of domain
independent strategies into an affective tutoring system: can
students’ learning gain be improved? Electron J Comput Sci Inf
Technol 1(1):23–30
42. Achebe C (1982) Multi-modal counselling for examination fail-
ure in a Nigerian university: a case study. J Afr Stud 9:187–193
43. Thompson T (2004) Failure-avoidance: parenting, the achievement environment of the home and strategies for reduction. Learn Instruct 14(1):3–26
44. Economides AA (2005) Personalized feedback in CAT (Computer
Adaptive Testing). WSEAS Trans Adv Eng Educ 2(3):174–181
45. Efklides A, Volet S (2005) Feelings and emotions in the learning
process. Learn Instruct 15(5):1–10
46. Economides AA (2006) Emotional feedback in CAT (Computer
Adaptive Testing). Int J Instruct Technol Dist Learn 3(2).
Available online at: http://itdl.org/Journal/Feb_06/article02.htm
47. Economides AA (2006) Adaptive feedback characteristics in
CAT (Computer Adaptive Testing). Int J Instruct Technol Dist
Learn 3(8):15–26
... One approach compares machine coding with other coding procedures. For example, studies of FaceReader™ versus human coders show accuracy rates for recognizing basic emotions of 89% for FaceReader™ 1.0 (den Uyl & van Kuilenberg, 2005) and 88% for FaceReader™ 6.0 (Lewinski, den Uyl, & Butler, 2014), with results corroborated by some recent studies (e.g., Lewinski et al., 2014;Terzis et al., 2010Terzis et al., , 2013. However, FaceReader™ seems to perform better in recognizing joy and neutral faces (Lewinski, 2015) than negative emotions such as anger and disgust (Terzis et al., 2013(Terzis et al., , 2013. ...
... For example, studies of FaceReader™ versus human coders show accuracy rates for recognizing basic emotions of 89% for FaceReader™ 1.0 (den Uyl & van Kuilenberg, 2005) and 88% for FaceReader™ 6.0 (Lewinski, den Uyl, & Butler, 2014), with results corroborated by some recent studies (e.g., Lewinski et al., 2014;Terzis et al., 2010Terzis et al., , 2013. However, FaceReader™ seems to perform better in recognizing joy and neutral faces (Lewinski, 2015) than negative emotions such as anger and disgust (Terzis et al., 2013(Terzis et al., , 2013. Brodny et al. (2016) showed FaceReader™ had better accuracy in recognizing discrete emotions from pictures (Cohn-Kanade dataset; average accuracy was 78%) than from videos (MMI dataset; average accuracy rate was 56%); joy and surprise were detected more accurately than anger, sadness, disgust, and neutral for both stimuli. ...
... For example, studies of FaceReader™ versus human coders show accuracy rates for recognizing basic emotions of 89% for FaceReader™ 1.0 (den Uyl & van Kuilenberg, 2005) and 88% for FaceReader™ 6.0 (Lewinski, den Uyl, & Butler, 2014), with results corroborated by some recent studies (e.g., Lewinski et al., 2014;Terzis et al., 2010Terzis et al., , 2013. However, FaceReader™ seems to perform better in recognizing joy and neutral faces (Lewinski, 2015) than negative emotions such as anger and disgust (Terzis et al., 2013(Terzis et al., , 2013. Brodny et al. (2016) showed FaceReader™ had better accuracy in recognizing discrete emotions from pictures (Cohn-Kanade dataset; average accuracy was 78%) than from videos (MMI dataset; average accuracy rate was 56%); joy and surprise were detected more accurately than anger, sadness, disgust, and neutral for both stimuli. ...
Article
Definitions and measures of discrete emotions are thorny issues in research on affect, resulting from the multitude of emotion theories and approaches to this concept; there is no gold standard for measures of discrete emotions. The utility of measurement methods should be compared across multiple perspectives to allow some degree of cumulativeness. This study reported emotion data collected from a college sample (N = 113), using seven professionally produced videos as stimuli messages. Fear, anger, sadness, disgust, and happiness were measured with the self-report method and by analyzing recorded facial expressions with FaceReader™, an FACS-based computer software that automatically analyzes facial expressions for discrete emotions. Multilevel modeling analyses demonstrated initial evidence for the correspondence between emotions measured with both methods and convergent and discriminant validity for FaceReader™ as a method of measuring discrete emotions.
... As such, facial expression recognition can capture initial responses during exposure to an emotionally charged stimulus (Noordewier & Van Dijk, 2018). Therefore, estimating emotional experiences from objectively measured facial expressions has gradually become an important research area (Terzis et al., 2013). Several machine vision techniques using video cameras are now the predominant research methods in measuring facial expressions (Terzis et al., 2013), including FaceReader (Noldus, 2012, which is often applied in psychology research (e.g., Kostyra et al., 2016;Lewinski et al., 2014;Noordewier & van Dijk, 2018). ...
... Therefore, estimating emotional experiences from objectively measured facial expressions has gradually become an important research area (Terzis et al., 2013). Several machine vision techniques using video cameras are now the predominant research methods in measuring facial expressions (Terzis et al., 2013), including FaceReader (Noldus, 2012, which is often applied in psychology research (e.g., Kostyra et al., 2016;Lewinski et al., 2014;Noordewier & van Dijk, 2018). ...
... Importantly, self-reports or direct methods offer subjective reports of the emotional component, thereby potentially suffering from cognitive biases, because they measure only the perception of an emotional reaction. In contrast, physiological or indirect methods can measure automatic emotional reactions in realtime without cognitive bias (Terzis et al., 2013). Interestingly, direct and indirect measurements do not always converge (Kostyra et al., 2016), which indicates that physiological data can detect emotional responses that people might be unconscious or unaware of (Terzis et al., 2013). ...
Thesis
Visual marketers frequently use a variety of motion techniques to display their marketing messages. For instance, products and logos are often shown moving forward or backward in commercials, different types of animation are implemented in advertising and email campaigns, and motion picture advertising is increasingly being created for mobile video viewing. While many visual marketing communications use motion, the rationale for these practices and their effectiveness remains largely underexplored. Prior consumer psychology research mainly focused on perceptions of stimulus distance rather than stimulus motion. As such, designing effective motion techniques in visual marketing messages requires a better understanding of consumers’ reactions to stimuli in motion. Therefore, this doctoral dissertation examines how consumers cognitively, affectively, and behaviorally react to motion techniques in current visual marketing trends, as well as their underlying psychological processes. Chapter I discusses theories on real and implied motion perception as well as biological and non-biological motion perception to provide an overview of prior research on consumers’ cognitive, affective, and behavioral responses to motion. Besides the introductory chapter (Chapter I) and the concluding chapter (Chapter VI), the dissertation contains four empirical chapters investigating consumer responses to four different motion techniques. Specifically, Chapters II and III focus on the effects of two frequently used motion techniques to visually display (advertising) stimuli that have received limited academic attention in the marketing context, that is, approaching and receding stimulus motions. Chapters IV and V investigate two recent visual marketing trends using motion techniques whose impact has not yet been empirically investigated, being GIF marketing and mobile vertical video marketing respectively.
... This solution has been already used in many different contexts related to experimental research in consumer behaviour. There are already several scientific studies showing how the use of automated facial analysis of expressions provides positive results in assessing CX He et al., 2012;Terzis et al., 2013;Danner et al., 2014;El Haj et al., 2017;Noordewier and van Dijk, 2019;Riem and Karreman, 2019;Meng et al., 2020). Recently, new pioneering studies presented by the scientific literature have shown the possibility to take advantage of face orientation aside from facial expression to predict the hedonic impact of the face presentation of models, as the facial orientation to the right-side significantly predicts with a more negative evaluation, while on the opposite, face orientation towards left side significantly correlates with a positive evaluation of the models' face presentation (Park et al., 2021). ...
... The neural network of the system has been trained, taking advantage of a high-quality correlation of approximately 10,000 images that were manually annotated by real human expert coders. The average scores of performances reported are 89% Van Kuilenburg et al., 2005) and 87% (Terzis et al., 2013). We consider in this study in the present project results only about "happiness, " as the accuracy of this specific emotion is the highest in comparison to all other emotions according to the scientific literature (Lewinski et al., 2014a,b;Stöckli et al., 2018;Dupré et al., 2020). ...
Article
Full-text available
This research project has the goal to verify whether the application of neuromarketing techniques, such as implicit association test (IAT) techniques and emotional facial expressions analyses may contribute to the assessment of user experience (UX) during and after website navigation. These techniques have been widely and positively applied in assessing customer experience (CX); however, little is known about their simultaneous application in the field of UX. As a specific context, the experience raised by different websites from two well-known automotive brands was compared. About 160 Italian university students were enrolled in an online experimental study. Participants performed a Brand Association Reaction Time Test (BARTT) version of the IAT where the two brands were compared according to different semantic dimensions already used in the automotive field. After completing the BARTT test, the participants navigated the target website: 80 participants navigated the first brand website, while the other half navigated the second brand website (between-subject design). During the first 3 min of website navigation, emotional facial expressions were recorded. The participants were asked to freely navigate the website home page, look for a car model and its characteristics and price, use the customising tool, and in the end, look for assistance. After the website navigation, all the participants performed, a second time, the BARTT version of the IAT, where the two brands were compared again, this time to assess whether the website navigation may impact the Implicit Associations previously detected. A traditional evaluation of the two websites was carried on by means of the classic heuristic evaluation. Findings from this study show, first of all, the significant results provided by neuromarketing techniques in the field of UX, as IAT can provide a positive application for assessing UX played by brand websites, thanks to the comparison of eventual changes in time reaction between the test performed before and after website navigation exposure. Secondly, results from emotional facial expression analyses during the navigation of both brand websites showed significant differences between the two Mauri et al. Measuring Emotions Is User Experience brands, allowing the researchers to predict the emotional impact raised by each website. Finally, the positive correlation with heuristic evaluation shows that neuromarketing can be successfully applied in UX.
... AFC extracts movement from transient facial features (i.e., AU activity), its scores correspond well with those from trained human FACS coders [14][15][16]. In addition to the measurement of AU activities, AFC software operates with machine learning procedures that are trained to classify different emotion categories. ...
Article
Full-text available
Automatic facial coding (AFC) is a novel research tool to automatically analyze emotional facial expressions. AFC can classify emotional expressions with high accuracy in standardized picture inventories of intensively posed and prototypical expressions. However, classification of facial expressions of untrained study participants is more error prone. This discrepancy requires a direct comparison between these two sources of facial expressions. To this end, 70 untrained participants were asked to express joy, anger, surprise, sadness, disgust, and fear in a typical laboratory setting. Recorded videos were scored with a well-established AFC software (FaceReader, Noldus Information Technology). These were compared with AFC measures of standardized pictures from 70 trained actors (i.e., standardized inventories). We report the probability estimates of specific emotion categories and, in addition, Action Unit (AU) profiles for each emotion. Based on this, we used a novel machine learning approach to determine the relevant AUs for each emotion, separately for both datasets. First, misclassification was more frequent for some emotions of untrained participants. Second, AU intensities were generally lower in pictures of untrained participants compared to standardized pictures for all emotions. Third, although profiles of relevant AU overlapped substantially across the two data sets, there were also substantial differences in their AU profiles. This research provides evidence that the application of AFC is not limited to standardized facial expression inventories but can also be used to code facial expressions of untrained participants in a typical laboratory setting.
... Still, the advantages in most cases outweigh the cost, making this method increasingly popular, as seen in studies within the science of emotion [77,78], educational research [79,80], human-computer interaction [81], consumer behavior, user experience [82], clinical investigations of facial nerve grading in medicine [83], monitoring pain [81] and advertising and commercial films [84,85]. ...
Article
Full-text available
To learn about extreme sports and what motivates such activities, we need to understand the emotions embedded in the experience itself. However, how we go about assessing these emotions might provide us with very different answers. An experience is a fleeting and ever-changing phenomenon, rich in detail and filled with nuances. What we remember and, therefore, what we are able to report from our experience might, however, be strikingly different to what we experienced. Our memories are grained by time, impaired by arousal, and affected by context. Despite these limitations, the most common way to measure an experience is by self reporting. The current paper reviews some of the relevant theory on emotions and how this might impact different assessments. I also describe a new way of measuring momentary emotions in the field by use of video cameras and automatic coding of facially expressed emotions. Extreme sports may leave us with positive memories but may be anything but pleasant while in the midst of them. In the end, this paper may give some hints to why.
... What is also interesting is that we did not find any LREs detected via camera-based input, revealing a research gap in the educational context. However, camera-based input has been used in several computer-based tasks to predict students' emotions; for instance, Terzis et al. (2013) achieved 87% accuracy in predicting students' emotions of anger, sadness, and a neutral state during a computer-based assessment (CBA) task, using camera input alone. In Xiao and Wang (2015), the authors applied a decision tree (DT) approach and the PPG modality to detect the emotions of boredom and confusion. ...
Article
This paper aims to provide the reader with a comprehensive background on current knowledge about the use of non-intrusive mobile sensing methodologies for emotion recognition on smartphone devices. We examined the literature on experimental case studies conducted in the domain during the past six years (2015-2020). Search terms identified 95 candidate articles, but inclusion criteria limited the key studies to 30. We analyzed the research objectives (in terms of targeted emotions), the methodology (in terms of input modalities and prediction models), and the findings (in terms of model performance) of these published papers and categorized them accordingly. We used qualitative methods to evaluate and interpret the findings of the collected studies. The results reveal the main research trends and gaps in the field. The study also discusses the research challenges and considers some practical implications for the design of emotion-aware systems within the context of distance education.
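One of the modality-model pairings cited above (a decision tree over PPG-derived features, per Xiao and Wang) gives a concrete flavor of the prediction models this review surveys. The sketch below fabricates heart-rate features and labels purely for illustration; it is not the reviewed study's model, and the feature names are invented.

# Illustrative sketch of the kind of model the surveyed studies report:
# a decision tree predicting an emotional state from smartphone-sensor
# features (here, fabricated PPG-derived heart-rate statistics).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 300
hr_mean = rng.normal(75, 8, n)     # mean heart rate from PPG
hr_var = rng.normal(20, 5, n)      # heart-rate variability proxy
X = np.column_stack([hr_mean, hr_var])
y = (hr_var + rng.normal(0, 3, n) > 20).astype(int)  # 0=boredom, 1=confusion

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {tree.score(X_te, y_te):.2f}")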
... In one study, FaceReader™ was used to assess the instant facial expressions of students with an efficacy of 87%. Students were found to experience neutral, sad, and angry emotions during assessment [17]. However, this technology can generate noise within the data, which can lead to false assessments [18]. ...
Article
Full-text available
Hedonic scale testing is a well-accepted methodology for assessing consumer perceptions but is compromised by variation in voluntary responses between cultures. Check-all-that-apply (CATA) methods using emotion terms or emojis and facial expression recognition (FER) are emerging as more powerful tools for consumer sensory testing as they may offer improved assessment of voluntary and involuntary responses, respectively. Therefore, this experiment compared traditional hedonic scale responses for overall liking to (1) CATA emotions, (2) CATA emojis and (3) FER. The experiment measured voluntary and involuntary responses from 62 participants of Asian (53%) versus Western (47%) origin, who consumed six divergent yogurt formulations (Greek, drinkable, soy, coconut, berry, cookies). The hedonic scales could discriminate between yogurt formulations but could not distinguish between responses across the cultural groups. Aversive responses to formulations were the easiest to characterize for all methods; the hedonic scale was the only method that could not characterize differences in cultural preferences, with CATA emojis displaying the highest level of discrimination. In conclusion, CATA methods, particularly the use of emojis, showed improved characterization of cross-cultural preferences of yogurt formulations compared to hedonic scales and FER.
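Since CATA responses are binary check marks, discrimination between formulations for a given term or emoji reduces to comparing citation counts. The sketch below, with fabricated counts and scipy's chi-square test of independence, illustrates one accessible way such count data can be tested; the study's actual statistical procedure is not given here, and per-term Cochran's Q is a common alternative.

# Sketch of a simple discrimination test for CATA data: do yogurt
# formulations differ in how often a given emoji/term is checked?
# Counts are fabricated; real analyses often use Cochran's Q per term.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: 6 formulations; columns: checked vs not checked for one emoji,
# out of 62 participants each.
checked = np.array([48, 41, 15, 22, 52, 37])
table = np.column_stack([checked, 62 - checked])

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.4f}")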
Article
Background/Purpose: Quantification of consumer interest is an interesting, innovative, and promising trend in marketing research. For example, an approach for a salesperson is to observe consumer behaviour during the shopping phase and then recall his interest. However, the salesperson needs unique skills because every person may interpret their behaviour in a different manner. The purpose of this research is to track client interest based on head pose positioning and facial expression recognition. Objective: We are going to develop a quantifiable system for measuring customer interest. This system recognizes the important facial expression and then processes current client photos and does not save them for later processing. Design/Methodology/Approach: The work describes a deep learning-based system for observing customer actions, focusing on interest identification. The suggested approach determines client attention by estimating head posture. The system monitors facial expressions and reports customer interest. The Viola and Jones algorithms are utilized to trim the facial image. Findings/Results: The proposed method identifies frontal face postures, then segments facial mechanisms that are critical for facial expression identification and creating an iconized face image. Finally, the obtained values of the resulting image are merged with the original one to analyze facial emotions. Conclusion: This method combines local part-based features with holistic facial information. The obtained results demonstrate the potential to use the proposed architecture as it is efficient and works in real-time. Paper Type: Conceptual Research.
Article
Full-text available
Despite its relevance to the understanding of vocal emotional expression, the study of contagious laughter is still in the early stages of research, and neither its nature nor that of the responses it provokes has yet been established. With this in mind, the purpose of this study was to determine whether acoustic stimuli of contagious laughter, in addition to generating laughing or smiling behaviours, elicit in listeners the facial, electromyographic, and cardiac expressions of a positive emotion. To this end, 60 university students of both sexes, aged between 18 and 30 years, took part in a within-subject experimental design with measurements at baseline and during exposure to different contagious-laughter stimuli. Three hypotheses were tested, comparing facial expressions of joy (measured with the FaceReader software), electromyographic (EMG) amplitude of the zygomaticus major muscle (measured with the Biopac emg-100 module), and R-R intervals as indicators of heart rate (measured with the Biopac ecg-100 module) across the conditions. Significant differences were found in the percentages of facial expressions of joy and in zygomaticus EMG amplitude when comparing baseline against the most contagious laughter stimuli, and the more against the less contagious laughs; however, no significant differences were found in R-R intervals under any of the compared conditions. In conclusion, the positive emotional nature of the laughter/smiling elicited by contagious-laughter stimuli was confirmed, as was the proportionality between the intensity of the facial expressions and the EMG responses elicited by this laughter and its perceived degree of contagiousness.
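The study's design reduces to within-subject comparisons of each measure between baseline and stimulus conditions. A minimal sketch of that analysis with simulated data follows; the sample size matches the abstract, but the values, the paired t-test, and the heart-rate conversion shown are illustrative rather than the authors' exact procedure.

# Sketch of the within-subject comparisons the study describes:
# zygomaticus EMG amplitude and R-R intervals at baseline versus during
# contagious-laughter stimuli. Data are simulated; the actual study
# used Biopac recordings and FaceReader scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 60  # participants, as in the abstract

emg_base = rng.normal(10, 2, n)             # microvolts, baseline
emg_laugh = emg_base + rng.normal(4, 2, n)  # higher during laughter

rr_base = rng.normal(850, 60, n)            # R-R interval in ms
rr_laugh = rr_base + rng.normal(0, 30, n)   # no systematic change

print("EMG:", stats.ttest_rel(emg_base, emg_laugh))
print("R-R:", stats.ttest_rel(rr_base, rr_laugh))

# R-R intervals convert to heart rate as HR (bpm) = 60000 / RR (ms).
print(f"baseline HR ~ {np.mean(60000 / rr_base):.1f} bpm")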
Chapter
Contemporary emotion theories have come to conceptualize emotions as multi-component and dynamic phenomena. Central to this dynamical perspective is that emotions are viewed as series of dynamic and recursive events that unfold over time, rather than single discrete responses. This chapter describes the architecture of emotions, decomposed into more elementary constituents including antecedent parts such as appraisal, and response parts like physiological responses, facial expressions, and the subjective experience or conscious feeling. The chapter mainly focuses on food-related emotions. Several methods are explained that allow measurement of the temporal dynamics of emotion components, including mapping of the relevant time course variables. The chapter concludes with a discussion of the main implications of measuring temporal dynamics of emotion responses.
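The chapter's notion of mapping time-course variables lends itself to a small worked example. The sketch below, with an invented 10 Hz response trace and an arbitrary onset threshold, extracts three common time-course variables (onset latency, peak amplitude with time-to-peak, and a simple area-under-curve summary); it illustrates the general idea, not the chapter's specific method.

# Sketch of common time-course variables for a single emotion-response
# trace (e.g., smiling intensity sampled after a food stimulus).
import numpy as np

hz = 10
t = np.arange(0, 20, 1 / hz)                    # 20 s sampled at 10 Hz
resp = np.clip(np.sin((t - 2) / 4), 0, None)    # rises, peaks, decays

threshold = 0.1
onset_idx = int(np.argmax(resp > threshold))    # first supra-threshold sample
peak_idx = int(np.argmax(resp))

auc = resp.sum() / hz                           # rectangle-rule integral
print(f"onset latency : {t[onset_idx]:.1f} s")
print(f"peak amplitude: {resp[peak_idx]:.2f} at t = {t[peak_idx]:.1f} s")
print(f"area under curve: {auc:.2f}")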
Article
Full-text available
Chapter in S. A. Christianson (Ed.), The Handbook of Emotion and Memory. Erlbaum, 1992.
Article
Full-text available
Interpolation methods have previously been found to be effective for handling coarticulation effects in speech recognition when there is insufficient data to reliably train models for all combinations of phonemes. These interpolation models employed Hidden Markov Models (HMMs), trained on one output class at a time. Here, a neural-network analog of the HMM interpolation methods is discussed and applied to the problem of analyzing facial expressions. The task was to recognize facial actions defined in the Facial Action Coding System (Ekman & Friesen, 1978). This system defines 46 component actions that comprise facial expressions and are the analog of phonemes in facial expression. As in speech, there are thousands of "words" that the face can express (Scherer & Ekman, 1982). The network demonstrated robust recognition of the six upper facial action units, whether they occurred individually or in combination.
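Framed in current terms, recognizing six upper-face action units "individually or in combination" is a multi-label classification problem. The sketch below shows that framing with scikit-learn's MLPClassifier on synthetic features; it is an assumption-laden stand-in, not the interpolation network the article describes.

# Sketch of the recognition task as framed today: multi-label
# classification of six upper-face Action Units, which may occur
# individually or in combination. Features and labels are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
n, n_features, n_aus = 500, 32, 6
X = rng.normal(size=(n, n_features))             # e.g., image-derived features
Y = (rng.random((n, n_aus)) < 0.3).astype(int)   # AU present/absent labels

# MLPClassifier accepts a binary indicator matrix for multi-label output.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
net.fit(X, Y)
print("predicted AU pattern for one face:", net.predict(X[:1])[0])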
Article
Contents:
1. Introduction: The study of emotion; Types of evidence for theories of emotion; Some goals for a cognitive theory of emotion
2. Structure of the theory: The organisation of emotion types; Basic emotions; Some implications of the emotions-as-valenced-reactions claim
3. The cognitive psychology of appraisal: The appraisal structure; Central intensity variables
4. The intensity of emotions: Global variables; Local variables; Variable-values, variable-weights, and emotion thresholds
5. Reactions to events: I. The well-being emotions: Loss emotions and fine-grained analyses; The fortunes-of-others emotions; Self-pity and related states
6. Reactions to events: II. The prospect-based emotions: Shock and pleasant surprise; Some interrelationships between prospect-based emotions; Suspense, resignation, hopelessness, and other related states
7. Reactions to agents: The attribution emotions; Gratitude, anger, and some other compound emotions
8. Reactions to objects: The attraction emotions; Fine-grained analyses and emotion sequences
9. The boundaries of the theory: Emotion words and cross-cultural issues; Emotion experiences and unconscious emotions; Coping and the function of emotions; Computational tractability
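The book's central claim, that emotion types are valenced reactions to events, agents, or objects, can be caricatured in a few lines of code. The toy function below encodes only that three-branch skeleton with one representative emotion pair per branch; the actual theory layers appraisal structure and intensity variables on top of this.

# Toy sketch of the claim that emotion types are valenced reactions to
# events, agents, or objects. The three-branch structure follows the
# theory; this mapping is a drastic simplification.

def occ_reaction(target_kind: str, valence: float) -> str:
    """Return a coarse OCC emotion type for a valenced reaction."""
    positive = valence > 0
    if target_kind == "event":     # pleased/displeased with consequences
        return "joy" if positive else "distress"
    if target_kind == "agent":     # approving/disapproving of actions
        return "admiration" if positive else "reproach"
    if target_kind == "object":    # liking/disliking aspects of objects
        return "liking" if positive else "disliking"
    raise ValueError("target_kind must be 'event', 'agent', or 'object'")

print(occ_reaction("event", +0.8))   # -> joy
print(occ_reaction("agent", -0.5))   # -> reproach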