Emotion recognition and its applications
A. Landowska, M. Szwoch, W. Szwoch, M.R. Wróbel, A. Kołakowska
Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology,
Gdańsk, Poland
{nailie, szwoch, wszwoch, wrobel, agatakol}
Abstract. The paper proposes a set of research scenarios to be applied in four
domains: software engineering, website customization, education and gaming. The
goal of applying the scenarios is to assess the possibility of using emotion
recognition methods in these areas. It also points out the problems of defining
the sets of emotions to be recognized in different applications, representing the
defined emotional states, gathering the data and training. Some of the scenarios
consider possible reactions of an affect-aware system and its impact on users.
1 Introduction
Inevitably, feelings play an important role not only in our relations with other
people but also in the way we use computers. Affective computing is a domain that
focuses on users' emotions while they interact with computers and applications.
As the emotional state of a person may influence concentration, task solving and
decision making skills, the vision of affective computing is to make systems able
to recognize and influence human emotions in order to enhance the productivity
and effectiveness of working with computers.
The challenging problem of automatic recognition of human affect has become a
research field involving more and more scientists specializing in different areas
such as artificial intelligence, computer vision, psychology, physiology etc. Its
popularity arises from the vast range of possible applications. In this paper we
focus on the application of affective computing methods in software engineering,
website customization, education and gaming. We propose a number of research
scenarios to evaluate the possibility of using emotion recognition methods in
these areas.
Section 2 briefly presents the idea and the methods of emotion recognition.
Sections 3-6 describe the proposed scenarios to be applied in four different
domains, including software engineering, website customization, education and
gaming. Section 7 draws some conclusions on challenges which have to be solved
to implement the scenarios in a real environment.
This is a preprint of the following paper:
Kolakowska, A.; Landowska, A.; Szwoch, M.; Szwoch, W.; Wrobel, M.R.: Emotion
recognition and its applications. In: Human-Computer Systems Interaction:
Backgrounds and Applications, vol. 3, Advances in Soft Computing series,
Springer-Verlag, pp. 51-62, DOI: 10.1007/978-3-319-08491-6_5
2 Emotion recognition
The goal of human emotion recognition is to automatically classify a user's
temporal emotional state based on some input data. There are dozens of definitions
of emotions [Picard et al. 2004], and in this paper we adopt the following
distinction based on time: an emotion is a reaction to stimuli that lasts for
seconds or minutes, a mood is an emotional state that lasts for hours or days,
and personality is an inclination to feel certain emotions. We use the term
'emotional state' to indicate the current (temporary) state of a person
irrespective of its origin (stimuli, mood or personality).
The stated goal may be achieved by one of many types of classifiers developed
in the field of pattern recognition. The approach assumes several stages of
classifier construction: data acquisition and feature extraction, creation of
the training set containing labeled data, and classifier learning.
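These stages can be sketched as a conventional supervised learning pipeline. The
snippet below is only a minimal illustration in Python with scikit-learn; the
feature matrix and the labels are random placeholders, not data from any of the
described studies:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stages 1-2: acquired data reduced to a feature matrix (random placeholders here)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))         # 100 samples, 6 extracted features
y = rng.integers(0, 3, size=100)      # stage 3: labels, e.g. 0=neutral, 1=joy, 2=anger

# Stage 4: classifier learning
clf = make_pipeline(StandardScaler(), SVC())
clf.fit(X, y)
print(clf.predict(X[:5]))             # classify some samples
```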
In our research we assume that emotion recognition will be based on multimodal
inputs: physiological sensors, video, depth sensors and standard input devices.
This approach has proved to provide more information to the recognition process,
as different data channels deliver valuable complementary information,
eliminating the potential drawbacks of any individual input [Kapoor and Picard 2005].
Physiological sensors might be used to measure skin conductance, blood vol-
ume pulse, muscle impulses, respiratory signal, temperature or heart rate [Szwoch
W 2013]. They are non-invasive but sometimes intrusive or uncomfortable for users
because of the special equipment required, which makes them impractical in
real-life situations where we want to determine the emotions of people during
their usual learning or working activities. Nevertheless, specialized
measurements of physiological signals can be used in some scenarios, as well as
for the enhancement or verification of a classifier's accuracy. A great many
features may be extracted from physiological signals by calculating their mean,
standard deviation, difference, Fourier transform, wavelet transform, high
frequency and low frequency power, entropy, thresholding, peak detection etc.
[Jerritta et al. 2011].
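As an illustration of such feature extraction, the sketch below computes a few of
the listed statistics for a single synthetic channel; the band limits are
borrowed from heart-rate-variability conventions and serve only as an example:

```python
import numpy as np

def extract_features(signal, fs):
    """Simple statistical and spectral features from one physiological channel."""
    diff = np.diff(signal)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    low = spectrum[(freqs >= 0.04) & (freqs < 0.15)].sum()   # LF band (example bounds)
    high = spectrum[(freqs >= 0.15) & (freqs < 0.4)].sum()   # HF band (example bounds)
    return {
        "mean": signal.mean(),
        "std": signal.std(),
        "mean_abs_diff": np.abs(diff).mean(),
        "lf_power": low,
        "hf_power": high,
    }

fs = 4.0                                        # 4 Hz, e.g. a skin conductance channel
t = np.arange(0, 60, 1 / fs)
signal = 2 + 0.1 * np.sin(2 * np.pi * 0.1 * t)  # synthetic slow oscillation at 0.1 Hz
feats = extract_features(signal, fs)
print(feats)
```

Because the synthetic oscillation lies at 0.1 Hz, its energy falls into the
low-frequency band, so `lf_power` dominates `hf_power` here.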
Video and depth sensors deliver significant information on facial expression in
a non-intrusive way using a camera. Facial expression may be represented by
geometric or appearance features, parameters extracted from transformed images
such as eigenfaces, dynamic models and 3D models. The difficulty of this
approach is the need for image preprocessing and complex pattern recognition
algorithms. One of the main problems with facial expression recognition is that
it usually works well only in the case of posed behavior and proper lighting
[Szwoch M 2013]. Depth sensors, which usually use non-visual infrared light
technology, are generally resistant to insufficient and uneven lighting
conditions. Since the introduction of Microsoft Kinect, depth sensors have also
been used for the recognition of human poses, gestures and movements
[Ren et al. 2011].
Standard input devices, such as keyboard and mouse, enable a completely
unobtrusive way of collecting data, because no special hardware is needed and,
moreover, collection may take place during users' usual computer activities.
Features extracted from keystrokes may be divided into timing and frequency
parameters. Mouse characteristics include both clicking and cursor movement
measurements [Kołakowska 2013].
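A hypothetical sketch of timing features derived from keystroke events follows;
the event tuples and their values are invented purely for illustration:

```python
# Each event: (key, press_time, release_time) in seconds (invented sample data)
events = [("h", 0.00, 0.09), ("i", 0.21, 0.29), (" ", 0.45, 0.52)]

dwell = [r - p for _, p, r in events]              # key hold (dwell) durations
flight = [events[i + 1][1] - events[i][2]          # release-to-next-press gaps
          for i in range(len(events) - 1)]

features = {
    "mean_dwell": sum(dwell) / len(dwell),
    "mean_flight": sum(flight) / len(flight),
    "keys_per_second": len(events) / (events[-1][2] - events[0][1]),
}
print(features)
```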
The collected data has to be labeled, i.e. an emotion has to be assigned to each
data sample. Most emotion recognition algorithms use discrete or dimensional
models for affective state representation. The best known discrete model is
Ekman's set of six basic emotions: joy, sadness, fear, anger, disgust and
surprise, whereas the three-dimensional PAD model represents an emotional state
as a combination of valence, arousal and dominance values. Usually the labels
for the training data are assigned according to specially designed
questionnaires given to the users or on the basis of independent observers'
evaluations. However, such labeling may not be objective, which in turn may
result in poor accuracy of the trained system. To avoid this situation we are
going to validate the labels provided by humans against labels assigned on the
basis of physiological measurements.
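To make the two representations concrete, the sketch below places Ekman's labels
at rough, purely illustrative coordinates in PAD space and maps a dimensional
state back to the nearest discrete label; the numeric values are our assumption,
not calibrated data:

```python
from dataclasses import dataclass

@dataclass
class PAD:
    """Pleasure (valence), arousal and dominance, each in [-1, 1]."""
    pleasure: float
    arousal: float
    dominance: float

# Rough PAD positions of Ekman's basic emotions (illustrative values only)
DISCRETE_TO_PAD = {
    "joy":      PAD( 0.8,  0.5,  0.4),
    "sadness":  PAD(-0.6, -0.4, -0.3),
    "fear":     PAD(-0.6,  0.6, -0.6),
    "anger":    PAD(-0.5,  0.6,  0.3),
    "disgust":  PAD(-0.6,  0.2,  0.1),
    "surprise": PAD( 0.2,  0.7, -0.1),
}

def nearest_label(state: PAD) -> str:
    """Map a dimensional state back to the closest discrete label."""
    def dist(ref: PAD) -> float:
        return ((state.pleasure - ref.pleasure) ** 2
                + (state.arousal - ref.arousal) ** 2
                + (state.dominance - ref.dominance) ** 2)
    return min(DISCRETE_TO_PAD, key=lambda k: dist(DISCRETE_TO_PAD[k]))

print(nearest_label(PAD(0.7, 0.4, 0.5)))   # closest to the "joy" prototype
```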
A great many machine learning algorithms have already been applied to the task
of emotion recognition, e.g. SVM, decision trees, linear discriminant analysis,
Bayesian networks, naive Bayes, neural networks [Zeng et al. 2009]. An ideal
emotion recognition method for the proposed real-life applications would be a
combination of adaptive classifiers which could cope with a high number of
features of different types and would be able to improve their effectiveness
with the increasing amounts of training data continuously recorded during
users' typical activities.
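One way to approximate such an adaptive classifier is incremental (online)
learning, e.g. scikit-learn's SGDClassifier updated with partial_fit as new data
arrives; the session batches below are synthetic placeholders:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
classes = np.array([0, 1])                  # e.g. 0 = negative, 1 = positive state

clf = SGDClassifier(random_state=0)
for session in range(10):                   # training data arrives session by session
    label = session % 2
    X = rng.normal(loc=3 * label, size=(20, 4))   # well-separated placeholder batches
    y = np.full(20, label)
    clf.partial_fit(X, y, classes=classes)  # update the model without full retraining

print(clf.predict(np.full((1, 4), 3.0)))    # a sample near the class-1 cluster
```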
3 Software engineering
The main purpose of this part of our research is to use emotion recognition
based on multimodal inputs to improve some aspects of the software engineering
process and to overcome the limitations of usability questionnaires [Kołakowska
et al. 2013]. We focus on two areas of application in software engineering:
usability testing and development process improvement.
3.1 Extended software usability testing
There is a lot of evidence that human emotions influence interactions with
software products. There is also a record of investigation into how products can
influence human feelings [Hill 2009], and those feelings make people decide
whether or not to buy. Therefore investigating the emotions induced by products
is an object of interest for designers, investors, producers and customers as
well.
Software usability depends on multiple quality factors, such as functionality,
reliability, interface design, performance and so on. All of these quality
indicators can be improved indefinitely, but there is a point at which to stop
optimizing: customer satisfaction. Measuring this satisfaction with
questionnaires may be misleading. We propose to extend usability testing with
emotion recognition based on multimodal inputs. We have defined the following
test scenarios, with the required emotional state distinctions depending on the
scenario.
Scenario 1. First Impression test
First impression is a state that is evoked mainly by visual input in
human-systems interaction and is created in a very short time (approximately 5
seconds). Research shows that in web page design the first impression is a good
predictor of a 10-minute usability opinion [Linggaard et al. 2006]. Many
websites will not have any more time to make an impression than these 5 seconds:
the first impression makes the users stay or quit. In first impression testing
the most important distinction is to differentiate a user's interest
(excitement) from boredom or disgust. This scenario is especially dedicated to
web page usability testing.
Scenario 2. Task-based usability test
The second proposed usability scenario uses the cognitive walkthrough method
[Blackmon et al. 2002], which is a task-based approach. Software usability
evaluation in this method usually involves the identification of typical tasks
(which may be extracted from use case models) and the optimal processes for
performing them (possibly derived from dynamic models, user stories or user
instructions). A representative user group performs the tasks in a controlled
environment with camera recording, biometric sensors and keystroke analysis
tools. The registered channels are then subject to further analysis of usability
and emotional state fluctuation. This scenario is dedicated to applications
designed to help the user perform specific tasks rather than to entertainment or
content access systems. The purpose of emotion recognition in task-based
usability testing is to differentiate frustration from empowerment.
Scenario 3. Free interaction test
The third proposed usability scenario is based on free interaction with an
application and is supposed to evaluate the overall user experience. There are
no predefined tasks to be performed by the representative user group; instead,
they are asked to interact freely with the application under examination. This
scenario is dedicated to entertainment and content access systems, but other
applications may also benefit. The objective of emotion recognition in this
scenario is to distinguish engagement from discouragement.
Scenario 4. Comparative test
The comparative scenario is a selection or combination of the methods used in
the previously defined scenarios, performed on two versions of one software
product or on the application and its main competitive product.
3.2 Development process improvement
In each segment of the job market the most valuable employees are those who are
highly productive and deliver high-quality products or services. The same
applies to software developers [Wróbel 2013]. Employers require high work
efficiency and high-quality code. Unfortunately, these two requirements often
conflict, as a computer program developed under time pressure is usually of low
quality [MacCormack et al. 2003].
The purpose of this study is to verify the hypothesis that emotions have a
significant impact on software quality and developers' productivity. The aim is
to answer the question of how an employee's emotional state correlates with his
or her work efficiency as well as the quality of the developed software. The
study will also determine the emotional states of IT professionals that support
their work.
We have defined four research scenarios to explore multiple factors of the
relationship between programmers' emotional states and their work, including the
work environment, personal productivity and the quality of the developed code.
Scenario 5. IDE usability comparison
This scenario is an adaptation of the task-based usability test described in
scenario 2. Integrated development environments (IDEs) are among the essential
tools used by developers. Their advantages and disadvantages can significantly
affect the emotional state of their users. The research will be conducted in a
laboratory environment with biometric sensors. The user group will include both
novice programmers (ICT students) and ICT staff with years of experience. The
object of the research will be a set of popular IDEs. A developer will perform a
series of programming tasks, such as compiling, debugging and refactoring, in
three randomly selected environments, excluding the one he or she uses most
frequently. The tests will evaluate the quality of those IDEs. Moreover, the
collected data will be used to investigate the individually differentiated
impact of problems encountered in an IDE on developers' emotions. The goal of
this scenario is to distinguish between frustration and empowerment.
Scenario 6. Productivity and emotions
This scenario is designed to answer the question of whether and how emotional
state affects a programmer's productivity. The research will be conducted in a
laboratory environment. The behavior of a deliberately manipulated environment
will evoke developers' emotions that may affect their productivity, measured for
example by the number of lines of code per hour of work. In the first place,
stress associated with time pressure, as well as boredom, will be induced. The
analysis of the collected data will determine the optimal emotional space for
developer productivity.
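As a toy illustration of the intended analysis, the sketch below aggregates an
invented per-interval log of (dominant emotional state, lines of code written)
into a mean-productivity figure per state; all labels and numbers are
hypothetical:

```python
from collections import defaultdict

# Hypothetical log: (dominant emotional state in an interval, lines of code written)
log = [
    ("focused", 42), ("focused", 38), ("stressed", 15),
    ("stressed", 20), ("bored", 8), ("focused", 45), ("bored", 12),
]

totals = defaultdict(lambda: [0, 0])        # state -> [sum of LOC, interval count]
for state, loc in log:
    totals[state][0] += loc
    totals[state][1] += 1

productivity = {s: total / n for s, (total, n) in totals.items()}
best = max(productivity, key=productivity.get)
print(productivity, best)
```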
Scenario 7. Code quality and emotions
This scenario, despite its similarities to the previous one, should not be
conducted in a laboratory environment, as it is hard to accurately evaluate the
quality of code developed in a short test. Therefore, to provide reliable
results, this scenario requires continuous monitoring of the emotional state of
the programmer and the collection of incremental versions of the source code.
Only a cross-examination of emotional states and source code versions may reveal
the correlation between quality and emotions. In this scenario it is essential
to detect emotions such as empowerment, frustration and stress.
Scenario 8. Telework and office work comparison
The last scenario is designed to detect whether there are differences in the
emotional states of programmers when working in an office and at home. The
number of telecommuters has grown rapidly in recent years. This research should
be conducted in real work environments, which will be possible only after the
development of reliable, non-intrusive methods of recognizing users' emotional
states. The objective of emotion recognition here is to detect the whole range
of emotions, particularly all those identified in the previous scenarios.
Scenarios 5 and 6 can be conducted in a laboratory environment. In this research
it is possible to use biometric sensors to detect the emotions of programmers,
which will deliver more accurate recognition than the previously developed
non-intrusive methods. However, the implementation of the other two scenarios
(7 and 8) will be possible only with non-intrusive methods of detecting the
emotions of computer users.
As the computer is the primary working environment of a programmer, the
implementation of emotion recognition mechanisms in the human-computer interface
is a natural choice. However, the research, as well as the proposed scenarios
(except scenario 5), is sufficiently universal to be applied to many professions.
4 Education and e-education
There is a lot of evidence that some emotional states support learning processes
while others suppress them [Hudlicka 2003, Picard 2003, Sheng et al. 2010]. The
distinction between the two groups of emotional states is in some cases not
obvious; for example, such a positive mood as hilarity is not good for learning,
while slightly negative emotional states foster critical thinking and are
appropriate for analytical tasks [Landowska 2013]. Automatic emotion recognition
algorithms can help to explore this phenomenon by making assessments of
learners' emotional states more objective than typical questionnaire-based
investigations.
Scenario 1. Emotional templates of educational tasks.
The purpose of this scenario is to investigate the emotional states that occur
during different types of educational tasks. This investigation aims at the
identification of emotional templates of educational tasks, which can be defined
as distinguishable sets of effective and counter-productive emotional states for
solving specific task types. To perform this investigation, a representative set
of educational tasks should be prepared, and both the learners' performance in
task execution and their emotional states must be measured. Analysis of the
correlation between performance and emotional states would make it possible to
justify statements on effective and counter-productive emotions for specific
task types; however, a significant number of respondents should be engaged in
order to make the thesis reliable. Information on effective emotional states can
then be used in the diagnosis of educational problems, in educational tool
design or in further exploration of educational processes.
Scenario 2. Emotional stereotypes of learners.
Emotionality is one of the elements of human personality and may differ
significantly depending on inborn temperament, previous experience and the
socialization process. However, some emotional reactions are common to people
living in one culture or having the same experience, and similar regularities
are expected in educational processes. A learner affective stereotype is a
definition of the typical emotional states that might be observed in educational
settings. It is expected that novice learners will more frequently show symptoms
of frustration, while more experienced ones may feel boredom. To support this
thesis with evidence, the emotional states of different (novice/experienced)
students will be measured and recognized while they perform the same set of
tasks of growing difficulty. Learner stereotypes can then be used in
e-educational environments to adapt learning paths and/or interaction models
when no individual information on the user is available.
Scenario 3. Evaluation of educational resources.
The goal of this scenario is the evaluation of educational resources, especially
those prepared for self-learning. In distance and electronic education one of
the critical success factors is the learner's discipline in following the
provided learning path. When one fails to deal with fluctuations of motivation
and attention, learning processes are paused or even abandoned. One of the most
frequently given causes for course resignation is: "Boring resources". In this
scenario the observation of a student's interaction with resources is combined
with monitoring his/her emotional state in order to identify the parts of the
resources that cause boredom. That information may then be used to remove weak
points and improve the overall resource quality. A set of different types of
educational resources, including recorded lectures, screencasts and interactive
materials, will be investigated. This scenario might also be used for the
quality evaluation of resources provided in virtual universities and other
distance learning environments.
Scenario 4. Usability testing of educational tools.
In this scenario the usability of educational tools is evaluated. Software
usability tests are usually based on eye-tracking techniques, and we propose to
extend them with user emotion recognition, which can provide valuable
information while evaluating user experience [Kołakowska et al. 2013]. Typical
tasks performed with educational tools include: educational tool access,
resource search, resource launch, performing interactive tasks, viewing results
or feedback information, communication with teachers or classmates and more. A
more specific task description for the scenario will be prepared using the
cognitive walkthrough method [Blackmon et al. 2002]. Then a representative group
of students will be asked to perform the tasks in a controlled environment that
will additionally record and recognize their emotional states. Information on
affect and its fluctuations (especially the identification of frustration) can
help to improve software products that are designed to support learning
processes.
5 Enhanced website customization
With the growth of the Internet, service providers collect more and more
information about their users. Based on these data, content, layout and ads are
displayed according to the user's profile. Adding information about the emotions
of users could provide more accurate personality models of the users.
We have defined two scenarios: the first to explore how the emotions of web
surfers influence their behavior on websites, the second to determine what
emotions are triggered by different types of on-line ads.
Scenario 1. Affective customization
The purpose of this scenario is to determine how emotions affect the way users
consume information on websites. The main expected outcome of this investigation
is a list of factors that, in conjunction with a specific mood, increase the
time spent on a website. The study will examine the impact of the following
factors: page layout, content and photo sets. Users will review a set of
prepared web pages (with different values of these factors). Their mood will be
recognized on the basis of biometric sensors and cameras. These data will be
aggregated with information about the time spent on each site. Analysis of the
results will help to determine, for example, which website layout would interest
a bored person and which is best for an angry one.
Scenario 2. Advertisement reaction model
The revenue model of a significant number of websites is based on on-line
advertising [Dubosson-Torbay et al. 2002]. However, users describe on-line ads
as uninformative, ineffective and very intrusive [McCoy et al. 2007].
The aim of this scenario is to find the most eye-catching and interesting
advertisement types for different groups of Internet users. In a laboratory
environment, the emotional reaction to various formats of on-line ads will be
measured. Additionally, using an eye-tracking tool, information about the user's
focus on an advert will be collected. Together, the time elapsed before noticing
the ad, the user's emotional response and the duration of focused attention will
make it possible to choose the appropriate type of advertising, depending on the
target audience.
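The combination of these measurements could, for instance, be reduced to a
single ranking score per ad format; the formats, numbers and weighting below are
entirely hypothetical:

```python
# Hypothetical measurements per ad format: seconds to notice, focus duration (s),
# and mean valence of the emotional response in [-1, 1] — all invented numbers.
ads = {
    "banner":      {"time_to_notice": 4.2, "focus": 1.1, "valence": -0.3},
    "video":       {"time_to_notice": 1.5, "focus": 3.8, "valence": 0.1},
    "native_text": {"time_to_notice": 6.0, "focus": 2.5, "valence": 0.2},
}

def score(m):
    # Reward formats that are noticed quickly, hold focus and are not irritating
    return m["focus"] + m["valence"] - 0.5 * m["time_to_notice"]

best = max(ads, key=lambda k: score(ads[k]))
print(best)
```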
6 Video games
Video games belong to the wide area of entertainment applications. Thus,
assuming the existence of human emotions and in fact building on them, they
attempt to make the player become emotionally attached to them. As the primary
goal of a video game is to entertain the player [Adams 2009], each video game
tries to allow the player to fulfill his or her "dream". Standard video games
try to do this in different ways depending on their genre, involving such
elements as good gameplay, immersive storytelling, novelty, graphics and so on
[Adams 2009].
Although video games belong to applications in which emotions naturally play an
important role, only a few of them try to incorporate their players' affective
state into the gameplay. Such games can be referred to as affective or, more
properly, affect-aware games. The importance of affect in delivering engaging
experiences in entertainment and educational games is well recognized, and the
potential profits for affect-aware video games should not be underestimated.
Unfortunately, this "affect-awareness" is usually "statically" built into the
game at its development stage, based on the assumed model of a so-called
representative player [Adams 2009]. There are two problems with such an
approach. Firstly, each player differs in some way from that averaged model.
Secondly, and more importantly, a player's affective state can change, even
radically, from session to session, making it almost impossible to predict the
current user's emotions at the development stage.
There are many factors that can influence a player's behavior during a game.
They can be divided into factors connected with the game, such as increasing
monotony or the player becoming accustomed to it, and game-independent factors
connected with the current physical and mental condition of the player. The
first group may to some extent be predicted or estimated by the game designer,
but that is impossible for the second group. That is why real-time recognition
of the player's affect may become so important for the video game industry in
the near future. Video games that are able to dynamically react to the
recognized current emotions of the player can be called truly affect-aware video
games.
Scenario 1. Emotional templates in video games
Though people are generally able to express a wide spectrum of emotions, not all
of them are observed while playing video games. The expressed emotions depend on
many factors, such as the game's genre and the player's gender and experience,
as well as on many other predictable or even unpredictable factors [Adams 2009].
Moreover, the emotional reaction in a given situation highly depends on
individual personality and even the current mood or unpredictable external
factors.
The goal of this scenario is to verify which emotions are common while playing
video games and which are rare. The scenario will also allow verifying the
hypothesis that for specific game genres and specific groups of players some
expressions are more common than others.
A typical test in this scenario will cover the recording of the player's
emotions while playing several different video games with specially prepared
scenes. An additional questionnaire will allow classifying the player's age,
gender and gaming experience.
Scenario 2. Stimuli adaptation
Games try to attract the player's attention and to hold it for as long as
possible. Game developers use an interesting story, high quality graphics and
intense arousal to keep the attention of the player.
The goal of this scenario is to verify whether long, frequent, repeated
stimulation causes a negative reaction of the player due to stimuli adaptation.
After a specified time the player may stop reacting to a stimulus, and it is
possible that he or she will expect a more and more powerful experience. Is it
necessary to keep the attention of the player at the highest level, or should
periods of intense emotional arousal be interlaced with periods of silence,
which allow the player's senses to rest?
The scenario assumes that a study group of players will be exposed to intense
stimuli during the game. The time after which the player's reaction to a
stimulus disappears, as well as the moment of weariness with the game, will be
measured. During the experiment, the observer and the player will determine the
moment or time period in which the response to stimuli was lost.
Scenario 3. Player reaction to external signals at different levels of immersion
Sometimes computer game players enter the virtual world so deeply that they stop
noticing the real world around them. It is important to monitor the depth of
players' engagement to detect when they stop responding to external stimuli.
This can help, for example, to protect players against addiction. When a player
is too absorbed in the play, an affect-aware game can reduce its attractiveness,
causing a return to the real world.
During the experiment the player's reaction to external stimuli will be
investigated. A possible problem may be achieving the appropriate involvement in
the game. The player's reaction (rapid/quiet, fast/slow) will determine the
degree of his or her engagement. During the experiment various types of
"disturbance" of the player, for example verbal expressions, noise, etc., will
be used.
Scenario 4. The influence of affect feedback on player's satisfaction
Sometimes a video game can become boring or too stressful for the player.
Detecting such situations would allow giving proper feedback to the player by
changing the character of the current gameplay. On the one hand, such a
capability could improve the satisfaction of experienced players; on the other,
it could protect novice or young players from excessive violence. Adapting the
feedback to stimulate the player can also be used in therapy, to suppress
negative emotions by proper stimulation.
The purpose of the scenario is to check whether affect-aware games can increase
players' satisfaction from playing. For this scenario a specially developed
affect-aware video game will be used. The scenario assumes the use of a
questionnaire in which the player will rate the level of satisfaction with the
game when its difficulty is increased and decreased according to the detected
emotional state.
7 Conclusions
The paper has presented a number of possible applications of emotion recognition
methods in different areas such as software engineering, website customization,
education and gaming. Although some of the presented research scenarios are
ready to be applied, for most of them a few challenging problems still have to
be solved. First of all, the limitations of emotion representation models do not
always let us precisely describe the actual feelings of a user. We often do not
know the real number of possible emotional states which should be considered in
an application. Even if we are able to define a set of emotions, the fuzzy
nature of emotional states and their instability over time entail further
difficulties. Moreover, the quality of the training data and the way emotion
labels are assigned strongly influence the results of the training algorithm.
Finally, the accuracy of the recognition process often does not fulfill the
requirements of a system working in a real environment. All this does not let us
ignore the high uncertainty of emotion recognition methods, especially when
combining separate models. This remains an open research problem requiring
investigation.
[Adams 2009]
Adams E (2009) Fundamentals of Game Design. New Riders Publishing
[Blackmon et al. 2002]
Blackmon M, Polson P, Kitajima M, Lewis C (2002) Cognitive Walkthrough for the Web. In: Human
Factors in Computing Systems: Proceedings of CHI 2002. ACM, New York: 463-470
[Dubosson-Torbay et al. 2002]
Dubosson-Torbay M, Osterwalder A, Pigneur Y (2002) E-business model design, classification, and
measurements. Thunderbird International Business Review 44(1): 5-23
[Hill 2009]
Hill D (2009) Emotionomics: Winning Hearts and Minds, Kogan Page Publishers, USA
[Hudlicka 2003]
Hudlicka E (2003) To feel or not to feel: The role of affect in human-computer interaction. Internation-
al Journal of Human-Computer Studies 59(1): 1-32
[Jerritta et al. 2011]
Jerritta S, Murugappan M, Nagarajan R, Wan K (2011) Physiological signals based human emotion
recognition: a review. In: Proc. IEEE 7th International Colloquium on Signal Processing and its Applications
[Kapoor and Picard 2005]
Kapoor A, Picard RW (2005) Multimodal affect recognition in learning environments. In: Proceedings
of the 13th annual ACM International Conference on Multimedia, pp 677 - 682
[Kołakowska et al. 2013]
Kołakowska A, Landowska A, Szwoch M, Szwoch W, Wróbel M (2013) Emotion Recognition and its
Application in Software Engineering. In: Proc 6th International Conference on Human Systems Inter-
action, pp 532-539
[Kołakowska 2013]
Kołakowska A (2013) A review of emotion recognition methods based on keystroke dynamics and
mouse movements. In: Proc 6th International Conference on Human Systems Interaction, pp 548-555
[Landowska 2013]
Landowska A (2013) Affective computing and affective learning methods, tools and prospects,
EduAction. Electronic education magazine (in printing)
[Linggaard et al. 2006]
Linggaard G, Fernandes G, Dudek C, Brown J (2006) Attention web designers: You have 50
miliseconds to make a good first impression, Behaviour and Information Technology, 25(2): 115-126
[MacCormack et al. 2003]
MacCormack A, Kemerer CF, Cusumano M, Crandall B (2003). Trade-offs between productivity and
quality in selecting software development practices. Software, IEEE, 20(5), 78-85
[McCoy et al. 2007]
McCoy S, Everard A, Polak P, Galletta DF (2007) The effects of online advertising. Communications
of the ACM 50(3): 84-88
[Picard 2003]
Picard R (2003) Affective computing: challenges. International Journal of Human-Computer Studies
59(1): 55-64
[Picard et al. 2004]
Picard R, Papert S, Bender W, Blumberg B, Breazeal C, Cavallo D, Strohecker C (2004) Affective
learning a manifesto. BT Technology Journal, 22(4): 253269
[Ren et al. 2011]
Ren Z, Meng J, Yuan J, Zhang Z (2011) Robust hand gesture recognition with kinect sensor. In: Proc
19th ACM international conference on Multimedia, pp 759-760
[Sheng et al. 2010]
Sheng Z, Zhu-ying L, Wan-xin D (2010) The model of e-learning based on affective computing. In:
Proc 3rd IEEE International Conference on Advanced Computer Theory and Engineering (ICACTE),
[Szwoch M 2013]
Szwoch M (2013) FEEDB: a Multimodal Database of Facial Expressions and Emotions In: Proc 6th
International Conference on Human Systems Interaction, pp 524-531
[Szwoch W 2013]
Szwoch W (2013) Using Physiological Signals for Emotion. In: Proc 6th International Conference on
Human Systems Interaction, pp 556-561
[Wróbel 2013]
Wróbel MR (2013) Emotions in the software development . In: Proc 6th International Conference on
Human Systems Interaction, pp 518-523
[Zeng et al. 2009]
Zeng Z, Pantic M, Roisman G, Huang T (2009) A survey of affect recognition methods: Audio, visual,
and spontaneous expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence,