A Brief Overview of Affective Multimedia Databases
Marko Horvat
Zagreb University of Applied Sciences
Department of Computer Science and Information Technology
Vrbik 8, 10000 Zagreb, Croatia
marko.horvat@tvz.hr
Abstract. Multimedia documents such as videos, images, sounds and text inevitably stimulate emotional
responses which can be measured. These documents
are stored in dedicated affective databases along with
descriptive metadata such as emotion and semantics.
The landscape of multimedia databases for elicitation
and estimation of emotions is very diverse. The
databases are continuously employed in many areas
such as psychology, psychiatry, neurosciences and
cognitive sciences in studies of emotional responses,
anxiety, stress, attention, cognitive states and in brain
research. Their data models are also being
increasingly used in computer science for sentiment
analysis, multifaceted search in multimedia retrieval
and automated recognition of emotions. Because of
their growing relevance, it is important to compile a
concise overview of the most important multimedia
databases for stimulation and estimation of emotions.
The aim of the paper is to help domain experts find the optimal database for their research more easily, and others to quickly familiarize themselves with this area. The overview lists the 24 most recent and frequently used affective multimedia databases, which jointly
contain 126,805 emotionally-annotated multimedia
documents, and describes their quintessential
properties.
Keywords. affective computing, multimedia,
databases, emotion stimuli, emotion estimation
1 Introduction
Even though it is not immediately apparent, all multimedia documents provoke emotional reactions of different
intensities and polarities (Coan & Allen, 2007).
Human-computer interfaces allow users to observe
pictures, video clips, generated graphics, read text or
listen to sounds, music and human voices which all,
deliberately or inadvertently, modulate their emotional
states (Brave & Nass, 2003). This spontaneous
cognitive process has many practical applications in
cognitive sciences, psychology and neuroscience
(Frantzidis et al., 2010). Affective multimedia is also
very important for various computer science domains
such as affective computing and human-computer
interaction (Palm & Glodek, 2013). Combined with sufficiently immersive and unobtrusive visualization hardware, such as a Head Mounted Display (HMD) or a high-resolution television set in a low-interference environment, affective multimedia databases provide a simple, low-cost and efficient means to scientifically study emotional impact (Villani & Riva, 2012).
Overall, the scope of emotion-related research is
growing and, accordingly, the importance of these
databases is steadily increasing.
Multimedia documents with annotated semantic
and emotion content are stored in affective multimedia
databases. Apart from the digital objects themselves, these databases contain metadata about their high-level semantics and the emotion that is statistically expected to be induced in a subject exposed to a multimedia document. The semantics are annotated manually by researchers, and the emotions are estimated in controlled experiments with human subjects.
The paper provides a short overview of the most
important contemporary affective multimedia
databases containing video, general pictures, pictures
of faces, audio and text. It is impossible to list all
databases in a short format since new ones are
continuously being developed. Furthermore, many are small, not publicly available, or constructed only for specific experiments.
2 Properties of affective multimedia
databases
Contemporary affective multimedia databases are not
relational databases or complex structures for massive
storage of multimodal data. In fact, they are simple
repositories of audio-visual multimedia documents
such as pictures, sounds, text, videos etc. with
described general semantics and emotion content. Two
distinct features differentiate affective multimedia
databases from other multimedia repositories: 1)
purpose of multimedia documents and 2) emotion
representation of multimedia documents. Multimedia
documents in affective multimedia databases are
explicitly aimed at inducing or stimulating emotions in
exposed subjects. As such they are usually referred to
as stimuli. All multimedia documents (i.e. stimuli) in
affective multimedia databases have specified
semantic and emotional content. Sometimes they are accompanied by additional metadata such as elicited physiological or neurological signals. Importantly,
these databases are still very difficult to build, use and
maintain (Horvat, Popović & Ćosić, 2013).
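To make these properties concrete, the following is a minimal sketch of how a single stimulus record could be represented in Python; the field names and example values are illustrative and are not taken from any particular database.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AffectiveStimulus:
    """Illustrative record for one emotionally-annotated multimedia document."""
    stimulus_id: str                  # e.g. a picture number in a repository
    modality: str                     # "PIC", "VID", "SND", "TXT", "FAC", ...
    keywords: List[str] = field(default_factory=list)   # free-text semantic tags
    valence: Optional[float] = None   # dimensional rating, typically 1.0-9.0
    arousal: Optional[float] = None   # dimensional rating, typically 1.0-9.0
    discrete_emotion: Optional[str] = None  # e.g. "fear" when discrete norms exist

# Hypothetical example: a picture stimulus described with a single keyword.
spider = AffectiveStimulus("1200", "PIC", ["spider"],
                           valence=3.5, arousal=6.0, discrete_emotion="fear")
print(spider)
```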
Affective multimedia databases are created by
different groups of researchers and usually shared
freely for scientific and educational purposes (Bradley
& Lang, 2000). The databases are standardized, which allows them to be used in a controllable and predictable manner (Horvat, Bogunović & Ćosić, 2014). With affective multimedia databases the process of emotion stimulation is not stochastic, unpredictable or singular, but articulated, controlled and comparable to scientifically established practices. An important
consequence of the standardization is that the emotion
elicitation effects can be measured, replicated and
validated by different research teams. Before standardized affective multimedia databases were developed, researchers had to create unique stimuli sequences for each emotion experiment. Stimuli used
in one laboratory were rarely used by other
laboratories. Attempts at recreating the same
experimental materials from descriptions in the
literature were time-consuming, difficult and prone to
errors. Therefore, the development of affective
multimedia databases represents a significant
improvement in the study of emotions, behaviour and
cognition.
The most cited affective repository for emotion
elicitation is The Pictures of Facial Affect (POFA)
(Ekman & Friesen, 1975). The Nencki Affective
Picture System (NAPS) (Marchewka et al., 2014)
together with its domain expansions The Nencki
Affective Picture System discrete emotional categories
(NAPS BE) (Riegel et al., 2016) and The Erotic subset
for the Nencki Affective Picture System (NAPS ERO)
(Wierzba et al., 2015) are the newest large general
repositories. Examples of typical stimuli from some of
the datasets are shown in Fig. 1.
Figure 1. Exemplar visual stimuli from four
repositories.
From top to bottom and left to right: NAPS
(Marchewka et al., 2014), The International Affective
Picture System (IAPS) (Lang, Bradley & Cuthbert,
2008), The Military Affective Picture System (MAPS)
(Goodman, Katz & Dretsch, 2016), The NimStim Face
Stimulus Set (Tottenham et al., 2009).
The following sections describe in more detail
semantic and emotional models used in contemporary
affective multimedia databases.
2.1 Semantic models
Affective multimedia databases have very simple
semantic models. The stimuli are described with
mutually unrelated keywords from unsupervised
glossaries. Most often only a single keyword is used to
describe a document. Moreover, lexical variations and
synonyms are often used for description of similar
concepts. Some databases organize documents in
several semantic categories such as “people”,
“objects”, “landscape”, “faces” etc. However, semantic
relations between different concepts, and documents
which they describe, are left broadly undefined. For
example, in the IAPS a picture portraying an attack dog
can be tagged as “dog”, “canine” and “hound”,
“attack”, “attackdog” and “attack_dog”. Pictures
displaying people are described with singulars and
plurals such as “man” and “men” or “woman” and
“women”. Natural language processing methods and
more sophisticated knowledge representation schemes
are necessary to improve information retrieval performance of affective multimedia databases (Horvat, Vuković
& Car, 2016). Document retrieval is possible only with
lexical relatedness measures since there are no criteria
to calculate semantic similarity between concepts in
query and document metadata descriptions (Horvat,
Vuković & Car, 2016). More expressive and formal
semantic models are not possible without modification
of database multimedia descriptors and introduction of
appropriate knowledge structures (Horvat, Bogunović
& Ćosić, 2014; Horvat et al., 2009). As has already been experimentally shown, inadequate semantic descriptors result in three negative effects which impair
stimuli retrieval: 1) low recall, 2) low precision and
high recall or 3) vocabulary mismatch (Horvat,
Bogunović & Ćosić, 2014; Horvat, Vuković & Car,
2016).
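As an illustration of keyword retrieval based purely on lexical relatedness, the sketch below ranks stimuli by WordNet path similarity between a query term and their tags. It assumes NLTK and its WordNet corpus are installed; the keyword lists are hypothetical rather than actual IAPS metadata.

```python
# Assumes: pip install nltk  and  nltk.download('wordnet')  have been run.
from nltk.corpus import wordnet as wn

def lexical_similarity(word_a: str, word_b: str) -> float:
    """Best WordNet path similarity over all noun senses of the two words."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(word_a, pos=wn.NOUN)
              for s2 in wn.synsets(word_b, pos=wn.NOUN)]
    return max(scores, default=0.0)

def rank_stimuli(query: str, stimuli: dict) -> list:
    """Rank stimuli by the best relatedness between the query and any tag."""
    ranked = [(sid, max(lexical_similarity(query, tag) for tag in tags))
              for sid, tags in stimuli.items()]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# "dog" and "hound" describe similar concepts with different tags.
catalogue = {"img_01": ["dog"], "img_02": ["hound"], "img_03": ["landscape"]}
print(rank_stimuli("canine", catalogue))
```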
2.2 Emotion models
Documents in affective multimedia databases are
described with at least one of the two emotion models:
categorical and dimensional (Peter & Herbon, 2006).
The dimensional model, which is also called
Circumplex model of affect (Posner, Russell &
Peterson, 2005) or Pleasure Arousal Dominance model
(PAD) (Mehrabian, 1996), is founded on theories of
emotion which propose that affective meaning can be
well characterized by a small number of dimensions.
Dimensions are chosen based on their ability to statistically
characterize subjective emotional ratings with the least
number of dimensions possible (Bradley & Lang,
1994). These dimensions generally include one bipolar
or two unipolar dimensions that represent positivity
and negativity and have been labelled in various ways,
such as valence or pleasure. Moreover, usually
included is a dimension that captures intensity, arousal,
or energy level. In computer models these dimensions
are described with two orthogonal vectors called
valence and arousal which form a two-dimensional Cartesian space. The lengths of the vectors, i.e. the dimensional emotion values or normative ratings, are real numbers between 1.0 and 9.0. Such a model is simple and easy to represent in digital systems.
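As a small illustration of this representation, the sketch below treats each stimulus as a point in the valence-arousal plane and computes the distance between two hypothetical normative ratings on the 1.0-9.0 scale; the ratings themselves are made up for the example.

```python
import math

def affective_distance(a: tuple, b: tuple) -> float:
    """Euclidean distance between two (valence, arousal) ratings."""
    return math.dist(a, b)

# Hypothetical ratings: a pleasant, calm scene vs. an unpleasant, arousing one.
calm_scene = (7.2, 3.1)
threat_scene = (2.4, 6.8)
print(affective_distance(calm_scene, threat_scene))  # ~6.06
```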
In contrast to the dimensional theories, categorical
theories claim that the dimensional models,
particularly those using only two dimensions, do not
accurately reflect the neural systems underlying
emotional responses. Instead, supporters of these
theories propose that there are many emotions that are
universal across cultures and have an evolutionary and
biological basis (Ekman, 1992). Which discrete
emotions are included in these theories is a point of
contention, as is the choice of which dimensions to
include in the dimensional models. Most supporters of
discrete emotion theories agree that six primary
emotions exist: happiness, sadness, surprise, anger,
fear and disgust. Basic emotions can be represented as
areas inside the valence-arousal space. Their exact
shape and location are individually dependent and have
a developmental trajectory throughout a person’s
lifetime (Posner, Russell & Peterson, 2005).
Dimensional and categorical theories of affect can both
effectively describe emotion in digital systems but are
not mutually exclusive. Some repositories already
incorporate both theories of emotion, for example
(Riegel et al., 2016). Annotations according to both
theories are useful because they provide a more
complete characterization of stimuli affect.
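The following sketch illustrates how discrete labels could be layered over dimensional ratings with a crude quadrant heuristic. As noted above, real category regions are individually dependent and far less regular, so the boundaries used here are illustrative only.

```python
def coarse_affect(valence: float, arousal: float, midpoint: float = 5.0) -> str:
    """Crude quadrant heuristic on the 1.0-9.0 valence-arousal scale.

    Real category regions are individually dependent; this only illustrates
    how both emotion models can coexist over the same normative ratings.
    """
    if valence >= midpoint:
        return "happiness/excitement" if arousal >= midpoint else "contentment"
    return "fear/anger" if arousal >= midpoint else "sadness"

print(coarse_affect(2.4, 6.8))  # -> "fear/anger"
```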
It has been experimentally proven that visual stimuli from the IAPS produce different responses in skin conductance, startle reflex, breathing and heart rate depending on their emotional content (Bradley et al., 2001a; Bradley et al., 2001b; Kukolja et al., 2014). The relationship of EEG signals to emotion phenomena is being intensively investigated (Wu et al., 2010). Recently, new open-access toolboxes have been developed to simplify in-depth analysis of physiological time-series such as ECG and EEG (Jovic et al., 2016).
3 Catalogue of affective multimedia
databases
The landscape of affective multimedia databases is
very diverse. Most of the databases are pictorial and small. A substantial number of databases are rarely used, while some larger databases are employed frequently. To avoid the negative effects of habituation, emotion should be induced with visually and auditorily different stimuli that carry the same or compatible semantics (Coan & Allen, 2007). For this reason, researchers often employ the largest affective multimedia databases, which are adequately diverse.
Smaller databases with fewer different stimuli are
employed only for small-scale or domain research
where the lack of stimuli variety is less important.
The catalogue of affective multimedia databases is
in Table 1. Apart from the POFA (Ekman & Friesen,
1975), the most important affective picture repository
is The International Affective Picture System (IAPS)
(Lang, Bradley & Cuthbert, 2008). Other significant
databases are: The Geneva affective picture database
(GAPED) (Dan-Glauser & Scherer, 2011), NAPS
(Marchewka et al., 2014), NAPS BE (Riegel et al.,
2016), NAPS ERO (Wierzba et al., 2015), as well as
DIsgust-RelaTed-Images (DIRTI) database
(Haberkamp et al., 2017), Set of Fear Inducing Pictures
(SFIP) (Michałowski et al., 2017), MAPS (Goodman,
Katz & Dretsch, 2016), Besançon Affective Picture
Set-Adolescents (BAPS-Ado) (Szymanska et al.,
2015), LIRIS-ACCEDE (Baveye et al., 2015), Geneva
faces and voices (GEFAV) database (Ferdenzi et al.,
2015), Child Affective Facial Expression Set (CAFE)
(LoBue & Thrasher, 2014), Affectiva-MIT Facial
Expression Dataset (AM-FED) (McDuff et al., 2013),
Emotional Movie Database (EMDB) (Carvalho et al.,
2012), DEAP: A Database for Emotion Analysis using
Physiological Signals (Koelstra et al., 2012), NIMH
Child Emotional Faces Picture Set (NIMH-ChEFS)
(Egger et al., 2011), Radboud Faces Database (RaFD)
(Langner et al., 2010), NimStim Face Stimulus Set
(Tottenham et al., 2009), CAS-PEAL Large-Scale
Chinese Face Database (Gao et al., 2008), International
Affective Digitized Sounds (IADS) (Bradley & Lang,
2007a), Affective Norms for English Texts (ANET)
(Bradley & Lang, 2007b), Karolinska Directed
Emotional Faces (KDEF) (Lundqvist, Flykt & Öhman,
1998), Japanese Female Facial Expression (JAFFE)
Database (Lyons et al., 1998) and, finally, Affective
Norms for English Words (ANEW) (Bradley & Lang,
1999).
Altogether, there are 3 video databases (VID), 8 picture databases (PIC), 8 picture face databases (FAC), 1 video and face database (FAC VID), 1 face, video and sound database (FAC VID SND), 2 text databases (TXT) and 1 database with only sounds (SND). The stimuli are annotated with the discrete emotion model (DIS), with the dimensional model (DIM), or with both (DIM DIS). IAPS and IADS stimuli were originally annotated only with valence and arousal values, but discrete emotion norms were later added (DIS*).
Table 1. The list of the most often used collections of audio-visual stimuli, sorted from the newest to the oldest.

Name             Modality       Emotion     Stimuli No.   Published
DIRTI            PIC            DIM DIS     300           2017
NAPS/NAPS BE     PIC            DIM DIS     1,356         2014-2016
SFIP             PIC            DIM         1,400         2016
NAPS ERO         PIC            DIM         200           2015
MAPS             PIC            DIM         240           2015
BAPS-Ado         PIC            DIM DIS     93            2015
LIRIS-ACCEDE     VID            DIM         9,800         2015
GEFAV            FAC VID SND    DIS         111           2014
CAFE             FAC            DIS         1,192         2014
AM-FED           FAC VID        DIS         242           2013
EMDB             VID            DIM         52            2012
DEAP             VID            DIM         120           2012
GAPED            PIC            DIS         730           2011
NIMH-ChEFS       FAC            DIS         482           2011
RaFD             FAC            DIS         536           2010
NimStim          FAC            DIS         672           2009
CAS-PEAL         FAC            DIS         99,594        2008
IAPS             PIC            DIM DIS*    1,182         1997-2008
IADS             SND            DIM DIS*    111           1999-2007
ANET             TXT            DIM         60            1999-2007
KDEF             FAC            DIS         4,900         1998
JAFFE            FAC            DIS         213           1998
ANEW             TXT            DIM         3,109         1999
POFA             FAC            DIS         110           1976-1993
Some datasets had multiple revisions in the designated time frame. Annotations are explained in the text above. The newest repository is DIRTI, published in 2017; the largest is CAS-PEAL with 99,594 pictures of 1,040 individuals; and the most expressively annotated is NAPS/NAPS BE/NAPS ERO as a single linked corpus. As can be seen, facial expression databases, or face databases, are the most numerous modality among affective multimedia databases. Besides being employed in emotion elicitation, these databases are also often used in computer vision for face recognition and face detection. For a more detailed overview of these databases consult (Gao et al., 2008). The presented catalogue is not exhaustive; nevertheless, it includes the most frequently referenced databases. Additional lists of affective multimedia databases which may be used for emotion estimation can be found in (Baveye et al., 2015; Zeng et al., 2009; Gao et al., 2008).
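For readers who want to reuse the catalogue programmatically, the sketch below re-encodes Table 1 as a plain Python list and reproduces the total document count reported in this paper; the data structure itself is illustrative, not part of any of the listed databases.

```python
# Table 1 as (name, modality, emotion model, stimuli count) tuples.
CATALOGUE = [
    ("DIRTI", "PIC", "DIM DIS", 300), ("NAPS/NAPS BE", "PIC", "DIM DIS", 1356),
    ("SFIP", "PIC", "DIM", 1400), ("NAPS ERO", "PIC", "DIM", 200),
    ("MAPS", "PIC", "DIM", 240), ("BAPS-Ado", "PIC", "DIM DIS", 93),
    ("LIRIS-ACCEDE", "VID", "DIM", 9800), ("GEFAV", "FAC VID SND", "DIS", 111),
    ("CAFE", "FAC", "DIS", 1192), ("AM-FED", "FAC VID", "DIS", 242),
    ("EMDB", "VID", "DIM", 52), ("DEAP", "VID", "DIM", 120),
    ("GAPED", "PIC", "DIS", 730), ("NIMH-ChEFS", "FAC", "DIS", 482),
    ("RaFD", "FAC", "DIS", 536), ("NimStim", "FAC", "DIS", 672),
    ("CAS-PEAL", "FAC", "DIS", 99594), ("IAPS", "PIC", "DIM DIS*", 1182),
    ("IADS", "SND", "DIM DIS*", 111), ("ANET", "TXT", "DIM", 60),
    ("KDEF", "FAC", "DIS", 4900), ("JAFFE", "FAC", "DIS", 213),
    ("ANEW", "TXT", "DIM", 3109), ("POFA", "FAC", "DIS", 110),
]

# Total number of documents across all 24 repositories (126,805).
print(sum(count for _, _, _, count in CATALOGUE))

# Example query: picture databases annotated with both emotion models.
both_models = [name for name, modality, emotion, _ in CATALOGUE
               if modality == "PIC" and "DIM" in emotion and "DIS" in emotion]
print(both_models)  # -> DIRTI, NAPS/NAPS BE, BAPS-Ado, IAPS
```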
4 Conclusion
In this article we have presented a compact catalogue
of affective multimedia databases. In total, 24
repositories were listed containing 126,805 documents
of different modalities. The principal purpose of these
databases is to provoke emotional reactions in a
predictable manner. Their multimedia content is
described with the dimensional or discrete emotion
model, or sometimes both.
The wider field of affective multimedia databases
research provides several interesting directions for
investigation and development. Most importantly,
these databases should be interlinked and become more
practical through multifaceted queries combining
semantics, emotion and other available metadata.
Construction of new repositories must be facilitated, as
well as creation of personalized stimuli sequences.
Introduction of structured data sources has already
shown promising results in document retrieval quality
(Horvat, Grbin & Gledec, 2013). Knowledge discovery
of integrated data could provide statistically significant
indicators of hidden relationships between semantics
and emotion (Horvat, Popović & Ćosić, 2012).
Looking even further into the future, truly multimodal
corpora containing haptic and olfactory stimuli are
likely to be developed. Eventually, interactive virtual
reality stimuli will become commonplace.
References
Baveye, Y., Dellandrea, E., Chamaret, C., & Chen, L.
(2015). Liris-accede: A video database for
affective content analysis. IEEE Transactions on
Affective Computing, 6(1), 43-55.
Bradley, M. M., & Lang, P. J. (1994). Measuring
emotion: the self-assessment manikin and the
semantic differential. Journal of behavior therapy
and experimental psychiatry, 25(1), 49-59.
Bradley, M. M., & Lang, P. J. (1999). Affective norms
for English words (ANEW): Instruction manual
and affective ratings (pp. 1-45). Technical report
C-1, the center for research in psychophysiology,
University of Florida.
Bradley, M. M., & Lang, P. J. (2000). Measuring
emotion: Behavior, feeling, and
physiology. Cognitive neuroscience of
emotion, 25, 49-59.
Bradley, M. M., & Lang, P. J. (2007a). The
International Affective Digitized Sounds (IADS-
2): Affective ratings of sounds and instruction
manual. University of Florida, Gainesville, FL,
Tech. Rep. B-3.
Bradley, M. M., & Lang, P. J. (2007b). Affective Norms for English Text (ANET): Affective ratings of text and instruction manual. Technical Report D-1, University of Florida, Gainesville, FL.
Bradley, M. M., Codispoti, M., Cuthbert, B. N., &
Lang, P. J. (2001a). Emotion and motivation I:
defensive and appetitive reactions in picture
processing. Emotion, 1(3), 276.
Bradley, M. M., Codispoti, M., Sabatinelli, D., &
Lang, P. J. (2001b). Emotion and motivation II: sex
differences in picture processing. Emotion, 1(3),
300.
Brave, S., & Nass, C. (2003). Emotion in human–
computer interaction. Human-Computer
Interaction, 53.
Carvalho, S., Leite, J., Galdo-Álvarez, S., &
Gonçalves, O. F. (2012). The emotional movie
database (EMDB): A self-report and
psychophysiological study. Applied
psychophysiology and biofeedback, 37(4), 279-
294.
Coan, J. A., & Allen, J. J. (2007). Handbook of
emotion elicitation and assessment. Oxford
university press.
Dan-Glauser, E. S., & Scherer, K. R. (2011). The
Geneva affective picture database (GAPED): a
new 730-picture database focusing on valence and
normative significance. Behavior research
methods, 43(2), 468.
Egger, H. L., Pine, D. S., Nelson, E., Leibenluft, E.,
Ernst, M., Towbin, K. E., & Angold, A. (2011).
The NIMH Child Emotional Faces Picture Set
(NIMH‐ChEFS): a new set of children's facial
emotion stimuli. International Journal of Methods
in Psychiatric Research, 20(3), 145-156.
Ekman, P. (1992). Are there basic emotions?
Psychological Review, 99, 550–553.
Ekman, P., & Friesen, W. V. (1975). Pictures of
facial affect. Consulting psychologists press.
Ferdenzi, C., Delplanque, S., Mehu-Blantar, I.,
Cabral, K. M. D. P., Felicio, M. D., & Sander, D.
(2015). The Geneva faces and voices (GEFAV)
database. Behavior research methods, 47(4),
1110-1121.
Frantzidis, C. A., Bratsas, C., Papadelis, C. L.,
Konstantinidis, E., Pappas, C., & Bamidis, P. D.
(2010). Toward emotion aware computing: an
integrated approach using multichannel
neurophysiological recordings and affective visual
stimuli. IEEE Transactions on Information
Technology in Biomedicine, 14(3), 589-597.
Gao, W., Cao, B., Shan, S., Chen, X., Zhou, D.,
Zhang, X., & Zhao, D. (2008). The CAS-PEAL
large-scale Chinese face database and baseline
evaluations. IEEE Transactions on Systems, Man,
and Cybernetics-Part A: Systems and
Humans, 38(1), 149-161.
Goodman, A. M., Katz, J. S., & Dretsch, M. N.
(2016). Military Affective Picture System
(MAPS): A new emotion-based stimuli set for
assessing emotional processing in military
populations. Journal of behavior therapy and
experimental psychiatry, 50, 152-161.
Haberkamp, A., Glombiewski, J. A., Schmidt, F., &
Barke, A. (2017). The DIsgust-RelaTed-Images
(DIRTI) database: Validation of a novel
standardized set of disgust pictures. Behaviour
Research and Therapy, 89, 86-94.
Horvat, M., Popović, S., Bogunović, N., & Ćosić, K.
(2009). Tagging multimedia stimuli with
ontologies. In Information & Communication
Technology Electronics & Microelectronics
(MIPRO), 2009 32nd International Convention
on (pp. 203-208). IEEE.
Horvat, M., Popović, S., & Ćosić, K. (2012). Towards
semantic and affective coupling in emotionally
annotated databases. In MIPRO, 2012
Proceedings of the 35th International
Convention (pp. 1003-1008). IEEE.
Horvat, M., Grbin, A., & Gledec, G. (2013). WNtags:
A web-based tool for image labeling and retrieval
with lexical ontologies. Frontiers in artificial
intelligence and applications, 243, 585–594.
Horvat, M., Popović, S., & Ćosić, K. (2013).
Multimedia stimuli databases usage patterns: a
survey report. In Information & Communication
Technology Electronics & Microelectronics
(MIPRO), 2013 36th International Convention
on (pp. 993-997). IEEE.
Horvat, M., Bogunović, N., & Ćosić, K. (2014).
STIMONT: a core ontology for multimedia
stimuli description. Multimedia tools and
applications, 73(3), 1103-1127.
Horvat, M., Vuković, M., & Car, Ž. (2016).
Evaluation of keyword search in affective
multimedia databases. In Transactions on
Computational Collective Intelligence XXI (pp.
50-68). Springer Berlin Heidelberg.
Jovic, A., Kukolja, D., Jozic, K., & Horvat, M. (2016,
May). A web platform for analysis of multivariate
heterogeneous biomedical time-series—A
preliminary report. In Systems, Signals and Image
Processing (IWSSIP), 2016 International
Conference on (pp. 1-4). IEEE.
Koelstra, S., Muhl, C., Soleymani, M., Lee, J. S.,
Yazdani, A., Ebrahimi, T., ... & Patras, I. (2012).
Deap: A database for emotion analysis; using
physiological signals. IEEE Transactions on
Affective Computing, 3(1), 18-31.
Kukolja, D., Popović, S., Horvat, M., Kovač, B., &
Ćosić, K. (2014). Comparative analysis of
emotion estimation methods based on
physiological measurements for real-time
applications. International journal of human-
computer studies, 72(10), 717-727.
Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2008).
International affective picture system (IAPS):
Affective ratings of pictures and instruction
manual. Technical report A-8.
Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D.
H., Hawk, S. T., & van Knippenberg, A. (2010).
Presentation and validation of the Radboud Faces
Database. Cognition and emotion, 24(8), 1377-
1388.
LoBue, V., & Thrasher, C. (2014). The Child
Affective Facial Expression (CAFE) set: Validity
and reliability from untrained adults. Frontiers in
psychology, 5.
Lundqvist, D., Flykt, A., & Öhman, A. (1998). The
Karolinska directed emotional faces (KDEF). CD
ROM from Department of Clinical Neuroscience,
Psychology section, Karolinska Institutet, 91-630.
Lyons, M., Akamatsu, S., Kamachi, M., & Gyoba, J.
(1998, April). Coding facial expressions with
gabor wavelets. In Automatic Face and Gesture
Recognition, 1998. Proceedings. Third IEEE
International Conference on (pp. 200-205). IEEE.
Marchewka, A., Żurawski, Ł., Jednoróg, K., &
Grabowska, A. (2014). The Nencki Affective
Picture System (NAPS): Introduction to a novel,
standardized, wide-range, high-quality, realistic
picture database. Behavior research
methods, 46(2), 596-610.
McDuff, D., Kaliouby, R., Senechal, T., Amr, M.,
Cohn, J., & Picard, R. (2013). Affectiva-mit facial
expression dataset (am-fed): Naturalistic and
spontaneous facial expressions collected.
In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition
Workshops (pp. 881-888).
Mehrabian, A. (1996). Pleasure-arousal-dominance: A
general framework for describing and measuring
individual differences in temperament. Current
Psychology, 14(4), 261-292.
Michałowski, J. M., Droździel, D., Matuszewski, J.,
Koziejowski, W., Jednoróg, K., & Marchewka, A.
(2016). The Set of Fear Inducing Pictures (SFIP):
Development and validation in fearful and
nonfearful individuals. Behavior Research
Methods, 1-13.
Palm, G., & Glodek, M. (2013). Towards emotion
recognition in human computer interaction.
In Neural nets and surroundings (pp. 323-336).
Springer Berlin Heidelberg.
Peter, C., & Herbon, A. (2006). Emotion
representation and physiology assignments in
digital systems. Interacting with
Computers, 18(2), 139-170.
Posner, J., Russell, J. A., & Peterson, B. S. (2005).
The circumplex model of affect: An integrative
approach to affective neuroscience, cognitive
development, and psychopathology. Development
and psychopathology, 17(03), 715-734.
Riegel, M., Żurawski, Ł., Wierzba, M., Moslehi, A.,
Klocek, Ł., Horvat, M., ... & Marchewka, A.
(2016). Characterization of the Nencki Affective
Picture System by discrete emotional categories
(NAPS BE). Behavior research methods, 48(2),
600-612.
Szymanska, M., Monnin, J., Noiret, N., Tio, G.,
Galdon, L., Laurent, E., ... & Vulliez-Coady, L.
(2015). The Besançon Affective Picture Set-
Adolescents (the BAPS-Ado): Development and
validation. Psychiatry research, 228(3), 576-584.
Tottenham, N., Tanaka, J. W., Leon, A. C., McCarry,
T., Nurse, M., Hare, T. A., ... & Nelson, C. (2009).
The NimStim set of facial expressions: judgments
from untrained research participants. Psychiatry
research, 168(3), 242-249.
Villani, D., & Riva, G. (2012). Does interactive media
enhance the management of stress? Suggestions
from a controlled study. Cyberpsychology,
Behavior, and Social Networking, 15(1), 24-30.
Wierzba, M., Riegel, M., Pucz, A., Leśniewska, Z.,
Dragan, W. Ł., Gola, M., ... & Marchewka, A.
(2015). Erotic subset for the Nencki Affective
Picture System (NAPS ERO): cross-sexual
comparison study. Frontiers in psychology, 6,
1336.
Wu, D., Courtney, C. G., Lance, B. J., Narayanan, S.
S., Dawson, M. E., Oie, K. S., & Parsons, T. D.
(2010). Optimal arousal identification and
classification for affective computing using
physiological signals: Virtual reality stroop
task. IEEE Transactions on Affective
Computing, 1(2), 109-118.
Zeng, Z., Pantic, M., Roisman, G. I., & Huang, T. S.
(2009). A survey of affect recognition methods:
Audio, visual, and spontaneous expressions. IEEE
transactions on pattern analysis and machine
intelligence, 31(1), 39-58.