A Brief Overview of Affective Multimedia Databases
Marko Horvat
Zagreb University of Applied Sciences
Department of Computer Science and Information Technology
Vrbik 8, 10000 Zagreb, Croatia
marko.horvat@tvz.hr
Abstract. Multimedia documents such as videos, images, sounds and text inevitably stimulate emotional
responses which can be measured. These documents
are stored in dedicated affective databases along with
descriptive metadata such as emotion and semantics.
The landscape of multimedia databases for elicitation
and estimation of emotions is very diverse. The
databases are continuously employed in many areas
such as psychology, psychiatry, neurosciences and
cognitive sciences in studies of emotional responses,
anxiety, stress, attention, cognitive states and in brain
research. Their data models are also being
increasingly used in computer science for sentiment
analysis, multifaceted search in multimedia retrieval
and automated recognition of emotions. Because of
their growing relevance, it is important to compile a
concise overview of the most important multimedia
databases for stimulation and estimation of emotions.
The aim of the paper is to help domain experts to find
more easily the optimal database for their research,
and others to quickly familiarize themselves with this
area. The overview lists the 24 most recent and frequently
used affective multimedia databases, which jointly
contain 126,805 emotionally-annotated multimedia
documents, and describes their quintessential
properties.
Keywords. affective computing, multimedia,
databases, emotion stimuli, emotion estimation
1 Introduction
Even though it is not immediately apparent, all multimedia
documents provoke emotional reactions of different
intensities and polarities (Coan & Allen, 2007).
Human-computer interfaces allow users to observe
pictures, video clips, generated graphics, read text or
listen to sounds, music and human voices which all,
deliberately or inadvertently, modulate their emotional
states (Brave & Nass, 2003). This spontaneous
cognitive process has many practical applications in
cognitive sciences, psychology and neuroscience
(Frantzidis et al., 2010). Affective multimedia is also
very important for various computer science domains
such as affective computing and human-computer
interaction (Palm & Glodek, 2013). Combined with sufficiently immersive and unobtrusive visualization hardware, such as a Head-Mounted Display (HMD) or a high-resolution television set in a low-interference ambient setting, affective multimedia databases provide a simple, low-cost and efficient means to study emotional impact scientifically (Villani & Riva, 2012).
Overall, the scope of emotion-related research is
growing and, accordingly, the importance of these
databases is steadily increasing.
Multimedia documents with annotated semantic
and emotion content are stored in affective multimedia
databases. Apart from the digital objects themselves, these databases contain metadata about their high-level semantics and
statistically expected emotion that will be induced in a
subject when exposed to a multimedia document. The
semantics is annotated manually by researchers and
emotions are estimated in controlled experiments with
human subjects.
The paper provides a short overview of the most
important contemporary affective multimedia
databases containing video, general pictures, pictures
of faces, audio and text. It is impossible to list all
databases in a short format since new ones are
continuously being developed. Further, many are either
small or not publicly available and constructed for
specific experiments.
2 Properties of affective multimedia
databases
Contemporary affective multimedia databases are not
relational databases or complex structures for massive
storage of multimodal data. In fact, they are simple
repositories of audio-visual multimedia documents
such as pictures, sounds, text, videos etc. with
described general semantics and emotion content. Two
distinct features differentiate affective multimedia
databases from other multimedia repositories: 1)
purpose of multimedia documents and 2) emotion
representation of multimedia documents. Multimedia
documents in affective multimedia databases are
explicitly aimed at inducing or stimulating emotions in
exposed subjects. As such they are usually referred to
as stimuli. All multimedia documents (i.e. stimuli) in
affective multimedia databases have specified
semantic and emotional content. Sometimes they are
accompanied with specific meta-data such as elicited
psychological or neurological signals. Importantly,
these databases are still very difficult to build, use and
maintain (Horvat, Popović & Ćosić, 2013).
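A record in such a repository can be pictured as a small annotated structure combining semantic tags with emotion ratings. The following sketch is purely illustrative; the field names and the example values are assumptions for this paper, not the schema of any particular database.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Stimulus:
    """One emotionally annotated multimedia document (illustrative schema)."""
    file_id: str                      # document identifier within the database
    modality: str                     # e.g. "PIC", "SND", "VID", "TXT", "FAC"
    keywords: List[str] = field(default_factory=list)  # free-form semantic tags
    valence: Optional[float] = None   # dimensional rating, 1.0-9.0
    arousal: Optional[float] = None   # dimensional rating, 1.0-9.0
    emotion: Optional[str] = None     # discrete category, e.g. "fear"

# Invented example values, for illustration only.
snake = Stimulus("1050", "PIC", ["snake"], valence=3.5, arousal=6.9, emotion="fear")
print(snake.modality, snake.emotion)
```

Real databases distribute such annotations as spreadsheets or technical-report tables rather than code, but the fields map directly onto this kind of record.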
Affective multimedia databases are created by
different groups of researchers and usually shared
freely for scientific and educational purposes (Bradley
& Lang, 2000). The databases are standardized which
allows them to be used in a controllable and predictable
manner (Horvat, Bogunović & Ćosić, 2014). With
affective multimedia databases the process of emotion
stimulation is not stochastic, unpredictable or singular
but articulated, controlled and comparable to the
scientifically established practices. An important
consequence of the standardization is that the emotion
elicitation effects can be measured, replicated and
validated by different research teams. Before standardized affective multimedia databases were developed, researchers had to create unique stimuli sequences for each emotion experiment. Stimuli used
in one laboratory were rarely used by other
laboratories. Attempts at recreating the same
experimental materials from descriptions in the
literature were time-consuming, difficult and prone to
errors. Therefore, the development of affective
multimedia databases represents a significant
improvement in the study of emotions, behaviour and
cognition.
The most cited affective repository for emotion
elicitation is The Pictures of Facial Affect (POFA)
(Ekman & Friesen, 1975). The Nencki Affective
Picture System (NAPS) (Marchewka et al., 2014)
together with its domain expansions The Nencki
Affective Picture System discrete emotional categories
(NAPS BE) (Riegel et al., 2016) and The Erotic subset
for the Nencki Affective Picture System (NAPS ERO)
(Wierzba et al., 2015) are the newest large general
repositories. Examples of typical stimuli from some of
the datasets are shown in Fig. 1.
Figure 1. Exemplar visual stimuli from four
repositories.
From top to bottom and left to right: NAPS
(Marchewka et al., 2014), The International Affective
Picture System (IAPS) (Lang, Bradley & Cuthbert,
2008), The Military Affective Picture System (MAPS)
(Goodman, Katz & Dretsch, 2016), The NimStim Face
Stimulus Set (Tottenham et al., 2009).
The following sections describe in more detail
semantic and emotional models used in contemporary
affective multimedia databases.
2.1 Semantic models
Affective multimedia databases have very simple
semantic models. The stimuli are described with
mutually unrelated keywords from unsupervised
glossaries. Most often only a single keyword is used to
describe a document. Moreover, lexical variations and
synonyms are often used for description of similar
concepts. Some databases organize documents in
several semantic categories such as “people”,
“objects”, “landscape”, “faces” etc. However, semantic
relations between different concepts, and documents
which they describe, are left broadly undefined. For
example, in the IAPS a picture portraying an attack dog
can be tagged as “dog”, “canine” and “hound”,
“attack”, “attackdog” and “attack_dog”. Pictures
displaying people are described with singulars and
plurals such as “man” and “men” or “woman” and
“women”. Natural language processing methods and
more sophisticated knowledge representation schemes
are necessary to improve information retrieval performance
from affective multimedia databases (Horvat, Vuković
& Car, 2016). Document retrieval is possible only with
lexical relatedness measures since there are no criteria
to calculate semantic similarity between concepts in
query and document metadata descriptions (Horvat,
Vuković & Car, 2016). More expressive and formal
semantic models are not possible without modification
of database multimedia descriptors and introduction of
appropriate knowledge structures (Horvat, Bogunović
& Ćosić, 2014; Horvat et al., 2009). As has already been experimentally shown, inadequate semantic descriptors result in three negative effects which impair stimuli retrieval: 1) low recall, 2) low precision with high recall, or 3) vocabulary mismatch (Horvat,
Bogunović & Ćosić, 2014; Horvat, Vuković & Car,
2016).
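The retrieval problems above can be seen in a toy example. The tag sets and the miniature synonym map below are invented for illustration; they only mimic the IAPS-style keyword descriptions discussed in the text.

```python
# A few pictures with free-form keyword tags (invented for the example).
catalogue = {
    "pic_01": ["dog", "attack"],
    "pic_02": ["canine", "teeth"],
    "pic_03": ["hound"],
    "pic_04": ["man", "beach"],
}

def exact_search(query, catalogue):
    """Keyword-only retrieval: a document matches only on the literal tag."""
    return sorted(fid for fid, tags in catalogue.items() if query in tags)

# Exact matching finds only one of the three dog pictures (low recall).
print(exact_search("dog", catalogue))  # ['pic_01']

# A tiny hand-made thesaurus standing in for a lexical-relatedness resource.
SYNONYMS = {"dog": {"dog", "canine", "hound"}}

def expanded_search(query, catalogue):
    """Expand the query with synonyms before matching tags."""
    terms = SYNONYMS.get(query, {query})
    return sorted(fid for fid, tags in catalogue.items() if terms & set(tags))

print(expanded_search("dog", catalogue))  # ['pic_01', 'pic_02', 'pic_03']
```

Even this crude expansion recovers the lexical variants; without a knowledge structure relating "dog", "canine" and "hound", two of the three relevant pictures are simply never returned.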
2.2 Emotion models
Documents in affective multimedia databases are
described with at least one of the two emotion models:
categorical and dimensional (Peter & Herbon, 2006).
The dimensional model, which is also called
Circumplex model of affect (Posner, Russell &
Peterson, 2005) or Pleasure Arousal Dominance model
(PAD) (Mehrabian, 1996), is founded on theories of
emotion which propose that affective meaning can be
well characterized by a small number of dimensions.
Dimensions are chosen on their ability to statistically
characterize subjective emotional ratings with the least
number of dimensions possible (Bradley & Lang,
1994). These dimensions generally include one bipolar
or two unipolar dimensions that represent positivity
and negativity and have been labelled in various ways,
such as valence or pleasure. Moreover, usually
included is a dimension that captures intensity, arousal,
or energy level. In computer models these dimensions
are described with two orthogonal vectors called
valence and arousal which form a two-dimensional
Cartesian space. The lengths of the vectors, i.e. the dimensional emotion values or normative ratings, are real numbers between 1.0 and 9.0. Such a model is simple and easy to represent in digital systems.
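Under this representation, comparing two stimuli reduces to plane geometry in the valence-arousal space. A minimal sketch, with illustrative ratings:

```python
import math

# A stimulus in the dimensional model is a point (valence, arousal),
# each coordinate a normative rating in [1.0, 9.0].
def affective_distance(a, b):
    """Euclidean distance between two stimuli in valence-arousal space."""
    return math.dist(a, b)

pleasant_calm = (7.5, 3.0)       # e.g. a landscape picture (invented ratings)
unpleasant_aroused = (2.0, 7.0)  # e.g. a threat picture (invented ratings)
print(round(affective_distance(pleasant_calm, unpleasant_aroused), 2))  # 6.8
```

Distances like this are what makes the dimensional model convenient for digital systems: affectively similar stimuli cluster together, and stimuli sequences can be selected by their spread in the plane.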
In contrast to the dimensional theories, categorical
theories claim that the dimensional models,
particularly those using only two dimensions, do not
accurately reflect the neural systems underlying
emotional responses. Instead, supporters of these
theories propose that there are many emotions that are
universal across cultures and have an evolutionary and
biological basis (Ekman, 1992). Which discrete
emotions are included in these theories is a point of
contention, as is the choice of which dimensions to
include in the dimensional models. Most supporters of
discrete emotion theories agree that six primary
emotions exist: happiness, sadness, surprise, anger,
fear and disgust. Basic emotions can be represented as
areas inside the valence-arousal space. Their exact
shape and location are individually dependent and have
a developmental trajectory throughout a person’s
lifetime (Posner, Russell & Peterson, 2005).
Dimensional and categorical theories of affect can both
effectively describe emotion in digital systems but are
not mutually exclusive. Some repositories already
incorporate both theories of emotion, for example NAPS BE (Riegel et al., 2016). Annotations according to both
theories are useful because they provide a more
complete characterization of stimuli affect.
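One simple way to relate the two models in a digital system is to treat each basic emotion as a region of the valence-arousal plane and map a dimensional rating to the nearest region. The prototype coordinates below are illustrative assumptions only; as noted above, the actual shape and location of these areas are individually dependent rather than fixed points.

```python
import math

# Basic emotions sketched as prototype points in valence-arousal space.
# Coordinates are invented for illustration, not normative values.
PROTOTYPES = {
    "happiness": (8.0, 6.0),
    "sadness":   (2.5, 3.0),
    "fear":      (2.0, 7.5),
    "anger":     (2.5, 7.0),
    "surprise":  (6.0, 7.5),
    "disgust":   (2.0, 5.5),
}

def nearest_basic_emotion(valence, arousal):
    """Map a dimensional rating to the closest basic-emotion prototype."""
    return min(PROTOTYPES,
               key=lambda e: math.dist(PROTOTYPES[e], (valence, arousal)))

print(nearest_basic_emotion(7.8, 5.5))  # with these assumed prototypes: happiness
```

A sketch like this also illustrates why dual annotations are informative: two stimuli with identical discrete labels can still differ substantially in their valence and arousal values.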
It has been experimentally proven that visual
stimuli from the IAPS produce different responses in
skin conductance, startle reflex, breathing and heart rate, depending on their emotional content (Bradley et al., 2001a; Bradley et al., 2001b; Kukolja et al., 2014). The relationship of EEG
signals to emotion phenomena is being intensively
investigated (Wu et al., 2010). Recently, new open-access toolboxes are being developed to simplify deep analysis of physiological time-series such as ECG and
EEG (Jovic et al., 2016).
3 Catalogue of affective multimedia
databases
The landscape of affective multimedia databases is
very diverse. Most of the databases are pictorial and small. A substantial number of databases are used rarely, while some larger databases are employed frequently. To avoid the negative effects of habituation, emotion should be induced with visually and auditorily different stimuli, but with the same or compatible semantics (Coan & Allen, 2007). For this reason researchers often employ the largest affective multimedia databases, which are adequately diverse.
Smaller databases with fewer different stimuli are
employed only for small-scale or domain research
where the lack of stimuli variety is less important.
The catalogue of affective multimedia databases is
in Table 1. Apart from the POFA (Ekman & Friesen,
1975), the most important affective picture repository
is The International Affective Picture System (IAPS)
(Lang, Bradley & Cuthbert, 2008). Other significant
databases are: The Geneva affective picture database
(GAPED) (Dan-Glauser & Scherer, 2011), NAPS
(Marchewka et al., 2014), NAPS BE (Riegel et al.,
2016), NAPS ERO (Wierzba et al., 2015), as well as
DIsgust-RelaTed-Images (DIRTI) database
(Haberkamp et al., 2017), Set of Fear Inducing Pictures
(SFIP) (Michałowski et al., 2017), MAPS (Goodman,
Katz & Dretsch, 2016), Besançon Affective Picture
Set-Adolescents (BAPS-Ado) (Szymanska et al.,
2015), LIRIS-ACCEDE (Baveye et al., 2015), Geneva
faces and voices (GEFAV) database (Ferdenzi et al.,
2015), Child Affective Facial Expression Set (CAFE)
(LoBue & Thrasher, 2014), Affectiva-MIT Facial
Expression Dataset (AM-FED) (McDuff et al., 2013),
Emotional Movie Database (EMDB) (Carvalho et al.,
2012), DEAP: A Database for Emotion Analysis using
Physiological Signals (Koelstra et al., 2012), NIMH
Child Emotional Faces Picture Set (NIMH-ChEFS)
(Egger et al., 2011), Radboud Faces Database (RaFD)
(Langner et al., 2010), NimStim Face Stimulus Set
(Tottenham et al., 2009), CAS-PEAL Large-Scale
Chinese Face Database (Gao et al., 2008), International
Affective Digitized Sounds (IADS) (Bradley & Lang,
2007a), Affective Norms for English Texts (ANET)
(Bradley & Lang, 2007b), Karolinska Directed
Emotional Faces (KDEF) (Lundqvist, Flykt & Öhman,
1998), Japanese Female Facial Expression (JAFFE)
Database (Lyons et al., 1998) and, finally, Affective
Norms for English Words (ANEW) (Bradley & Lang,
1999).
Altogether, there are 3 video databases (VID), 8 picture databases (PIC), 8 picture face databases
(FAC), 1 video and face database (FAC VID), 1 video
and sound face database (FAC VID SND), 2 with text
(TXT) and 1 database only with sounds (SND). The
stimuli are annotated with the discrete emotion model
(DIS) or with the dimensional model (DIM), or both
(DIM DIS). IAPS and IADS stimuli were originally
annotated only with valence and arousal values but
later emotion norms were also added (DIS*).
Table 1. The list of the most often used collections of audio-visual stimuli, sorted from the newest to the oldest.

Name            Modality       Emotion    Stimuli No.  Published
DIRTI           PIC            DIM DIS        300      2017
NAPS/NAPS BE    PIC            DIM DIS      1,356      2014-2016
SFIP            PIC            DIM          1,400      2016
NAPS ERO        PIC            DIM            200      2015
MAPS            PIC            DIM            240      2015
BAPS-Ado        PIC            DIM DIS         93      2015
LIRIS-ACCEDE    VID            DIM          9,800      2015
GEFAV           FAC VID SND    DIS            111      2014
CAFE            FAC            DIS          1,192      2014
AM-FED          FAC VID        DIS            242      2013
EMDB            VID            DIM             52      2012
DEAP            VID            DIM            120      2012
GAPED           PIC            DIS            730      2011
NIMH-ChEFS      FAC            DIS            482      2011
RaFD            FAC            DIS            536      2010
NimStim         FAC            DIS            672      2009
CAS-PEAL        FAC            DIS         99,594      2008
IAPS            PIC            DIM DIS*     1,182      1997-2008
IADS            SND            DIM DIS*       111      1999-2007
ANET            TXT            DIM             60      1999-2007
KDEF            FAC            DIS          4,900      1998
JAFFE           FAC            DIS            213      1998
ANEW            TXT            DIM          3,109      1999
POFA            FAC            DIS            110      1976-1993
Some datasets had multiple revisions in the
designated time frame. Annotations are explained in
the text above. The newest repository is DIRTI, published in 2017; the largest is CAS-PEAL, with 99,594 pictures of 1,040 individuals; and the most expressively annotated is NAPS/NAPS BE/NAPS ERO as a single linked corpus. As can be seen, facial expression databases, or face databases, are the most numerous modality among affective multimedia databases. Besides being employed in emotion elicitation, these databases are also often used in computer vision for face recognition and face detection. For a more
detailed overview of these databases consult (Gao et
al., 2008). The presented catalogue is not exhaustive,
nevertheless it includes the most frequently referenced
databases. Additional lists of affective multimedia
databases which may be used for emotion estimation
can be found in (Baveye et al., 2015; Zeng et al., 2009;
Gao et al., 2008).
4 Conclusion
In this article we have presented a compact catalogue
of affective multimedia databases. In total, 24
repositories were listed containing 126,805 documents
of different modalities. The principal purpose of these
databases is to provoke emotional reactions in a
predictable manner. Their multimedia content is
described with the dimensional or discrete emotion
model, or sometimes both.
The wider field of affective multimedia databases
research provides several interesting directions for
investigation and development. Most importantly,
these databases should be interlinked and become more
practical through multifaceted queries combining
semantics, emotion and other available metadata.
Construction of new repositories must be facilitated, as
well as creation of personalized stimuli sequences.
Introduction of structured data sources has already
shown promising results in document retrieval quality
(Horvat, Grbin & Gledec, 2013). Knowledge discovery
of integrated data could provide statistically significant
indicators of hidden relationships between semantics
and emotion (Horvat, Popović & Ćosić, 2012).
Looking even further into the future, truly multimodal
corpora containing haptic and olfactory stimuli are
likely to be developed. Eventually, interactive virtual
reality stimuli will become common-place.
References
Baveye, Y., Dellandrea, E., Chamaret, C., & Chen, L.
(2015). Liris-accede: A video database for
affective content analysis. IEEE Transactions on
Affective Computing, 6(1), 43-55.
Bradley, M. M., & Lang, P. J. (1994). Measuring
emotion: the self-assessment manikin and the
semantic differential. Journal of behavior therapy
and experimental psychiatry, 25(1), 49-59.
Bradley, M. M., & Lang, P. J. (1999). Affective norms
for English words (ANEW): Instruction manual
and affective ratings (pp. 1-45). Technical report
C-1, the center for research in psychophysiology,
University of Florida.
Bradley, M. M., & Lang, P. J. (2000). Measuring
emotion: Behavior, feeling, and
physiology. Cognitive neuroscience of
emotion, 25, 49-59.
Bradley, M. M., & Lang, P. J. (2007a). The
International Affective Digitized Sounds (IADS-
2): Affective ratings of sounds and instruction
manual. University of Florida, Gainesville, FL,
Tech. Rep. B-3.
Bradley, M. M., & Lang, P. J. (2007b). Affective Norms for English Text (ANET): Affective ratings of text and instruction manual. Technical Report
D-1, University of Florida, Gainesville, FL.
Bradley, M. M., Codispoti, M., Cuthbert, B. N., &
Lang, P. J. (2001a). Emotion and motivation I:
defensive and appetitive reactions in picture
processing. Emotion, 1(3), 276.
Bradley, M. M., Codispoti, M., Sabatinelli, D., &
Lang, P. J. (2001b). Emotion and motivation II: sex
differences in picture processing. Emotion, 1(3),
300.
Brave, S., & Nass, C. (2003). Emotion in human
computer interaction. Human-Computer
Interaction, 53.
Carvalho, S., Leite, J., Galdo-Álvarez, S., &
Gonçalves, O. F. (2012). The emotional movie
database (EMDB): A self-report and
psychophysiological study. Applied
psychophysiology and biofeedback, 37(4), 279-
294.
Coan, J. A., & Allen, J. J. (2007). Handbook of
emotion elicitation and assessment. Oxford
university press.
Dan-Glauser, E. S., & Scherer, K. R. (2011). The
Geneva affective picture database (GAPED): a
new 730-picture database focusing on valence and
normative significance. Behavior research
methods, 43(2), 468.
Egger, H. L., Pine, D. S., Nelson, E., Leibenluft, E.,
Ernst, M., Towbin, K. E., & Angold, A. (2011).
The NIMH Child Emotional Faces Picture Set
(NIMH‐ChEFS): a new set of children's facial
emotion stimuli. International Journal of Methods
in Psychiatric Research, 20(3), 145-156.
Ekman, P. (1992). Are there basic emotions? Psychological Review, 99, 550-553.
Ekman, P., & Friesen, W. V. (1975). Pictures of
facial affect. Consulting psychologists press.
Ferdenzi, C., Delplanque, S., Mehu-Blantar, I.,
Cabral, K. M. D. P., Felicio, M. D., & Sander, D.
(2015). The Geneva faces and voices (GEFAV)
database. Behavior research methods, 47(4),
1110-1121.
Frantzidis, C. A., Bratsas, C., Papadelis, C. L.,
Konstantinidis, E., Pappas, C., & Bamidis, P. D.
(2010). Toward emotion aware computing: an
integrated approach using multichannel
neurophysiological recordings and affective visual
stimuli. IEEE Transactions on Information
Technology in Biomedicine, 14(3), 589-597.
Gao, W., Cao, B., Shan, S., Chen, X., Zhou, D.,
Zhang, X., & Zhao, D. (2008). The CAS-PEAL
large-scale Chinese face database and baseline
evaluations. IEEE Transactions on Systems, Man,
and Cybernetics-Part A: Systems and
Humans, 38(1), 149-161.
Goodman, A. M., Katz, J. S., & Dretsch, M. N.
(2016). Military Affective Picture System
(MAPS): A new emotion-based stimuli set for
assessing emotional processing in military
populations. Journal of behavior therapy and
experimental psychiatry, 50, 152-161.
Haberkamp, A., Glombiewski, J. A., Schmidt, F., &
Barke, A. (2017). The DIsgust-RelaTed-Images
(DIRTI) database: Validation of a novel
standardized set of disgust pictures. Behaviour
Research and Therapy, 89, 86-94.
Horvat, M., Popović, S., Bogunović, N., & Ćosić, K.
(2009). Tagging multimedia stimuli with
ontologies. In Information & Communication
Technology Electronics & Microelectronics
(MIPRO), 2009 32nd International Convention
on (pp. 203-208). IEEE.
Horvat, M., Popović, S., & Ćosić, K. (2012). Towards
semantic and affective coupling in emotionally
annotated databases. In MIPRO, 2012
Proceedings of the 35th International
Convention (pp. 1003-1008). IEEE.
Horvat, M., Grbin, A., & Gledec, G. (2013). WNtags:
A web-based tool for image labeling and retrieval
with lexical ontologies. Frontiers in artificial
intelligence and applications, 243, 585-594.
Horvat, M., Popović, S., & Ćosić, K. (2013).
Multimedia stimuli databases usage patterns: a
survey report. In Information & Communication
Technology Electronics & Microelectronics
(MIPRO), 2013 36th International Convention
on (pp. 993-997). IEEE.
Horvat, M., Bogunović, N., & Ćosić, K. (2014).
STIMONT: a core ontology for multimedia
stimuli description. Multimedia tools and
applications, 73(3), 1103-1127.
Horvat, M., Vuković, M., & Car, Ž. (2016).
Evaluation of keyword search in affective
multimedia databases. In Transactions on
Computational Collective Intelligence XXI (pp.
50-68). Springer Berlin Heidelberg.
Jovic, A., Kukolja, D., Jozic, K., & Horvat, M. (2016,
May). A web platform for analysis of multivariate
heterogeneous biomedical time-series: A preliminary report. In Systems, Signals and Image
Processing (IWSSIP), 2016 International
Conference on (pp. 1-4). IEEE.
Koelstra, S., Muhl, C., Soleymani, M., Lee, J. S.,
Yazdani, A., Ebrahimi, T., ... & Patras, I. (2012).
Deap: A database for emotion analysis; using
physiological signals. IEEE Transactions on
Affective Computing, 3(1), 18-31.
Kukolja, D., Popović, S., Horvat, M., Kovač, B., &
Ćosić, K. (2014). Comparative analysis of
emotion estimation methods based on
physiological measurements for real-time
applications. International journal of human-
computer studies, 72(10), 717-727.
Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2008).
International affective picture system (IAPS):
Affective ratings of pictures and instruction
manual. Technical report A-8.
Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D.
H., Hawk, S. T., & van Knippenberg, A. (2010).
Presentation and validation of the Radboud Faces
Database. Cognition and emotion, 24(8), 1377-
1388.
LoBue, V., & Thrasher, C. (2014). The Child
Affective Facial Expression (CAFE) set: Validity
and reliability from untrained adults. Frontiers in
psychology, 5.
Lundqvist, D., Flykt, A., & Öhman, A. (1998). The
Karolinska directed emotional faces (KDEF). CD
ROM from Department of Clinical Neuroscience,
Psychology section, Karolinska Institutet, 91-630.
Lyons, M., Akamatsu, S., Kamachi, M., & Gyoba, J.
(1998, April). Coding facial expressions with
gabor wavelets. In Automatic Face and Gesture
Recognition, 1998. Proceedings. Third IEEE
International Conference on (pp. 200-205). IEEE.
Marchewka, A., Żurawski, Ł., Jednoróg, K., &
Grabowska, A. (2014). The Nencki Affective
Picture System (NAPS): Introduction to a novel,
standardized, wide-range, high-quality, realistic
picture database. Behavior research
methods, 46(2), 596-610.
McDuff, D., Kaliouby, R., Senechal, T., Amr, M.,
Cohn, J., & Picard, R. (2013). Affectiva-mit facial
expression dataset (am-fed): Naturalistic and
spontaneous facial expressions collected.
In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition
Workshops (pp. 881-888).
Mehrabian, A. (1996). Pleasure-arousal-dominance: A
general framework for describing and measuring
individual differences in temperament. Current
Psychology, 14(4), 261-292.
Michałowski, J. M., Droździel, D., Matuszewski, J.,
Koziejowski, W., Jednoróg, K., & Marchewka, A.
(2016). The Set of Fear Inducing Pictures (SFIP):
Development and validation in fearful and
nonfearful individuals. Behavior Research
Methods, 1-13.
Palm, G., & Glodek, M. (2013). Towards emotion
recognition in human computer interaction.
In Neural nets and surroundings (pp. 323-336).
Springer Berlin Heidelberg.
Peter, C., & Herbon, A. (2006). Emotion
representation and physiology assignments in
digital systems. Interacting with
Computers, 18(2), 139-170.
Posner, J., Russell, J. A., & Peterson, B. S. (2005).
The circumplex model of affect: An integrative
approach to affective neuroscience, cognitive
development, and psychopathology. Development
and psychopathology, 17(03), 715-734.
Riegel, M., Żurawski, Ł., Wierzba, M., Moslehi, A.,
Klocek, Ł., Horvat, M., ... & Marchewka, A.
(2016). Characterization of the Nencki Affective
Picture System by discrete emotional categories
(NAPS BE). Behavior research methods, 48(2),
600-612.
Szymanska, M., Monnin, J., Noiret, N., Tio, G.,
Galdon, L., Laurent, E., ... & Vulliez-Coady, L.
(2015). The Besançon Affective Picture Set-
Adolescents (the BAPS-Ado): Development and
validation. Psychiatry research, 228(3), 576-584.
Tottenham, N., Tanaka, J. W., Leon, A. C., McCarry,
T., Nurse, M., Hare, T. A., ... & Nelson, C. (2009).
The NimStim set of facial expressions: judgments
from untrained research participants. Psychiatry
research, 168(3), 242-249.
Villani, D., & Riva, G. (2012). Does interactive media
enhance the management of stress? Suggestions
from a controlled study. Cyberpsychology,
Behavior, and Social Networking, 15(1), 24-30.
Wierzba, M., Riegel, M., Pucz, A., Leśniewska, Z.,
Dragan, W. Ł., Gola, M., ... & Marchewka, A.
(2015). Erotic subset for the Nencki Affective
Picture System (NAPS ERO): cross-sexual
comparison study. Frontiers in psychology, 6,
1336.
Wu, D., Courtney, C. G., Lance, B. J., Narayanan, S.
S., Dawson, M. E., Oie, K. S., & Parsons, T. D.
(2010). Optimal arousal identification and
classification for affective computing using
physiological signals: Virtual reality stroop
task. IEEE Transactions on Affective
Computing, 1(2), 109-118.
Zeng, Z., Pantic, M., Roisman, G. I., & Huang, T. S.
(2009). A survey of affect recognition methods:
Audio, visual, and spontaneous expressions. IEEE
transactions on pattern analysis and machine
intelligence, 31(1), 39-58.
... Affectively annotated multimedia purposely developed for regulated elicitation of emotional states represents a special type of digital content. Because of the purpose of their usage, these multimedia files are often referred to as stimuli and they are stored in affective multimedia databases [1]. In addition to the study of human emotion mechanisms, generation and appraisal of emotions, such databases have many other practical applications in the study of perception, memory, attention, and reasoning [1]. ...
... Because of the purpose of their usage, these multimedia files are often referred to as stimuli and they are stored in affective multimedia databases [1]. In addition to the study of human emotion mechanisms, generation and appraisal of emotions, such databases have many other practical applications in the study of perception, memory, attention, and reasoning [1]. ...
... In the literature it is also called the circumplex model of emotion, the PAD (Pleasure-Arousal-Dominance) model, or the Russell model of emotion [18] [19]. The dimensional model is the most often employed model of emotion for annotation of multimedia stimuli [1]. The underlying dimensional theory of emotion proposes that affective meaning can be well characterized by a small number of dimensions. ...
Conference Paper
Full-text available
Emotions are an omnipresent and important factor in the interaction and communication between people. Since emotions are an indispensable part of human life, it would accelerate the progress of artificial intelligence and other fields of science that require data about emotions if they could be adequately described by computer systems. Today there are many different theories of affect, but few of them are used in affective computing. Other areas of computing also benefit from structured and expressive data models of the affective domain, such as human-computer interaction and brain-computer interfaces. Typical tasks include automated recognition and analysis of emotional states, mental fatigue, individual motivation, vigilance and stress resilience. In this paper four often used models of emotion and cognitive behavior are listed and their properties explained: discrete, dimensional, appraisal and action tendency models. For each model, algorithms are provided for similarity measures that can be used to determine the relatedness between different stimulation and estimation artefacts in their respective emotion spaces. The goal of this article is to help professionals find the optimal emotion model for their research and quickly become familiar with data modelling of affective states.
... In contemporary affective multimedia databases, the structure of emotion is described with at least one of the two prevalent models of affect: discrete and dimensional [13]. These two models can both effectively describe emotion in digital systems but are not mutually exclusive. ...
... Currently there are many databases indexing affective information in multimedia [13], but to the best of our knowledge, the Nencki Affective Picture System (NAPS) is currently the largest database of visual stimuli with a comprehensive set of accompanying normative ratings and physical parameters of images (e.g., luminance) [14]. Furthermore, the NAPS has a typical architecture and file structure and employs a data model generic to almost all affective picture databases [13]. It has also been relatively recently developed. ...
Article
Full-text available
Clustering is a very popular machine-learning technique that is often used in data exploration of continuous variables. In general, there are two problems commonly encountered in clustering: (1) the selection of the optimal number of clusters, and (2) the undecidability of the affiliation of border data points to neighboring clusters. We address both problems and describe how to solve them in application to affective multimedia databases. In the experiment, we used the unsupervised learning algorithm k-means and the Nencki Affective Picture System (NAPS) dataset, which contains 1356 semantically and emotionally annotated pictures. The optimal number of centroids was estimated, using the empirical elbow and silhouette rules, and validated using the Monte-Carlo simulation approach. Clustering with k = 1-50 centroids is reported, along with dominant picture keywords and descriptive statistical parameters. Affective multimedia databases, such as the NAPS, have been specifically designed for emotion and attention experiments. By estimating the optimal cluster solutions, it was possible to gain deeper insight into affective features of visual stimuli. Finally, a custom software application for the study was developed in the Python programming language. The tool uses the scikit-learn library for the implementation of machine-learning algorithms, data exploration and visualization. The tool is freely available for scientific and non-commercial purposes.
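The elbow rule used above can be sketched without the original NAPS ratings (which are distributed separately for research use). This standard-library toy, with synthetic valence/arousal points standing in for real normative data, shows how within-cluster inertia drops sharply up to the true number of clusters and then flattens; the `kmeans` helper is a plain illustrative implementation, not the scikit-learn one used in the cited study.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on 2-D (valence, arousal) points; returns (centroids, inertia)."""
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2 + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # Recompute centroids; keep the old one if a cluster emptied.
        centroids = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    inertia = sum(min((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for cx, cy in centroids)
                  for p in points)
    return centroids, inertia

# Synthetic stand-in for valence/arousal ratings: two obvious groups.
rnd = random.Random(1)
points = [(rnd.gauss(2, 0.3), rnd.gauss(6, 0.3)) for _ in range(50)] + \
         [(rnd.gauss(7, 0.3), rnd.gauss(3, 0.3)) for _ in range(50)]

# Elbow rule: inertia drops sharply up to the true k, then flattens.
for k in (1, 2, 3, 4):
    print(k, round(kmeans(points, k)[1], 1))
```

In practice scikit-learn's `KMeans` (`inertia_` attribute) and `silhouette_score` would replace the hand-rolled loop.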
... Apart from digital objects, these databases contain meta-data about their high-level semantics and the expected emotion that will be induced in a subject when exposed to a contained document. Two important features distinguish affective multimedia databases from other multimedia repositories: (1) the purpose of the multimedia documents and (2) the emotional representation of the multimedia documents [8]. In our approach, binary classifiers based on lift charts are applied to improve precision and recall in text-based concept retrieval from these databases. ...
... The most popular ones, which are free for use by researchers, are: the International Affective Picture System (IAPS) [9,10], the Nencki Affective Picture System (NAPS) [12] (with its extensions NAPS Basic Emotions [13] and NAPS Erotic Subset [14]), the Geneva Affective Picture Database (GAPED) [15], the Open Library of Affective Foods (OLAF) [16], the DIsgust-RelaTed-Images (DIRTI) [17], the Set of Fear-Inducing Pictures (SFIP) [18], the Open Affective Standardized Image Set (OASIS) [19], and the most recent one, the Children-Rated Subset to the NAPS [20]. In addition, a recent list of affective picture databases with different emotionally-annotated multimedia formats and exemplars of the research conducted with these databases is available in [8]. An example of affective pictures from the OASIS database, with semantic tags included, is shown in Figure 2. The OASIS employs similar emotional and semantic models to the IAPS database. ...
Article
Full-text available
Evaluation of document classification is straightforward if complete information on the documents’ true categories exists. In this case, the rank of each document can be accurately determined and evaluated. However, in an unsupervised setting, where the exact document category is not available, lift charts become an advantageous method for evaluation of the retrieval quality and categorization of ranked documents. We introduce lift charts as binary classifiers of ranked documents and explain how to apply them to the concept-based retrieval of emotionally annotated images as one of the possible retrieval methods for this application. Furthermore, we describe affective multimedia databases on a representative example of the International Affective Picture System (IAPS) dataset, their applications, advantages, and deficiencies, and explain how lift charts may be used as a helpful method for document retrieval in this domain. Optimization of lift charts for recall and precision is also described. A typical scenario of document retrieval is presented on a set of 800 affective pictures labeled with an unsupervised glossary. In the lift charts-based retrieval using the approximate matching method, the highest attained accuracy, precision, and recall were 51.06%, 47.41%, 95.89%, and 81.83%, 99.70%, 33.56%, when optimized for recall and precision, respectively.
... Affective multimedia databases are not relational databases in the standard sense but file-based multimedia repositories [6]. They contain pictures, sounds, text or video clips annotated with semantic and affective content, intended for the controlled elicitation of emotional states. ...
... Because of this purpose, the documents in affective multimedia databases are also called stimuli, so these databases are also referred to, for short, as stimuli databases or multimedia stimuli databases. They are most often used in psychology, psychophysiology and neurology for research on emotions and attention [6] [7]. Two characteristic features distinguish affective multimedia databases from other multimedia data repositories: (1) the purpose of the multimedia documents, and (2) the emotion models describing the multimedia documents [3]. ...
... In the newest databases, such as NAPS [8][9], semantics is described with multiple keywords and a discrete semantic category. All databases use discrete and dimensional emotion models for modelling and describing emotions [6] [10]. Affective labels describe the expected emotional state of subjects exposed to the documents. ...
Article
Full-text available
An effective, accurate and fast multimedia stimuli generation process, implemented as a computer system, is very useful to domain specialists in the fields of psychology, psychiatry and neuroscience for selecting and displaying excitation sequences, and as an aid in estimating the cognitive and affective parameters of the subject. The primary purpose of this paper is to present a computer system for multimedia stimuli generation and neurofeedback developed by the authors. The computer system is used with affective multimedia databases and is intended for research on emotions and attention, and for some types of psychotherapy with a behavioral component. The computer system was tested in an emotion elicitation experiment in a virtual reality environment with 6 subjects. The Emotiv EPOC+ 14-channel mobile EEG and the HTC Vive virtual reality device were used. The pictorial stimuli were downloaded from the NAPS affective multimedia database, and two sequences with a total of 20 pictures were generated to elicit the basic emotions of sadness and happiness. The results of the experiment confirm the correctness of the design and implementation of the multimedia stimuli generation computer system.
... Affective pictures have become increasingly popular in psychological, neuroscientific and clinical research on emotions over the last two decades (Horvat, 2017). According to the Web of Science database, the number of articles that cite publications retrieved under the topic "affective picture" rose from 34 articles in the year 2000 to 3,590 articles in the year 2018. ...
... SAM is a non-verbal method that permits intercultural comparisons and the inclusion of participants at a very young age (Bradley and Lang, 1994). To complement and extend the IAPS database, e.g., to increase the number of images with specific content, a number of additional databases have been developed in recent years (e.g., for EEG studies; Dan-Glauser and Scherer, 2011; Horvat, 2017). In the present study, we did not attempt to analyze all of these datasets because they are too numerous and diverse. ...
Article
Full-text available
Affective pictures are widely used in studies of human emotions. The objects or scenes shown in affective pictures play a pivotal role in eliciting particular emotions. However, affective processing can also be mediated by low-level perceptual features, such as local brightness contrast, color or the spatial frequency profile. In the present study, we asked whether image properties that reflect global image structure and image composition affect the rating of affective pictures. We focused on 13 global image properties that were previously associated with the esthetic evaluation of visual stimuli, and determined their predictive power for the ratings of five affective picture datasets (IAPS, GAPED, NAPS, DIRTI, and OASIS). First, we used an SVM-RBF classifier to predict high and low ratings for valence and arousal, respectively, and achieved a classification accuracy of 58–76% in this binary decision task. Second, a multiple linear regression analysis revealed that the individual image properties account for between 6 and 20% of the variance in the subjective ratings for valence and arousal. The predictive power of the image properties varies for the different datasets and type of ratings. Ratings tend to share similar sets of predictors if they correlate positively with each other. In conclusion, we obtained evidence from non-linear and linear analyses that affective pictures evoke emotions not only by what they show, but also by how they show it. Whether the human visual system actually uses these perceptive cues for emotional processing remains to be investigated.
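The "variance explained" figures above correspond to R-squared; for a single image property this is just the squared Pearson correlation between the property and the rating. A toy sketch with entirely made-up numbers (the luminance and valence values below are invented for illustration, not taken from any of the cited datasets):

```python
def r_squared(x, y):
    """Share of rating variance explained by one image property
    (squared Pearson correlation = R^2 of simple linear regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov * cov / (vx * vy)

# Made-up example: mean luminance of 6 pictures vs. their valence ratings.
luminance = [0.21, 0.35, 0.48, 0.55, 0.63, 0.80]
valence = [3.1, 4.0, 4.4, 5.2, 5.0, 6.3]
print(round(r_squared(luminance, valence), 3))
```

With 13 properties, as in the study, one would fit a multiple regression instead; the single-predictor version shown here is the simplest building block of that analysis.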
... Patient-personalized real-time multisensory stimuli generation uses semantically and emotionally annotated pictures, natural and artificial sounds, recorded speech, written text messages, video-clips and films or virtual reality synthetic environments, which reflects the underlying complexity of potential chronic psychopathology. Usage of existing and dedicated large databases of images and/or sounds, which are semantically (Deng et al., 2009) and emotionally annotated (Horvat, 2017) via AI-tools related to ontologies (Horvat et al., 2012, 2014) and automated semantic segmentation (Chen et al., 2018), should optimize the delivery of the most relevant semantic and emotional stimuli content to the ex-COVID-19 patient during the intervention course, which is adapted to the patient's reactions in real time (Ćosić et al., 2010a). Real-time computation of multimodal neurophysiological, speech and facial/oculometric features, as well as real-time computation of optimal control multimodal closed-loop feedback stimuli, is a particular software and hardware challenge. ...
Article
Full-text available
The COVID-19 pandemic has adverse consequences on human psychology and behavior long after initial recovery from the virus. These COVID-19 health sequelae, if undetected and left untreated, may lead to more enduring mental health problems, and put vulnerable individuals at risk of developing more serious psychopathologies. Therefore, an early distinction of such vulnerable individuals from those who are more resilient is important to undertake timely preventive interventions. The main aim of this article is to present a comprehensive multimodal conceptual approach for addressing these potential psychological and behavioral mental health changes using state-of-the-art tools and means of artificial intelligence (AI). Mental health COVID-19 recovery programs at post-COVID clinics based on AI prediction and prevention strategies may significantly improve the global mental health of ex-COVID-19 patients. Most COVID-19 recovery programs currently involve specialists such as pulmonologists, cardiologists, and neurologists, but there is a lack of psychiatrist care. The focus of this article is on new tools which can enhance the current limited psychiatrist resources and capabilities in coping with the upcoming challenges related to widespread mental health disorders. Patients affected by COVID-19 are more vulnerable to psychological and behavioral changes than non-COVID populations and therefore they deserve careful clinical psychological screening in post-COVID clinics. However, despite significant advances in research, the pace of progress in prevention of psychiatric disorders in these patients is still insufficient. Current approaches for the diagnosis of psychiatric disorders largely rely on clinical rating scales, as well as self-rating questionnaires that are inadequate for comprehensive assessment of ex-COVID-19 patients' susceptibility to mental health deterioration. These limitations can presumably be overcome by applying state-of-the-art AI-based tools in diagnosis, prevention, and treatment of psychiatric disorders in the acute phase of disease to prevent more chronic psychiatric consequences.
... We expect that establishing the ground-truth will greatly assist in personalization and more effective learning. These multimedia documents are available in affective multimedia databases and can be used to establish individual baseline emotional responses [17]. ...
Conference Paper
Full-text available
Because of the global COVID-19 pandemic, online learning has become the dominant teaching method. Moreover, a wide range of e-learning pedagogies are rapidly gaining importance, and in some cases emerging as the preferred approach in education over the traditional methods and techniques of classroom teaching. However, much remains to be done to efficiently assess student engagement and the learning curve. In this regard, we have proposed the construction of an intelligent agent for personalized and adaptive assessment of learning performance based on methods for automated estimation of attention and emotion. We report on the first progress towards the development of the intelligent agent. Three classifiers were used in parallel to detect information about the progress of student engagement. Object detection in video is accomplished with YOLOv3, emotion detection from facial expressions with the PAZ software library, and detection of head, arms, and upper-body orientation and position with the OpenPose system. The NimStim facial expression database, WIDER Attribute Dataset, and UPNA Head Pose Database were used for experimental validation of the individual classifiers. Our system attained the highest precision and recall of 79.13% and 94.15%, respectively, and the highest success rate of 59.56% in recognition of 6 discrete emotions from facial expressions.
... The most used instrument to assess emotion is the International Affective Picture System (IAPS), which uses color photographs of a wide range of semantic categories as stimuli (Lang, Bradley, & Cuthbert, 2008). By using databases of standardized stimuli, the emotion elicitation effects can be easily replicated, allowing an easier and more reliable comparison between studies (Horvat, 2017). ...
Thesis
Full-text available
The Emotional Movie Database (EMDB) is a database that elicits emotions using 40-second clips with no audio as stimuli. Since its creation in 2012 the EMDB has been cited 55 times: in 37 articles, 10 doctoral theses, 5 conference papers, 1 review, 1 master's thesis and 1 book chapter. An objective of this thesis is to evaluate the impact of the EMDB in the scientific community. Moreover, the data from the stage 1 assessment of the new categories that are going to be included in the EMDB, namely social pain, social inclusion, extreme sports and pollution, will also be presented. The results of the pre-validation of experiment 2 suggest that the clips for the new categories are distributed, as expected, along the affective space.
Conference Paper
Full-text available
Sequences of multimedia documents are successfully used in laboratory settings and in practice to deliberately elicit specific emotional reactions. To ensure a successful experiment, the emotion-provoking stimuli must be selected carefully and presented to the participants in a specific order. Temporal aspects (the duration of individual stimuli within sequences, the duration of whole sequences, and the pauses between stimuli and sequences) must also be chosen with great care. Construction of effective sequences is a delicate and time-consuming activity which requires significant manual effort from a group of domain experts. To facilitate this task we propose a new ontology called StimSeqOnt for formal description of stimuli sequences. The ontology is written in the OWL DL language and provides a formal and sufficiently expressive representation of affective concepts, high-level semantics, stimuli documents, multimedia formats and repositories used. In StimSeqOnt all relevant metadata about stimuli sequences may be stored as formal concepts. If available, elicited physiological data of previously exposed participants are available for comparison, thereby enabling prediction of emotional responses. The StimSeqOnt is designed in compliance with ontology guidelines to facilitate sharing and reuse of expert knowledge.
Article
Full-text available
Emotionally charged pictorial materials are frequently used in phobia research, but no existing standardized picture database is dedicated to the study of different phobias. The present work describes the results of two independent studies through which we sought to develop and validate this type of database—a Set of Fear Inducing Pictures (SFIP). In Study 1, 270 fear-relevant and 130 neutral stimuli were rated for fear, arousal, and valence by four groups of participants; small-animal (N = 34), blood/injection (N = 26), social-fearful (N = 35), and nonfearful participants (N = 22). The results from Study 1 were employed to develop the final version of the SFIP, which includes fear-relevant images of social exposure (N = 40), blood/injection (N = 80), spiders/bugs (N = 80), and angry faces (N = 30), as well as 726 neutral photographs. In Study 2, we aimed to validate the SFIP in a sample of spider, blood/injection, social-fearful, and control individuals (N = 66). The fear-relevant images were rated as being more unpleasant and led to greater fear and arousal in fearful than in nonfearful individuals. The fear images differentiated between the three fear groups in the expected directions. Overall, the present findings provide evidence for the high validity of the SFIP and confirm that the set may be successfully used in phobia research.
Article
Full-text available
Research on the processing of sexual stimuli has proved that such material has high priority in human cognition. Yet, although sex differences in response to sexual stimuli were extensively discussed in the literature, sexual orientation was given relatively little consideration, and material suitable for relevant research is difficult to come by. With this in mind, we present a collection of 200 erotic images, accompanied by their self-report ratings of emotional valence and arousal by homo- and heterosexual males and females (n = 80, divided into four equal-sized subsamples). The collection complements the Nencki Affective Picture System (NAPS) and is intended to be used as stimulus material in experimental research. The erotic images are divided into five categories, depending on their content: opposite-sex couple (50), male couple (50), female couple (50), male (25) and female (25). Additional 100 control images from the NAPS depicting people in a non-erotic context were also used in the study. We showed that recipient sex and sexual orientation strongly influenced the evaluation of erotic content. Thus, comparisons of valence and arousal ratings in different subject groups will help researchers select stimuli set for the purpose of various experimental designs. To facilitate the use of the dataset, we provide an on-line tool, which allows the user to browse the images interactively and select proper stimuli on the basis of several parameters. The NAPS ERO image collection together with the data are available to the scientific community for non-commercial use at http://naps.nencki.gov.pl.
Article
Full-text available
The Nencki Affective Picture System (NAPS; Marchewka, Żurawski, Jednoróg, & Grabowska, Behavior Research Methods, 2014) is a standardized set of 1,356 realistic, high-quality photographs divided into five categories (people, faces, animals, objects, and landscapes). NAPS has been primarily standardized along the affective dimensions of valence, arousal, and approach–avoidance, yet the characteristics of discrete emotions expressed by the images have not been investigated thus far. The aim of the present study was to collect normative ratings according to categorical models of emotions. A subset of 510 images from the original NAPS set was selected in order to proportionally cover the whole dimensional affective space. Among these, using three available classification methods, we identified images eliciting distinguishable discrete emotions. We introduce the basic-emotion normative ratings for the Nencki Affective Picture System (NAPS BE), which will allow researchers to control and manipulate stimulus properties specifically for their experimental questions of interest. The NAPS BE system is freely accessible to the scientific community for noncommercial use as supplementary materials to this article.
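Combined dimensional and basic-emotion norms like those of NAPS BE let researchers filter stimuli on both axes at once. A minimal sketch of such a selection step, using invented records whose field names (`valence`, `arousal`, `emotion`) and picture IDs are illustrative assumptions rather than the actual NAPS BE schema:

```python
# Hypothetical normative records in the shape NAPS-style datasets use:
# one row per picture with dimensional means and a dominant discrete emotion.
stimuli = [
    {"id": "People_001", "valence": 7.4, "arousal": 4.2, "emotion": "happiness"},
    {"id": "Animals_017", "valence": 2.1, "arousal": 6.8, "emotion": "fear"},
    {"id": "Objects_042", "valence": 5.0, "arousal": 2.9, "emotion": "neutral"},
    {"id": "Faces_008", "valence": 2.6, "arousal": 5.9, "emotion": "sadness"},
]

def select_stimuli(records, emotion, valence_range, arousal_range):
    """Pick stimuli matching a discrete emotion and dimensional bounds."""
    v_lo, v_hi = valence_range
    a_lo, a_hi = arousal_range
    return [r["id"] for r in records
            if r["emotion"] == emotion
            and v_lo <= r["valence"] <= v_hi
            and a_lo <= r["arousal"] <= a_hi]

# Low-valence, high-arousal fear stimuli for an anxiety experiment.
print(select_stimuli(stimuli, "fear", (1.0, 3.0), (5.0, 9.0)))  # → ['Animals_017']
```

In a real study the `stimuli` list would be loaded from the published normative rating tables instead of being hard-coded.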
Article
Selecting appropriate stimuli is a major challenge of affective research. Although several standardized databases for affective pictures exist, none of them focus on discrete emotions such as disgust. Validated pictures inducing discrete emotions are still limited, and this presents a problem for researchers interested in studying different facets of disgust. In this paper, we introduce the DIsgust-RelaTed-Images (DIRTI) picture set. The set consists of 240 disgust-inducing pictures divided into six categories (food, animals, body products, injuries/infections, death, and hygiene). Additionally, we included 60 matched neutral pictures (10 per category). All pictures were rated by 200 participants on nine-point rating scales measuring disgust, fear, valence, and arousal. The present validation study covered a wide age range (18-75 years) with a balanced number of participants in each decade of life. For each picture, we provide separate ratings on the four scales for men and women. In addition to the original pictures, we also provide a luminance-matched version for experiments that require control of the physical properties of the pictures. The standardized DIRTI picture set allows researchers to choose from a wide set of disgust-inducing pictures and may enhance researchers' ability to draw comparisons between studies on disgust. (Download DIRTI picture set: http://dx.doi.org/10.5281/zenodo.167037).
Article
Multimedia documents such as pictures, videos, sounds and text provoke emotional responses of different intensity and polarity. These stimuli are stored in affective multimedia databases together with description of their semantics based on keywords from unsupervised glossaries, expected emotion elicitation potential and other important contextual information. Affective multimedia databases are important in many different areas of research, such as affective computing, human-computer interaction and cognitive sciences, where it is necessary to deliberately modulate emotional states of individuals. However, restrictions in the employed semantic data models impair retrieval performance measures thus severely limiting the databases’ overall usability. An experimental evaluation of multi-keyword search in affective multimedia databases, using lift charts as binomial classifiers optimized for retrieval precision or sensitivity, is presented. Suggestions for improving expressiveness and formality of data models are elaborated, as well as introduction of dedicated ontologies which could lead to better data interoperability.
Article
Emotional pictures are commonly used as visual stimuli in a number of research fields. Choosing relevant visual stimuli to induce emotion is fundamental in attachment and affective research. Attachment theory provides a theoretical basis for the understanding of emotional and relational problems, and is especially related to two specific emotions: distress and comfort. The lack of normalized visual stimuli soliciting these attachment-related emotions has led us to create and validate a new photographic database: the Besançon Affective Picture Set-Adolescents. This novel stimulus set is composed of 93 photographs, divided into four categories: distress, comfort, joy-complicity and neutral. A group of 140 adolescents rated the pictures with the Self-Assessment Manikin system, yielding three dimensions: valence, emotional arousal, and dominance. The pictures were also assessed, using a continuous scale, for different emotions (distress, hate, horror, comfort, complicity and joy). The ANOVAs for arousal and the Kruskal–Wallis tests for valence and dominance showed strong effects for category. However, for comfort and complicity, the dimensions of valence and dominance were not significantly different, while results for arousal showed no significant difference between complicity and distress. Our study provides a tool that allows researchers to select visual stimuli to investigate attachment-related emotion processing in adolescence.
Chapter
The recognition of human emotions by technical systems is regarded as a problem of pattern recognition. Here methods of machine learning are employed which require substantial amounts of 'emotionally labeled' data, because model based approaches are not available. Problems of emotion recognition are discussed from this point of view, focusing on problems of data gathering and also touching upon modeling of emotions and machine learning aspects.