An Empirical Study of Proactive Multimedia Therapy Contents for Public: Production
Design and Cognitive Response Measurements
Irene Eunyoung Lee,*1 Charles-François Latchoumane,#2 Jaeseung Jeong*,#3
*Department of Graduate School of Culture and Technology, Korea Advanced Institute of Science and Technology (KAIST),
Republic of Korea
#Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Republic of Korea
1irenelee@kaist.ac.kr, 2hsj@raphe.kaist.ac.kr, 3jsjeong@kaist.ac.kr
ABSTRACT
Whether consciously or not, people in today’s society experience a range
of multi-sensory stimulation through the diverse media they encounter,
and these experiences bring changes in feelings and moods. Multimedia
contents, including arts and therapies, are increasingly being developed,
yet their cognitive mechanisms and effects remain only vaguely understood.
Can short multimedia contents (60 seconds long) that employ abstract
visuals and non-lyrical music really induce target emotions in audiences?
If so, is there any common thread in audience responses to these
purpose-driven creations? In this study, under the hypothesis that it is
possible to create multi-modal contents that induce specific emotions or
moods, we first surveyed several psychology and therapy fields (e.g.,
music, color, image, and motion-graphic research) for guidelines to design
three specific types of positive emotion elicitation (i.e., Relaxation,
Happy, and Vigorous), and produced audio-visual contents based on the
expressive attributes we identified. We then investigated the responses of
12 subjects (6 males and 6 females, mean age 22 years) in terms of EEG
power differences between rest and movie-watching sections, alpha
asymmetry, cognitive performance during a visual congruent continuous
performance task (cCPT, an attentional task), and self-evaluation
questionnaires. We conclude that emotion/mood induction using multi-modal
contents can bring about changes in attention that are visible in the
behavioral data, but milder in the electrophysiological response.
INTRODUCTION
People living hurried lives in modern urban society try to regain
equilibrium of mind and body through various refreshing activities within
reach of daily life, such as cultural experiences (e.g., visiting galleries
or watching performances) or exercise (e.g., yoga, meditation, or fitness).
Whether driven by the practical time constraints of such activities or by
the cultural-evolutionary side effects of technological development,
interesting new attempts have appeared in new-media (or multimedia) art
that take a therapeutic stance, delivering positive, meditative narratives
through various artistic expressions (e.g., Color-Therapy Paintings
<http://www.thecnm.co.kr/> and The Gift of Inner Peace
<http://www.giftofpeace.org/home.html>). Moreover, since the 1980s a
genuinely new artistic field has emerged that realizes organically linked
audio-video works built on abstract visuals and musical composition (e.g.,
John Whitney’s Digital Harmony and Brian Eno’s 77 Million Paintings).
However, there has been little scholarly, scientifically formulated
examination of what arousal these new art forms generate in their
audiences or of how these new expressions are perceived (or appreciated).
It is easy to see how intricate an empirical study of proactive multimedia
therapy contents would be: A) it requires scrupulous care in establishing
convincing production rules, verifying the validity of the newly created
multimedia therapy contents, and examining the elicited effects, and B) it
involves many influencing factors (e.g., musical, visual, personal, and
situational factors) that furthermore interact across modalities (Bolivar,
Cohen, & Fentress, 1994; Iwamiya, 1994; Rosar, 1994; Schlesinger; Sirius &
Clarke, 1994; Somsaman, 2004). In addition, devising sensible creative
delimitations for the stimulus materials (e.g., preserving consistency of
artistic expression across the three contents) is an important control
factor for managing variance in both the contents and the measurements.
Despite all these complexities, we resolutely commenced our experiment to
take a more realistic and practical approach to examining the perception
and associated effects of audio-visual contents when a Korean cosmetics
conglomerate commissioned us to produce three types of multimedia therapy
contents for the public, to be distributed as part of the brand’s
customer-relations and promotional services.
Our main hypothesis was that we could create emotion/mood-inducing
abstract audio-visual contents through artistic creativity by using
appropriately selected expressive attributes of structural properties in
the auditory, visual, and moving-image domains, based on psychological
evidence from previous studies. This paper explains the cognitive and
psychological basis of the particular expressive attributes chosen as
guidelines for creating the three emotion-eliciting audio-visual contents,
describes how the effects of the contents were measured with behavioral
tests and EEG recordings, and offers a careful discussion of the findings.
RELATED STUDIES
Studies of the functions and multi-dimensionality of musical and visual
expression in multimedia stimuli have approached the topic from diverse
perspectives, including musicology, music psychology, music cognition,
aesthetics, film, animation, and multimedia communication (Alpert &
Alpert, 1990; Bolivar et al., 1994; A. Cohen, 1999; A. J. Cohen, 2005;
Field, 2007; Sirius & Clarke, 1994). Most previous studies have deployed
music and visual material drawn from pre-existing artworks. In the real
world, however, creative people in the entertainment and advertising
industry strive to produce appropriate new music/sound and new visuals,
and to implement all the newly created, appealing modalities as a holistic
whole that conveys a message effectively to viewers. It therefore seemed
reasonable to run an empirical study that could add a fresh, practical
dimension to psychological and cognitive research on multimedia perception.
In other words, we wanted a study that could give insight into the
mechanisms of musical and visual perception through viewers’ appreciation
of newly created, purpose-driven contents produced on the basis of
accumulated knowledge about the affective structural components of
musical and visual stimuli for eliciting particular moods.
Music and painting are known for their ability to elicit emotion. The
emotionality of art is sometimes discussed as something that alters the
listener’s affective state, and at other times as emotion embedded in the
work itself regardless of its effect upon the listener (Gabrielsson &
Lindstrom, 2001). Music is better at eliciting simple positive and
negative emotions than complex semantic nuances of emotion (Ortony, Clore,
& Collins, 1988), and the structural design of a musical piece leads
listeners to expect and predict changes in the intensity of felt emotion
(Sloboda & Juslin, 2001). Like music, the composition of structural
elements in a painting can evoke emotions and aesthetic preferences in
viewers. Empathetic responses to images and works of art have been studied
in theories of physiognomic and synesthetic perception (Cupchik, 1995;
Marks, 1996; Wapner, 2005), and neuroscientific research has recently
begun to unveil their basis (Freedberg & Gallese, 2007; Jacobsen,
Schubotz, Hofel, & Cramon, 2006; Vartanian & Goel, 2004).
Some recent studies suggest that diverse music-and-video contents can be
sub-categorized into a few lineages according to the relative structural
emphasis placed on the video and music elements, in order to study the
impact of each modality (A. J. Cohen, 2005; Ebendorf, 2007; Field, 2007;
Kendall, 2005). In other words, there are 1) many audio-visual contents in
which video dominates audio (e.g., most music videos, films, and
interactive games), 2) many others in which audio dominates video (e.g.,
music-driven computer-generated animations in Windows Media Player and
VJed movies in clubs), and 3) a few multimedia contents that intermittently
present (marginally) balanced emphasis on both domains (e.g., abstract
animation). Studies of auditory-visual interplay, modality-dependent
impact, and cross-modal similarity have accelerated in recent years
(Ebendorf, 2007; Field, 2007; Marks, 1996).
CHALLENGES OF THE STUDY
To test our hypothesis, we organized the study as follows: 1) set up
structural expressive attributes and guidelines for designing contents for
three specific types of positive emotion elicitation, 2) produce
multimedia contents based on those attributes to elicit distinct positive
mood changes through artistic expression, 3) confirm the mood changes by
self-report survey, and 4) investigate whether the mood changes are
visible in behavioral and electrophysiological tests. Some necessary
considerations and delimitations of the study, identified during
experiment design, are discussed briefly below.
A. Defining Perceptual Attributes
Three distinct target emotions/moods suitable for positive therapy
contents were chosen: happy, relaxation, and vigorous, ordered here from
the simplest emotion to the most complex semantic nuance. The contents
were to be published on a Korean web portal as both streaming and
downloadable media, so we had to assume that, once distributed, they could
reach any internet user at any time. One of the most important aims of our
production design was therefore to create contents that could induce
positive mood/emotion changes in as many general viewers as possible. In
other words, the contents should carry effective, communicative narratives
that can be enjoyed by a broad, unspecified audience. For the best
congruence between video and music meanings, the two must be well
synchronized with each other (Iwamiya, 1994); that is, the contents should
have balanced emphasis and a conformed narrative across the visual and
sound modalities, exploiting structural attributes of the visuals (e.g.,
colours, shapes, lines, rhythm, and movement direction) and of the music
(e.g., tempo, mode, harmony, instrumentation, lyrics, and melodic contour).
With these conditions in mind, we decided to create images resembling
abstract paintings with colour variation rather than representational
images, and to use non-verbal music rather than music with lyrics, as
better suited to a broad range of unknown audiences. These choices were
intended to keep open the possibility that the core narratives (or
feelings) of the contents could be conveyed across cultural and social
boundaries.
B. Too Many Influencing Factors
Creating public multimedia therapy contents that elicit a target emotion
is challenging, because the artistic preferences involved in appreciating
multimedia contents can depend on personality traits, previous experience
of art, and many other environmental variables (Furnham & Walker, 2001;
Rentfrow & Gosling, 2003). Moreover, an empirical study that is
interdisciplinary between art (i.e., creative expression in music and
visuals) and cognitive research (i.e., perception and emotion) faces
several challenges and difficulties owing to its inherently ambiguous
nature. Nevertheless, we were able to extract useful structural attributes
of music and visuals relevant to our study from previous psychotherapy and
perception research. For music, we designed structural guidelines with
attributes related to time (e.g., tempo, rhythm, and melodic contour),
pitch, and texture (e.g., instrumentation, harmony, and mode). For example,
happiness can be conveyed by fast (upbeat) tempo, rising tempo, high pitch,
and major mode (A. Cohen, 1999; Gabrielsson & Lindstrom, 2001; Levi, 1982).
For visuals, we devised structural attributes related to time, movement,
colour, and figure. For example, happiness can be conveyed by rhythmic
changes in speed, many scene changes, and colours such as red, yellow, and
orange (Birren, 1978; Jungmin, 2004; Sahng-hee, 2004; Walker, 2002; Zettl,
1973). More detailed explanations of how we approached eliciting the
target emotions through audio-visual contents using these structural
attributes follow in the next section.
METHODS
A. Subjects
For this study, we recruited 6 males (23 ± 2.5 years old) and 6 females
(21.5 ± 2.3 years old) from Chungnam University, Daejeon, South Korea.
Participants certified that they did not suffer from any mental disorder
and had no history of drug abuse of any sort. The subjects provided signed
consent forms after receiving an explanation of the purpose and procedure
of the experiment. The study was approved by the Institutional Review
Board of KAIST (Korea Advanced Institute of Science and Technology, South
Korea).
B. Stimulus Materials
Three multimedia contents stimulating both the visual and the auditory
senses were designed by applying recognized psychological principles of
music, visuals, and colour to elicit the target moods. The overall look
and feel of the three movies was carefully controlled in its artistic
expression. The visual elements were designed collaboratively by two
visual artists, and the sound elements by a single sound artist. The
visual and audio productions were interrelated from idea development
through final mastering to achieve effective synchronization and quality
control. All final movies were rendered in MOV format (Sorenson Video 3)
at 400 x 300 pixels and approximately one minute in length for online
distribution, and were later converted to AVI format for research use. The
structural elements of each movie are outlined in Tables 1 and 2.
Table 1. Structural Elements of Music (movie 1 = Relaxation, movie 2 = Happy, movie 3 = Vigorous).

Main Instrument: Relaxation - Daegeum (Korean Flute); Happy - Female Vocal; Vigorous - Electronic Lead Sound.
Melodic Shape: Relaxation - Descending Toward End; Happy - Ascending Toward End; Vigorous - Building Up & Down.
Meter, Speed (bpm): Relaxation - 4/4, Freely Slow (approx. 60); Happy - 4/4, Fast (112); Vigorous - 4/4, Fast (174 -> 87).
Musical Style: Relaxation - Korean, Orchestral Pads; Happy - Latin, Percussive; Vigorous - Drum and Bass, Electronic.
Ending Type: Relaxation - Fade Ending; Happy - Button Ending; Vigorous - Button Ending.
C. Self-Assessment Survey
In the absence of autonomic variables (e.g., electrocardiogram, skin
conductance, resistance), we validated the elicited mood change using a
self-assessment questionnaire based on bipolar adjective pairs rated on a
nine-point scale, inspired by Brauchli, Michel, and Zeier (1995), to
assess arousal and subjective well-being. Subjects were asked to rate
their mood after watching each movie without being told the intended mood
of the movie.
Table 2. Structural Elements of Visual (movie 1 = Relaxation, movie 2 = Happy, movie 3 = Vigorous).

Main Motive (Shape): Relaxation - Water Droplets, Round; Happy - Water Droplets, Round; Vigorous - Water Fountains, Jets of Water.
Color Theme: Relaxation - Deep and Solitude; Happy - Warm; Vigorous - Burst Out.
Motion-graphic Direction & Points: Zoom In/Out, Fast Cuts, Contrasts between Images.
Color Palette: Relaxation - blue, green, purple, white, brown; Happy - red, orange, yellow; Vigorous - blue, green, yellow, white.
Movie Image Sample: (sample frames from each movie; images not reproduced here).
Number of Scene Changes: Relaxation - Small; Happy - Many; Vigorous - Many.
Rhythmic Changes: Relaxation - Slow and Weak, Less Scene Changes; Happy - Strong and Fast Changes, Long Term of Exposure; Vigorous - Many Scene Changes, Contrasting between Different Images.
D. Experimental Design and CPT
For each of the three movies, the participants first observed a 20-second
eyes-open resting state while fixating a visual cue, then performed a
congruent continuous performance task (CPT); after a brief resting period,
they watched one of the movies and then performed the congruent CPT again.
The three movies were presented in random order so that the experiment
would be free of sequence effects. In each CPT, the subject was asked to
press a single button if a target letter appeared after a designated cue,
and to withhold the response if the letter following the cue was not the
target. Each letter was presented for 200 ms with an interval of 1 s; 80
letters were presented in total, including about 16 target letters and
three to five false cues. Subjects with very poor CPT performance (i.e.,
indicating insufficient attention) were excluded, leaving 6 to 9 subjects
for the statistical analysis.
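For concreteness, the sketch below (Python, entirely illustrative; the original stimulus software, letter set, and exact trial order are not reported in the paper) generates a letter sequence with the stated composition and computes the response-time index analysed later, the standard deviation of correct-response reaction times.

```python
import random
import statistics

# Hypothetical cue/target letters; the actual letters used are not reported.
CUE, TARGET = "A", "X"
FILLERS = list("BCDEFGHJKLMNP")  # non-cue, non-target letters (assumed)

def make_cpt_sequence(n_targets=16, n_false_cues=4, n_fillers=40, seed=0):
    """Build one CPT letter list: n_targets cue->target pairs, n_false_cues
    cue->non-target pairs, and n_fillers single letters (32 + 8 + 40 = 80
    letters, matching the description above). Each letter would be shown
    for 200 ms with a 1-s inter-stimulus interval."""
    rng = random.Random(seed)
    items = ([(CUE, TARGET)] * n_targets
             + [(CUE, rng.choice(FILLERS)) for _ in range(n_false_cues)]
             + [(rng.choice(FILLERS),) for _ in range(n_fillers)])
    rng.shuffle(items)
    return [letter for item in items for letter in item]

def rt_variability(responses):
    """responses: list of (is_correct, rt_in_seconds) tuples for one CPT run.
    Returns the standard deviation of correct-response reaction times,
    the behavioural index used in this study."""
    rts = [rt for ok, rt in responses if ok]
    return statistics.stdev(rts) if len(rts) > 1 else 0.0
```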
E. EEG
1) EEG Recordings
For each mental state (i.e., resting with eyes open, watching the movie,
and CPT), the participants’ electroencephalograms (EEGs) were recorded and
digitized at 1000 Hz from 17 leads placed according to the international
10-20 system using a NeuroScan EEG recording system. In addition, we
recorded horizontal and vertical eye movements and blinks (HEO, VEO). The
EEG time series were band-pass filtered with a Butterworth (order II)
filter at 1-45 Hz, and artifacts resulting from eye movements, eye blinks,
and head movements were removed using independent component analysis
(ICA). We analysed the participants’ EEGs at the resting state (baseline)
preceding the display of each stimulus and during the display of the
movie.
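As a point of reference, a comparable preprocessing chain can be written in a few lines with MNE-Python; the sketch below is an assumption-laden illustration (the file name, channel labels, filter order, and number of ICA components are ours, not the authors'), not the original pipeline.

```python
import mne

# Hypothetical NeuroScan continuous file: 17 EEG leads plus HEO/VEO, 1000 Hz.
raw = mne.io.read_raw_cnt("subject01.cnt", preload=True)   # file name assumed
raw.set_channel_types({"HEO": "eog", "VEO": "eog"})         # labels assumed

# 1-45 Hz band-pass with an IIR Butterworth filter approximating the one
# described in the text (the order is an assumption).
raw.filter(l_freq=1.0, h_freq=45.0, method="iir",
           iir_params=dict(order=2, ftype="butter"))

# ICA-based removal of ocular components, guided by the HEO/VEO channels.
ica = mne.preprocessing.ICA(n_components=15, random_state=97)
ica.fit(raw)
eog_components, _ = ica.find_bads_eog(raw)
ica.exclude = eog_components
clean = ica.apply(raw.copy())
```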
2) EEG Analysis
We calculated the individual alpha frequency (IAF) (Doppelmayr, Klimesch,
Pachinger, & Ripper, 1998; Sammler, Grigutsch, Fritz, & Koelsch, 2007) of
each participant from the baseline preceding each movie and used the mean
value to define the range of each frequency band under study. We obtained
IAF = 10.23 ± 0.22 Hz (Relaxation), IAF = 9.94 ± 0.25 Hz (Happiness), and
IAF = 10.28 ± 0.28 Hz (Vigorous).
We calculated spectral power using an FFT and extracted the power in the
theta (IAF x 0.4 to IAF x 0.6 Hz), alpha 1 (IAF x 0.6 to IAF x 0.8 Hz),
alpha 2 (IAF x 0.8 to IAF x 1 Hz), alpha 3 (IAF x 1 to IAF x 1.2 Hz), and
beta (IAF x 1.2 to 30 Hz) bands. Moreover, to allow for delayed activation
of the autonomic system in response to emotional elicitation, we split the
analysis of each stimulus into two 22-s sections (watching Part I and Part
II). We compared the power difference between the baseline and Part I, and
between the baseline and Part II. We discarded one subject because of
excessive noise in the data.
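The band definitions above translate directly into code. The following NumPy/SciPy sketch computes Welch power spectra for one 22-s segment and integrates them over the IAF-anchored bands; the Welch parameters are assumptions, since the paper does not report its FFT settings.

```python
import numpy as np
from scipy.signal import welch

FS = 1000  # sampling rate in Hz

def iaf_band_powers(segment, iaf, fs=FS):
    """segment: (n_channels, n_samples) EEG array for one 22-s section.
    Returns a dict mapping band name -> per-channel power, with band edges
    anchored to the individual alpha frequency (IAF) as defined above."""
    bands = {
        "theta":  (0.4 * iaf, 0.6 * iaf),
        "alpha1": (0.6 * iaf, 0.8 * iaf),
        "alpha2": (0.8 * iaf, 1.0 * iaf),
        "alpha3": (1.0 * iaf, 1.2 * iaf),
        "beta":   (1.2 * iaf, 30.0),   # upper edge taken as 30 Hz (assumption)
    }
    freqs, psd = welch(segment, fs=fs, nperseg=2 * fs)  # 2-s windows (assumed)
    return {name: np.trapz(psd[:, (freqs >= lo) & (freqs < hi)],
                           freqs[(freqs >= lo) & (freqs < hi)], axis=1)
            for name, (lo, hi) in bands.items()}

# Example: per-channel power change from baseline to watching Part I.
# delta = {b: iaf_band_powers(part1, iaf)[b] - iaf_band_powers(baseline, iaf)[b]
#          for b in ("theta", "alpha1", "alpha2", "alpha3", "beta")}
```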
In addition, we focused on alpha-band power in the frontal region
(Schmidt & Trainor, 2001; Tsang, Trainor, Santesso, Tasker, & Schmidt,
2001) in order to quantify the valence and intensity of the emotions
elicited while watching the movies.
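The paper does not give the asymmetry formula it used; a common convention, shown in the hedged sketch below, is the difference in log alpha power between homologous right and left frontal leads (e.g., F4 minus F3), where a positive value indicates relatively lower left-frontal alpha and hence greater left-frontal activation, typically read as positive valence.

```python
import numpy as np

def frontal_alpha_asymmetry(alpha_power, ch_names, left="F3", right="F4"):
    """alpha_power: per-channel alpha-band power (e.g., from iaf_band_powers).
    Returns ln(right) - ln(left), the usual frontal asymmetry index; this is
    not necessarily the exact quantity computed by the authors. Because alpha
    power varies inversely with cortical activation, positive values suggest
    greater relative left-frontal activation (associated with positive valence)."""
    return (np.log(alpha_power[ch_names.index(right)])
            - np.log(alpha_power[ch_names.index(left)]))
```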
F. Statistical Analysis
We assessed the differences between the three movies over the 13 mood
characteristics by first analyzing the between-group factor “movie” and
the within-group factor “mood characteristic” with a repeated-measures
ANOVA; as a post hoc test we used a one-way ANOVA with Welch correction
for homogeneity of variance and LSD/Bonferroni correction for multiple
comparisons. We assessed the difference in the behavioral response to the
CPT (the single index being the standard deviation of response time)
before and after watching each movie using the between-subject factor
“trial” (before vs. after watching the movie) and the within-subject
factor “movie”. For the post hoc analysis, we applied a non-parametric
Mann-Whitney test to the CPT index “standard deviation of the response
time”.
For the EEG study, we used a MANOVA to test for effects on the change in
power, with the “channels” as dependent variables and “movie” and “time”
(watching Part I vs. Part II) as within-subject effects. When the factor
“time” had a significant effect, we ran post hoc multiple Student’s
t-tests comparing Part I and Part II to baseline, with a Holm-Sidak
correction of the p-values.
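To make the analysis pipeline concrete, the sketch below shows how comparable tests could be set up with common Python tooling (statsmodels for the repeated-measures ANOVA and SciPy for the post hoc Mann-Whitney test); the file names and column layout are assumptions for illustration, and the repeated-measures call is a simplification of the mixed design described above, not the authors' original analysis scripts.

```python
import pandas as pd
from scipy.stats import mannwhitneyu
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one row per subject x movie x mood scale,
# with columns: subject, movie, mood, rating.
ratings = pd.read_csv("mood_ratings.csv")

# Repeated-measures ANOVA over the mood ratings with within-subject factors
# "movie" and "mood".
rm_result = AnovaRM(ratings, depvar="rating", subject="subject",
                    within=["movie", "mood"]).fit()
print(rm_result.anova_table)

# Post hoc Mann-Whitney test on the CPT index (SD of response time) before
# vs. after watching one movie. Hypothetical table: subject, movie, trial, sd_rt.
cpt = pd.read_csv("cpt_sd_rt.csv")
relax = cpt[cpt.movie == "relaxation"]
u_stat, p_val = mannwhitneyu(relax.loc[relax.trial == "before", "sd_rt"],
                             relax.loc[relax.trial == "after", "sd_rt"],
                             alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_val:.3f}")
```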
RESULTS
A. Survey Result
The rANOVA showed a significant effect of the within-subject factor “mood”
(F(4.478) = 9.492, p < 0.001) and of the interaction “mood x movie”
(F(8.956) = 4.849, p < 0.001); no effect was found for the between-subject
factor “group”.
Figure 1. Self-rating of participants’ mood. Significant differences
between the three pairwise combinations of movies [1-2, 1-3, 2-3] are
indicated; * and ** denote p < 0.05 and p < 0.01, respectively. Note: the
scales from “confidence” to “balance” include all subjects; the others
include only 8 subjects (missing data).
As shown in Fig. 1, the ratings of confidence (F(2,33) = 5.389, p = 0.009),
balance (F(2,21) = 6.256, p = 0.007), exhaustion (F(2,21) = 3.878,
p = 0.037), happiness (F(2,21) = 7.069, p = 0.004), tension
(F(2,21) = 9.026, p = 0.001), energy (F(2,21) = 11.071, p < 0.001), and
tiredness (F(2,21) = 10.947, p = 0.001) differed significantly between
movies. The corrected multiple-comparison tests showed that most of the
significant differences lay between movie 1 (relaxation) and movie 2
(happiness), and between movie 1 (relaxation) and movie 3 (vigorous).
B. EEG Result
1) Power difference between rest and watching part I & II:
We globally found a decrease of power in all band and all
region for after watching the movies. We plotted the figure
considering from top to bottom movie relaxation, happiness
and vigorous, respectively, and from left to right, watching part
I and part II, respectively.
a) Theta:
We found a sole significant decrease between resting and
watching part II at the lead Pz (central-parietal). The lead Fp1
for the movie happiness showed almost significant decrease in
power.
Note: interestingly the movie vigorous showed an increase in
the central leads, higher in the frontal region during the part II.
250
Since the difference is quite important, we will investigate
more about the reason of the non significance of this effect.
b) Alpha 1, 2, and 3:
All three alpha sub-bands showed highly significant decreases in power
after watching the movies relaxation and happiness, mainly in the
posterior region. This effect was strongest in the alpha 2 band,
especially for the movie happiness.
c) Beta:
We found a significant decrease of beta-band power in the central
posterior region for the movie happiness, and a nearly significant
decrease in the central frontal region for relaxation.
2) Alpha asymmetry:
A slight alpha asymmetry is visible in the figures for almost all movies,
with a tendency toward lower power over the left hemisphere and higher
power over the right hemisphere, consistent with the positive valence of
the movies. Moreover, most of the significant differences (decreases in
power) were found in the left frontal region (not the right frontal
region), which may reflect this tendency. However, the significance of the
difference between the two hemispheres was not consistent: we found a
single significant left-right difference (in the expected direction) for
movie 2 (happiness), in the alpha 1 band while watching part II (Student’s
t-test, p = 0.03).
Fig. 2: Power difference between baseline and first half, and between
baseline and second half, while watching movie 1.
Fig. 3: Power difference between baseline and first half, and between
baseline and second half, while watching movie 2.
Fig. 4: Power difference between baseline and first half, and between
baseline and second half, while watching movie 3.
C. Behavioral Result
We analyzed the subjects’ behavioral responses by examining response speed
during the CPT before and after watching each of the three movies, using
correct answers only; the standard deviation of response time was chosen
as the most sensitive index of attention change (van den Bosch, Rombouts,
& van Asma, 1996). We discarded subjects with excessive commission or
omission errors in order to suppress interference from subjects with too
little attention during the task, which can also indicate lack of sleep or
undiagnosed cognitive dysfunction; 4 subjects were discarded from the
behavioral analysis by this criterion.
We found that the interaction “trial x movie” (F(1.564) = 6.572,
p = 0.009) significantly influenced the subjects’ response speed, but no
effect was found for the main factor “trial” or the within-subject factor
“movie”. In the post hoc tests, we found that the standard deviation of
response speed decreased significantly after watching movie 1 (Relaxation;
F(14) = 4.637, p = 0.049) and increased significantly after movie 2
(Happiness; F(14) = 6.898, p = 0.020).
DISCUSSION
The purpose of the present EEG study was to gain insight into the
production design of proactive therapy contents and the brain mechanisms
underlying mood elicitation by multimedia contents. We investigated the
effects of rule-based, newly created positive emotional multimedia
contents on subjective, behavioral, and psychophysiological indicators of
emotional processing.
The subjects rated movie 1 (relaxation) quite moderately on all the moods,
including sensations of calm, lightness, peacefulness, balance, and
comfort, but also reported a certain exhaustion, tiredness, or dullness,
indicating a sense of low energy with no specific tension or relaxation.
Movie 2 (happiness) induced a strong sensation of confidence,
cheerfulness, and lightness, and moderate energy, liveliness, freshness,
and comfort. However, the subjects also reported an unsafe (high) and
unbalanced sensation for movie 2, which seems somewhat contradictory to
the other sensations. For movie 3 (vigorous), the subjects reported
moderate confidence and comfort, and high energy, liveliness, lightness,
and freshness.
According to the response times during the CPT, movies 1 and 2 seem to
have influenced the subjects’ behavioral responses the most. We observed a
slight decrease in mean response time for both movies (i.e., higher
attention), together with a substantial and significant decrease in the
standard deviation of response time for movie 1, indicating greater
reactivity of the subjects, and a significant increase in the standard
deviation of response time for movie 2, indicating lower reliability in
the response pattern.
The overall EEG finding is a decrease of power in all bands, most
pronounced for the movies relaxation and happiness. More detailed analyses
could consider gender differences and the specific characteristics of each
movie (e.g., whether the strongest climax falls in part I or part II). We
will investigate further the large theta increase at the central leads
observed during part II of movie 3. We will also analyze our EEG
recordings further using sLORETA to examine each target emotion and its
region of interest (ROI); we expect to see activation in the limbic system
and the anterior cingulate cortex (ACC) if the contents indeed elicited
emotions in the audience. Our sLORETA findings may be discussed in the
oral presentation and in the final journal report of the study.
These three pilot contents were published on an online portal site in
Korea in late 2006 and drew a notably high view count (over 250,000 views).
The online promotional campaign built around the multimedia contents
produced the most successful banner click-through rates and UCC
(user-created content) follow-up registrations on the event homepage, and
the product recommendation rate rose from 6% to 9.7%
(DaumCommunications, 2007). From a communication point of view, the study
therefore produced very successful results in terms of both viewer
response and marketing effect.
Limitations: The present study included only audio-visual stimulation and
a rest condition. The lack of a “music only” condition limits our ability
to investigate the brain mechanisms of multi-modal (i.e., auditory-visual)
content perception. The CPT analysis included correct responses only, and
subjects with very poor CPT scores were discarded to ensure that the
remaining subjects had maintained sufficient attention during the task.
Interpreting the power differences during the CPT, and the different EEG
power responses to each movie in relation to that movie’s particular
characteristics, could be investigated further with a focus on the
emphasis given to audio-visual synchronization.
CONCLUSION
We conclude that the intended responses were correctly elicited for movies
1 and 2, with coherent physical responses (CPT, EEG), and for movies 2 and
3 during watching (EEG). In the EEG, pleasant emotions were accompanied by
an asymmetry with lower power over the left hemisphere and higher power
over the right hemisphere. These findings are consistent with frontal
activation/emotion valence models (Davidson, 1995; Schmidt & Trainor,
2001; Tsang et al., 2001); however, they contrast with reports of no
hemispheric lateralization in previous EEG emotion studies (Hagemann,
Naumann, Becker, Maier, & Bartussek, 1998; Sammler et al., 2007).
ACKNOWLEDGMENT
We thank the research crew members for the EEG recording, data analysis,
and production design of this study: Dong-il Jung, Jaewon Lee, Hyosub Lee,
Kisub Lee, Jiyun Shin, Soyun Song, and Sungmin Park. The study was
sponsored by Laneige.
REFERENCES
Alpert, J. I., & Alpert, M. I. (1990). Music influences on mood and
purchase intentions. Psychology and Marketing, 7(2), 109-133.
Birren, F. (1978). Color Psychology and Color Therapy: Citadel
Press.
Bolivar, V. J., Cohen, A. J., & Fentress, J. C. (1994). Semantic and
formal congruency in music and motion pictures: Effects on the
interpretation of visual action. Psychomusicology, 13(1), 28-59.
Brauchli, P., Michel, C. M., & Zeier, H. (1995). Electrocortical,
autonomic, and subjective responses to rhythmic audio-visual
stimulation. Int J Psychophysiol, 19(1), 53-66.
Cohen, A. (1999). The functions of music in multimedia: A cognitive
approach. In S. W. Lee (Ed.), Music, Mind, and Science (pp.
53--69). Seoul, Korea: Seoul National University Press.
Cohen, A. J. (2005). How music influences the interpretation of film
and video: Approaches from experimental psychology. Selected
Reports in Ethnomusicology: Perspectives in Systematic
Musicology, 12, 15-36.
Cupchik, G. C. (1995). Emotion in aesthetics: Reactive and reflective
models. Poetics, 23(1-2), 177-188.
Daum Communications. (2007). UCC marketing strategy based on moving image
UCC promotion case studies. Paper presented at the 15th Internet Marketing
Forum, Posteel Tower, Seoul.
Davidson, R. J. (1995). Cerebral asymmetry, emotion, and affective
style. Brain asymmetry, 361-387.
Doppelmayr, M., Klimesch, W., Pachinger, T., & Ripper, B. (1998).
Individual differences in brain dynamics: important implications
for the calculation of event-related band power. Biological
Cybernetics, 79(1), 49-57.
Ebendorf, B. (2007). The Impact of Visual Stimuli on Music
Perception. Haverford College, Haverford, PA.
Field, B. (2007). The impact of visual stimuli on music perception.
Haverford College, Haverford, PA.
Freedberg, D., & Gallese, V. (2007). Motion, emotion and empathy in
esthetic experience. Trends in Cognitive Sciences, 11(5),
197-203.
Furnham, A., & Walker, J. (2001). The influence of personality traits,
previous experience of art, and demographic variables on artistic
preference. Personality and Individual Differences, 31(6),
997-1017.
Gabrielsson, A., & Lindstrom, E. (2001). The influence of musical
structure on emotional expression. Music and emotion: Theory
and research, 223?248.
Hagemann, D., Naumann, E., Becker, G., Maier, S., & Bartussek, D.
(1998). Frontal brain asymmetry and affective style: A conceptual
replication. Psychophysiology, 35(04), 372-388.
Iwamiya, S. (1994). Interactions between auditory and visual
processing when listening to music in an audio-visual context: I.
Matching II. Audio quality. Psychomusicology, 13(1-2), 133-153.
Jacobsen, T., Schubotz, R. I., Hofel, L., & Cramon, D. Y. (2006).
Brain correlates of aesthetic judgment of beauty. Neuroimage,
29(1), 276-285.
Jungmin, J. (2004). The Study for Expression of Motion Graphics
Using Color Emotion. The Graduate School of Ewha Womans
University, Seoul, Korea.
Kendall, R. A. (2005). Music and Video Iconicity: Theory and
Experimental Design. Journal of PHYSIOLOGICAL
ANTHROPOLOGY and Applied Human Science, 24(1), 143-149.
Levi, D. S. (1982). The structural determinants of melodic expressive
properties. Journal of Phenomenological Psychology, 13(1),
19-44.
Marks, L. E. (1996). On Perceptual Metaphors. Metaphor and Symbol,
11(1), 39-66.
Ortony, A., Clore, G. L., & Collins, A. (1988). The Cognitive
Structure of Emotions: Cambridge University Press.
Rentfrow, P. J., & Gosling, S. D. (2003). The do re mi’s of everyday
life: The structure and personality correlates of music preferences.
Journal of Personality and Social Psychology, 84(6), 1236-1256.
Rosar, W. H. (1994). Film Music and Heinz Werner’s Theory of
Physiognomic Perception. Psychomusicology, 13(1-2), 154-165.
Sahng-hee, B. (2004). Study on Sensitive Expression in Motion
Graphics. The Graduate School of Ewha Womans University,
Seoul, Korea.
Sammler, D., Grigutsch, M., Fritz, T., & Koelsch, S. (2007). Music
and emotion: Electrophysiological correlates of the processing of
pleasant and unpleasant music. Psychophysiology, 44(2),
293-304.
Schlesinger, L. B. Physiognomic perception: empirical and theoretical
perspectives.
Schmidt, L. A., & Trainor, L. J. (2001). Frontal brain electrical
activity (EEG) distinguishes valence and intensity of musical
emotions. Cognition and Emotion, 15(4), 487-500.
Sirius, G., & Clarke, E. F. (1994). The perception of audiovisual
relationships: A preliminary study. Psychomusicology, 13,
119-132.
Sloboda, J. A., & Juslin, P. N. (2001). Psychological perspectives on
music and emotion. Music and emotion: Theory and research,
71-104.
Somsaman, K. (2004). The Perception of Emotions in Multimedia an
Empirical Test of Three Models of Conformance and Contest.
Case Western Reserve University, Cleveland, OH.
Tsang, C. D., Trainor, L. J., Santesso, D. L., Tasker, S. L., & Schmidt,
L. A. (2001). Frontal EEG Responses as a Function of Affective
Musical Features. Annals of the New York Academy of Sciences,
930(1), 439.
van den Bosch, R. J., Rombouts, R. P., & van Asma, M. J. (1996).
What determines continuous performance task performance?
Schizophr Bull, 22(4), 643-651.
Vartanian, O., & Goel, V. (2004). Neuroanatomical correlates of
aesthetic preference for paintings. NeuroReport, 15(5), 893.
Walker, M. (2002). The Power of Color: B. Jain Publishers.
Wapner, S. (2005). The Sensory-Tonic Field Theory of Perception.
Heinz Werner and Developmental Science.
Zettl, H. (1973). Sight, Sound, Motion; Applied Media Aesthetics.