Behaviour & Information Technology, 2019
https://doi.org/10.1080/0144929X.2019.1657952
© 2019 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. Open Access (CC BY-NC-ND 4.0).
Musical sonification supports visual discrimination of color intensity
Niklas Rönnberg
Division for Media and Information Technology, Linköping University, Linköping, Sweden
ABSTRACT
Visual representations of data introduce several possible challenges for the human visual perception system in perceiving brightness levels. Overcoming these challenges might be simplified by adding sound to the representation. This is called sonification. As sonification provides additional information to the visual information, sonification could be useful in supporting the visual perception. In the present study, usefulness (in terms of accuracy and response time) of sonification was investigated with an interactive sonification test. In the test, participants were asked to identify the highest brightness level in a monochrome visual representation. The task was performed in four conditions, one with no sonification and three with different sonification settings. The results show that sonification is useful, as measured by higher task accuracy, and that the participant's musicality facilitates the use of sonification, with better performance when sonification was used. The results were also supported by subjective measurements, where participants reported an experienced benefit of sonification.
ARTICLE HISTORY
Received 13 June 2019
Accepted 10 August 2019
KEYWORDS
Interactive sonification; musical elements; multimodality; visual perception; visualisation
1. Introduction
Visualisation is a common way to present research data and share research results with other researchers, as well as with the public. It offers a way to communicate complex relations in a single glance and is convenient for data exploration. The primary goal of visual data exploration is to support a user in formulating questions or hypotheses about the data. These hypotheses may be useful for further stages of the data exploration process, such as cluster detection, important feature detection, or pattern and rule detection (Simoff, Böhlen, and Mazeika 2008). Seeing data visually also aids idea generation, shows the shape of the data, possibly reveals correlations between variables, and is a useful first step in the analysis process (Simoff, Böhlen, and Mazeika 2008), but only if the visual perception manages to convey the information needed, because as complexity in the visual representation increases, interpretation becomes more problematic and challenging. Apart from the sheer amount of data on the visual display, which might present a considerable difficulty for a user, there are also challenges for the visual perception that can impair comprehension of the visual representation.
In order to facilitate visual analysis of large data sets and to reduce visual clutter in the representation, it is common to use transparency renderings based on data density (see for example Artero, de Oliveira, and Levkowitz 2004; Ellis and Dix 2007). This is typically achieved by rendering semi-transparent objects and additively blending them together (see an example of a parallel coordinates plot with transparency rendering of density in Figure 1). This method can reveal structures and relationships in data that otherwise would have been missed. However, using transparency renderings for density information might be challenging for the perception, for example when perceiving simultaneous brightness contrast (Ware 2013) and when distinguishing between brightness levels. Thus, these renderings make it difficult to observe actual numbers of blended objects for different areas in the density representation, as well as making it hard to find areas of similar density or find areas of highest density.

Figure 1. An example of a parallel coordinates plot where transparency rendering based on the data density is used. (Illustration courtesy of Jimmy Johansson.)
The challenges, such as distinguishing between brightness levels, related to the inherent functions of visual perception can never be overcome by visualisation alone. However, they could be addressed by adding sound as a complementary modality to the visual representation. The combination of the visual and the aural modalities should make it possible to design more effective multimodal visual representations, as compared to when using visual stimuli alone (Rosli and Cabrera 2015). Sonification, the transformation of data into sound, can be used to supplement the visual modality when a user studies a visualisation of data, to further support understanding of the visual representation (Kramer et al. 2010; Hermann, Hunt, and Neuhoff 2011; Pinch
and Bijsterveld 2012; Franinovic and Serafin 2013). Traditionally, sonification is audification of data, where data might be converted to a sound-wave or translated into frequencies (Hermann, Hunt, and Neuhoff 2011; Pinch and Bijsterveld 2012). However, it could be questioned to what extent this type of sonification is able to convey information and meaning to a user. Going beyond plain audification of data (Philipsen and Kjærgaard 2018), sonification can be approached by deliberately designing and composing musical sounds. Even though the concept of sonification for data exploration is not new (see for example Flowers, Buhman, and Turnage 2005), there are few examples of studies that evaluate visualisation and sonification as a combination (see for example Flowers, Buhman, and Turnage 1997; Nesbitt and Barrass 2002; Kasakevich et al. 2007; Riedenklau, Hermann, and Ritter 2010; Rau et al. 2015). These studies suggest that there is a benefit of sonification in connection to visualisation; however, few studies explored the appreciation of the sounds in the sonification or the use of musical sounds. Musical sounds are here referred to as deliberately designed and composed sounds, based on a music-theoretical and aesthetic approach.
Sonification using musical sounds is interesting as the use of musical elements gives good control over the design of the sounds and enables the deployment of potentially useful musical components such as timbre, harmonics, melody, rhythm, tempo, and amplitude (Seashore 1967; Deliège and Sloboda 1997; Juslin and Laukka 2004; Levitin 2006). Previous studies have shown promising results for the concept of musical sonification (Rönnberg and Johansson 2016; Rönnberg, Lundberg, and Löwgren 2016; Rönnberg et al. 2016; Rönnberg 2017, 2019). As musical sounds are well adapted, at least on a more general level, to conveying meaning, information, and emotions (see for example discussions in Tsuchiya, Freeman, and Lerner 2006; Rönnberg and Löwgren 2016), musical sonification should be a fruitful approach to multimodal information visualisation. However, despite various research (see examples in Kramer et al. 2010 and in Hermann, Hunt, and Neuhoff 2011) it is not clear which musical elements, or combinations of musical elements, are most suitable to use in sonification.
1.1. Aims and objectives
The aim of the current study is to investigate the benefit of sonification using composed and deliberately designed musical sounds, compared to no sonification, in the context of information visualisation; to evaluate the usefulness (i.e. performance in terms of accuracy and response time) of the sonification; and finally to explore a possible effect of the user's musicality on the benefit of sonification. The musical elements used are: (1) a combination of Timbre and amplitude, (2) Pitch, and (3) Harmony. These sounds are used to interactively sonify intensity levels in visual representations containing gradients, to mimic a visualisation of data.
2. Method
To explore the usefulness of sonification, and of different musical elements in the sonification, an interactive test using musical sonification was devised to investigate: (1) which of three conditions with sonification would be most effective in combination with the visual representations, and (2) to what extent self-assessed musicality would affect the usefulness of the sonification conditions.
2.1. Creation of the visual representations
The visual representations (see examples in Figure 2 and in Figure 3) were designed to mimic cutouts of a complex visualisation of data, such as transparency renderings based on data density (as illustrated in Figure 1), and to challenge the visual perception. Similar representations can be found in a variety of research disciplines, ranging from
static social science data, via medical or climate change
data, to temporal air traffic control data. As the perceptual challenges arise due to shortcomings in the perception of brightness levels (Ware 2013), it can be assumed that similar difficulties will be present in a visualisation with gradient bands. The visual representations were created in Matlab (R2016a) using a sine wave grating. This was done by mixing sinusoids in different frequencies, with an addition of low-level random ripples. A triangle wave was then multiplied with the combined wave form to create a peak level for the highest intensity level. Ten different output wave forms were created in this way by circularly shifting the elements in the array containing the sinusoids, by the randomness of the ripples, as well as by varying the slope and magnitude of the triangle wave. As the parameters were changed within sets of ten wave forms, the difficulty level was balanced within a set of ten images. A total of 90 images were then created. The wave form was scaled to 8-bit integers, and the values of this grey scale intensity map were linearly transformed to pixel values in the green RGB channel and saved as 24-bit RGB images in PNG format, ranging from no intensity (black) to full intensity (pure green).
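A minimal NumPy sketch of this stimulus-generation procedure is given below. The exact sinusoid frequencies, ripple level, and triangle-wave parameters are not reported, so the values here are illustrative assumptions; only the overall pipeline (mix sinusoids, add ripples, multiply by a triangle wave, write the result to the green channel) follows the description above.

```python
import numpy as np
from PIL import Image

width, height = 800, 200
x = np.linspace(0.0, 1.0, width)

# Mix sinusoids of different frequencies (illustrative choices) and
# add low-level random ripples.
wave = sum(np.sin(2 * np.pi * f * x + p) for f, p in [(3, 0.0), (7, 1.3), (11, 2.1)])
wave += 0.05 * np.random.randn(width)

# Multiply with a triangle wave to create a single peak of highest intensity.
peak = np.random.uniform(0.2, 0.8)
triangle = 1.0 - np.abs(x - peak) / np.abs(x - peak).max()
wave = (wave - wave.min()) / (wave.max() - wave.min()) * triangle

# Scale to 8-bit and write the profile into the green RGB channel only,
# so the image ranges from black (0) to pure green (255).
green = np.uint8(np.round(255 * (wave - wave.min()) / (wave.max() - wave.min())))
img = np.zeros((height, width, 3), dtype=np.uint8)
img[:, :, 1] = green  # the 1-D profile is broadcast down each pixel column
Image.fromarray(img).save('stimulus.png')
```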
The green colour channel was chosen over red or blue as the human visual perception is more sensitive to contrasts in green, since green has higher perceived brightness than red or blue of equal power (Smith and Guild 1931; CIE 1932). There are other colour models better adapted to the human visual perception than RGB; however, the use of the RGB colour model is motivated since the visual representations used in the present study are monochromatic and intensity levels, rather than hue or saturation, are mapped to the sonification.
2.2. Design of the sonification
SuperCollider (3.8.0) was used to create the interactive sonification. SuperCollider is an environment and programming language for real-time audio synthesis (McCartney 1996, 2002). In SuperCollider a synth definition was created consisting of seven triangle waves (see Figure 2), somewhat detuned around the fundamental frequency (−6, −4, −2, +2, +4, and +6 cents). The sonification was then built up by eleven tones, creating a C-major chord (ranging from C2 to C8, i.e. 65.41 Hz to 4186.01 Hz). This chord was mixed with pink noise at a low sound level to create a rich harmonic content (i.e. the timbre of the sound), yet with a pleasant harmonic content (similar to the musical sound used in Rönnberg 2019). A demonstration can be found here: https://vimeo.com/261447212
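Although the study implemented the synth in SuperCollider, the detuning itself is a simple ratio calculation: a detune of c cents corresponds to a frequency ratio of 2^(c/1200). The sketch below (Python, for illustration) computes the component frequencies of one such tone, assuming the seventh triangle wave sits undetuned at the fundamental:

```python
# A detune of c cents corresponds to a frequency ratio of 2 ** (c / 1200).
def cents_to_ratio(cents: float) -> float:
    return 2.0 ** (cents / 1200.0)

# Seven triangle-wave components per tone: six detuned copies as listed in
# the text, plus (assumed here) one undetuned component at the fundamental.
detunes = [-6, -4, -2, 0, +2, +4, +6]  # cents

fundamental = 65.41  # Hz; C2, the lowest tone of the C-major chord
components = [fundamental * cents_to_ratio(c) for c in detunes]
print([round(f, 2) for f in components])
# -> frequencies spread slightly around 65.41 Hz, giving a rich, chorused timbre
```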
2.3. Mapping between musical and visual elements
The mapping between musical and visual elements was designed to provide three different conditions with sonification (Timbre, Pitch, and Harmony; see Figure 2 and Table 1), as well as a condition with no sonification. The values of the grey scale map created in Matlab were transformed to different parameters in the interactive sonification.

Figure 2. The structure of the experimental setup. The sound consisted of triangle waves and noise mixed together. Pitch, Harmony, and Timbre were adjusted according to the visual representation before the sound was output to the participant.

Figure 3. One example of the visual representations used in the test setup, showing the complexity of the grating. The grey scale intensity map was linearly transformed to the green RGB colour channel.
The first sonification setting changed the cutoff frequency of a band-pass filter and the amplitude of the sound (hereafter referred to as Timbre). A soft or dull timbre is experienced as more negative compared to a brighter timbre (Juslin and Laukka 2004). A more complex timbre is more captivating, with a greater (emotional) response as a result, compared to a simpler timbre, and a louder sound is more activating and engaging compared to a less loud sound (Iakovides et al. 2004); perception of loudness is also mapped to brightness via amplitude (Pridmore 1992). In this condition, the sound passed through a second order band-pass filter. The cutoff frequency was mapped via a linear to exponential conversion, where the lowest intensity level generated a cutoff frequency of 100 Hz while the highest intensity levels yielded a cutoff frequency of 6000 Hz. The mapping between intensity levels in the visual representation and the sonification was done linearly to exponentially, and consequently the sonification provided a higher level of information where the participant needed it the most to be able to provide an accurate answer. The choice of linear to exponential mapping is motivated by the fact that the human perception of amplitude, as well as of frequency, is nonlinear (Everest and Pohlmann 2015). After the band-pass filter, the sound was mapped via a linear to exponential conversion, where the amplitude level was almost completely attenuated for the lowest intensity level, while there was no attenuation for the highest intensity levels. Both these musical elements, frequency content of the overtones and amplitude, should provide potential sonic cues to help solve the task in the test setup.
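A sketch of this linear-to-exponential mapping (mirroring the behaviour of SuperCollider-style linexp mapping, shown here in Python for illustration), assuming 8-bit intensity values; the amplitude floor of 0.001 is an illustrative assumption, since the exact attenuation is not reported:

```python
def lin_to_exp(value, in_lo, in_hi, out_lo, out_hi):
    """Map value linearly from [in_lo, in_hi] onto an exponentially
    spaced [out_lo, out_hi]; both output bounds must be positive."""
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo * (out_hi / out_lo) ** t

intensity = 200  # example 8-bit intensity under the marked pixel column

# Band-pass cutoff: 100 Hz at the darkest level, 6000 Hz at the brightest.
cutoff = lin_to_exp(intensity, 0, 255, 100.0, 6000.0)

# Amplitude: almost fully attenuated at the darkest level, no attenuation
# at the brightest (the 0.001 floor is an assumed, illustrative value).
amp = lin_to_exp(intensity, 0, 255, 0.001, 1.0)
print(round(cutoff, 1), round(amp, 3))
```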
In the second sonification condition, the pitch of the sonification was mapped to the intensity level (hereafter referred to as Pitch). An ascending pitch is generally perceived as more positive while a descending pitch is perceived as more negative (Juslin and Laukka 2004), which might correspond to the perception of brighter and darker areas in the visual representation (see for example discussions in Bresin 2005; Palmer, Langlois, and Schloss 2016; Best 2017). Furthermore, higher pitched tones are associated with lighter, brighter colours (Marks 1987; Collier and Hubbard 2004; Ward, Huckstep, and Tsakanikos 2006). The mapping between the intensity level in the visual representation and the pitch of the sonification was done linearly to exponentially, for the same reason as for the Timbre condition. At the darkest region in the visual representation, the pitch of the sonification was two octaves below that of the area with the highest intensity level. Consequently, this sonification condition should also be able to provide useful sonic cues for the test task.
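Two octaves correspond to a frequency ratio of 4, so the darkest region plays at one quarter of the nominal frequency. A short, self-contained sketch of one plausible realisation of this mapping (the base frequency is an illustrative assumption):

```python
def lin_to_exp(value, in_lo, in_hi, out_lo, out_hi):
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo * (out_hi / out_lo) ** t

# Darkest column -> ratio 0.25 (two octaves down); brightest -> ratio 1.0.
intensity = 128                     # example 8-bit intensity value
ratio = lin_to_exp(intensity, 0, 255, 0.25, 1.0)

base_freq = 261.63                  # Hz; illustrative nominal pitch (C4)
print(round(base_freq * ratio, 2))  # tone frequency for this column
```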
The third sonification condition used dissonance of the harmonic content of each tone in the sonification (hereafter referred to as Harmony). A more complex harmonic sound is more captivating for a listener compared to a simpler harmonic sound (Iakovides et al. 2004), and dissonant chords are experienced as more unpleasant compared to harmonious major or minor chords (Pallesen et al. 2005). In this sonification condition, the triangle waves creating each tone varied from seven tones in unison (perfect pitch) in the area with the highest intensity level, to almost a halftone below and above the fundamental tone (−96, −64, −32, 0, +32, +64, +96 cents) in the lowest intensity area. For the Harmony condition the mapping was done linearly. As the harmonic components are further apart in frequency in relation to the fundamental frequency (as is the case in the darker areas in the visual representation), the interference between frequencies creates a beating (Winckel 1967). The beat frequency is equal to the difference in frequency of the notes that interfere (Roberts 2016). Perception of the two tones ranges from pleasant beating (when there is a small frequency difference) to roughness (when the difference grows larger) and eventually separation into two tones (when the frequency difference increases even more) (Sethares 2005). As a consequence, the beating decreases in tempo as the harmonic components come closer in frequency, and at the brightest vertical pixel column the beating stops and all harmonic components lock to the fundamental frequency. The physical behaviour of the frequencies involved creates a clear sonification cue that makes even small differences in harmonic content rather easily detectable. Consequently, the harmonic complexity of the sonification should provide sonic cues for the participants to solve the tasks in the test.
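The beating described above can be computed directly: a component detuned by c cents from a fundamental f0 beats against it at roughly f0·(2^(c/1200) − 1) Hz. A small sketch under the stated mapping (linear from ±96 cents at the darkest columns to unison at the brightest; the example fundamental is an assumption):

```python
def cents_to_ratio(cents: float) -> float:
    return 2.0 ** (cents / 1200.0)

def detune_spread(intensity: int, max_cents: float = 96.0) -> float:
    # Linear mapping: darkest (0) -> +/-96 cents, brightest (255) -> unison.
    return max_cents * (1.0 - intensity / 255.0)

f0 = 261.63                       # Hz; illustrative fundamental (C4)
spread = detune_spread(40)        # a fairly dark pixel column
f_detuned = f0 * cents_to_ratio(spread)
beat = f_detuned - f0             # beat frequency = difference in frequency
print(round(spread, 1), 'cents ->', round(beat, 2), 'Hz beating')
# At intensity 255 the spread is 0 cents, the beating stops, and all
# components lock to the fundamental frequency.
```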
2.4. Participants
For the present study, 25 students at Linköping University (14 female and 11 male), with a median age of 22 (range 18–31), with normal, or corrected to normal, vision and self-reported normal hearing were recruited. No compensation for participating in the study was provided.
Table 1. The mapping between the sonification and the visual representation for the three sonification conditions.

Sonification condition   Low brightness          High brightness
Timbre                   Attenuated, more bass   Louder, more treble
Pitch                    Low pitched tone        High pitched tone
Harmony                  Much dissonance         Perfect harmony
2.5. Experimental design and procedure
An interactive test was devised to explore a possible benefit of sonification (see Figure 2), and the effects of the different sonification conditions. The test session took 20 min at the most, and was initiated with learning trials for familiarisation and to reduce learning effects. The learning trials covered all four sonification conditions. After the training, the test was divided into four parts according to the four sonification conditions, with 20 visual representations in each, and a short break after each part where the participants answered a questionnaire about the particular sonification condition. The order of sonification conditions was balanced between subjects to avoid order effects.
The participants moved a slider, by using the computer mouse, to mark the vertical pixel column with the highest brightness level in the visual representation, and the sonification was adjusted according to the intensity level for that pixel column (see Figure 3). The participants were asked to answer as quickly as possible. After marking a vertical pixel column, the participants pressed the large button beneath the slider and the next trial was automatically initiated. The accuracy for each trial was measured as the absolute difference between the highest intensity level in the visual representation and the participant's marked brightness level. Hence, a lower measure was equal to higher accuracy. The response time was also recorded. For the statistical analyses, the overall accuracy was calculated as the mean error for the 20 answers in each sonification condition, and the response time was the mean response time for each sonification condition. Accordingly, the experiment yielded both objective measures of sonification (accuracy and response time) and subjective measures from a questionnaire.
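In code, the per-condition measures reduce to a few lines. A sketch with hypothetical trial data (the true peak intensity was 255 in every image, as noted in Section 4.6):

```python
import statistics

# Hypothetical data for one participant in one condition (20 trials).
marked_intensities = [250, 255, 240, 255, 252] * 4  # intensity at the marked column
response_times = [9.8, 11.2, 10.5, 12.0, 9.1] * 4   # seconds per trial

# Per-trial error: absolute difference between the highest intensity in the
# image (always 255) and the intensity at the marked column; lower = better.
errors = [abs(255 - m) for m in marked_intensities]

mean_error = statistics.mean(errors)        # overall accuracy for the condition
mean_rt = statistics.mean(response_times)   # mean response time for the condition
print(mean_error, round(mean_rt, 2))
```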
SuperCollider was used on a MacBook Pro, presenting visual stimuli on a 21″ computer screen and auditory stimuli via a Universal Audio Apollo Twin sound interface through a pair of Beyerdynamic DT-770 Pro headphones. The headphones provided an auditory stimulation of approximately 65 dB SPL. A quiet office was used for the test, and even if there were some ambient sounds, the test environment was deemed quiet enough not to affect the tests conducted.
2.6. Questionnaire
A questionnaire was used to record subjective data to complement the objective measures. In the beginning of the test the participants were asked to rate their musicality on a 5-point Likert scale from 1 ('Not very extensive') to 5 ('Very extensive'). After each sonification condition (No sonification, Timbre, Pitch, Harmony) the participant answered questions about the difficulty level they experienced in finding the brightest vertical pixel column, and whether they experienced a benefit of sonification (if sonification was used) in terms of accuracy and response time. Answers in the questionnaire were given on a scale ranging from 1 ('Strongly disagree') to 5 ('Strongly agree'). Finally, after a total of 90 trials, the participants answered questions regarding whether they experienced an overall benefit of the sonification or not.
3. Results
The participants were divided into two groups: Low musicality (n = 12) for ratings from 1 to 3, and High musicality (n = 13) for ratings 4 and 5. According to Kolmogorov–Smirnov tests the data were not normally distributed, thus non-parametric tests were used to analyse the data. Bonferroni correction for multiple comparisons was applied as appropriate. Descriptive statistics can be found in Table 2.
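The analysis pipeline (a Friedman test across the four repeated-measures conditions, and Mann–Whitney U tests between groups) can be reproduced with SciPy. A sketch on synthetic stand-in data, since the raw per-participant data are not published:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for per-participant mean errors (n = 25) per condition.
no_sonif = rng.normal(13.0, 5.0, 25)
timbre = rng.normal(3.0, 2.0, 25)
pitch = rng.normal(2.5, 1.5, 25)
harmony = rng.normal(2.0, 1.5, 25)

# Friedman test for differences across the four repeated-measures conditions.
chi2, p = stats.friedmanchisquare(no_sonif, timbre, pitch, harmony)

# Mann-Whitney U test between groups (here: an arbitrary 12/13 split standing
# in for the Low vs. High musicality groups).
u, p_u = stats.mannwhitneyu(timbre[:12], timbre[12:])
print(f'Friedman: chi2={chi2:.2f}, p={p:.4f}; Mann-Whitney: U={u:.1f}, p={p_u:.4f}')
```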
Accuracy was measured in terms of the mean errors made. A Friedman test showed a significant difference in mean errors between the four conditions (χ²(3) = 38.57, p < 0.001). Dunn–Bonferroni post-hoc tests showed only significant differences between No sonification and the three conditions with sonification; Timbre (p = 0.001), Pitch (p < 0.001), and Harmony (p < 0.001), where there were fewer errors when sonification was used. However, no statistically significant differences were found between the three conditions with sonification. Mann–Whitney U tests showed significantly fewer errors for the High musicality group for Timbre (U = 41.5, p = 0.046), for Pitch (U = 23.5, p = 0.002), and for Harmony (U = 20.0, p = 0.001), but there was no significant difference between the groups for the No sonification condition.
For response time, a Friedman test showed significant differences between the four conditions (χ²(3) = 42.81, p < 0.001). Dunn–Bonferroni post-hoc tests showed only significant differences between No sonification and the three conditions with sonification; Timbre (p < 0.001), Pitch (p < 0.001), and Harmony (p < 0.001), where response times were longer when sonification was used. There were no statistically significant differences between the three sonification settings. Mann–Whitney U tests showed significantly longer response times for the High musicality group in the condition with No sonification (U = 24.0, p = 0.002), but there was no significant difference between the groups for the conditions with sonification.

Table 2. Descriptive statistics with mean errors and response time measurements (in seconds) for Low musicality and High musicality. Standard deviation in parentheses.

Mean error        No sonification   Timbre       Pitch        Harmony
Low musicality    13.9 (5.4)        4.0 (4.1)    3.5 (2.0)    3.6 (2.2)
High musicality   11.4 (9.4)        2.2 (1.2)    1.3 (0.8)    0.7 (0.8)

Response time     No sonification   Timbre       Pitch        Harmony
Low musicality    4.7 (1.6)         10.4 (4.0)   11.9 (4.5)   11.7 (4.6)
High musicality   8.6 (3.2)         14.9 (7.0)   16.9 (9.1)   15.9 (7.3)
The subjective measures from the questionnaire showed that the participants generally experienced sonification as helpful (see Figure 6). The median ranking (1 = 'Very hard' to 5 = 'Very easy') for difficulty in No sonification was 2 (range: 1–4), and in Sonification it was 4 (range: 3–5). These results suggest that the task was experienced as easier with sonification than without. The experienced difficulty for No sonification as well as for Sonification was similar for both groups (Low musicality and High musicality). The experienced help from sonification for improving accuracy was also measured (1 = 'No help at all' to 5 = 'Much help'). The median rating was 5 (range: 4–5), which suggests that the participants experienced a benefit of sonification. The experienced benefit of sonification was high and similar for both groups (Low musicality and High musicality).
Finally, the experienced benefit of sonification for giving a faster response was measured (1 = 'Much slower' to 5 = 'Much faster'), where the median rating was 4 (range: 1–5), suggesting that most participants experienced that sonification supported them in giving faster responses. There were some differences between the groups, which suggest that participants in the Low musicality group generally perceived the sonification to support them in giving faster responses, while the High musicality group had a more diverse impression.

Figure 6. The subjective measures indicate that sonification made the task easier and was experienced as helpful, but overall not as helpful for giving faster responses.
4. Discussion
4.1. Accuracy
The results found in the present study suggest that sonification can improve perception of colour brightness (see Table 2 and Figure 4). The additional information introduced by the sonification made it possible for the participants to improve their accuracy when the information in the visual modality was insufficient for giving an answer with high accuracy. Consequently, the sonification supported the visual perception in the task. This interpretation of the results was also supported by subjective measurements. Results from the questionnaire suggested that the difficulty level of the task was reduced and that the participants experienced the sonification as very helpful in improving the accuracy of their answers.

Figure 4. Generally, the mean error decreased when sonification was used. The High musicality group made fewer errors compared to the Low musicality group.
4.2. Response time
Response time was found to be longer when sonification was used, compared to the No sonification condition (see Table 2 and Figure 5). This indicates that the participants used the extra information provided by the sonification to refine their selection in the test, reaching a higher accuracy, and that this procedure took a longer time. Interestingly, when considering the subjective ratings, many of the participants stated that they experienced sonification to improve their response time as well as their accuracy, which was not the case according to the measured response times. It might be hypothesised that some participants believed that they performed the task faster, as they might have experienced the comparison between areas in the visual representation as easier when sonification was used. Furthermore, as the amount of information was increased when sonification was used, the task became more demanding from a perceptual perspective, which in turn made the participants more deeply engaged in the task. In general, when someone is more engaged in a task, time is perceived to pass more quickly (Conti 2001; Chaston and Kingstone 2004; Sackett et al. 2010). This well-known phenomenon may be an explanation for the discrepancy between subjective experience and objective measures with regard to response times. The results show that sonification is useful in terms of higher accuracy, but this comes at the price of longer response times. It could be argued that sonification as used in the present study is useful in situations where accuracy is more important than response time: for example, when a researcher is exploring an interactive multimodal visualisation to find relationships in the data and gain new insights into a research question, or when sonification is used to clarify a user interaction in an educational situation.

Figure 5. The mean response time was longer when sonification was used, and the High musicality group had longer response times compared to the Low musicality group.
4.3. The participants' musicality

Even if the groups were small, musicality had effects on the results. The High musicality group had statistically significantly higher accuracy in the conditions with sonification compared to the Low musicality group (see Table 2, Figures 4 and 5). The results suggest that the High musicality group used their experience and knowledge of musical sounds to reach a higher accuracy. Interestingly, the High musicality group had statistically significantly longer response times in the condition with No sonification; further work needs to be done to explore whether this result is repeatable and what the causes might be.
4.4. Sonification condition

There were no statistically significant differences between the conditions with sonification (Timbre, Pitch, or Harmony). This indicates that regardless of the specific musical element used in the sonification, and regardless of the participants' musicality, accuracy increased when sonification was used. These results are promising, as proficiency in music theory should not be required to hear the differences in a sonification. However, when studying means and confidence intervals for accuracy, there might be a discernible trend for the High musicality group, where Harmony had higher accuracy compared to Pitch, which in turn had higher accuracy compared to Timbre (see Table 2 and Figure 4). A similar trend was not present for the Low musicality group, where accuracy was more or less equal for all conditions with sonification. This might suggest that for participants with high musicality, who knew what musical cues to listen for, differences in Harmony and in Pitch provided stronger cues than Timbre. If this is the case, the use of such musical elements benefits participants with higher musicality more, compared to participants with lower musicality.
4.5. The visual representation and experimental task
The visual representations used in the experimental setup within the present study contained visual elements, i.e. differences in intensity levels, that challenged the perception of brightness levels. The visual representations used could therefore be seen as selections from a larger, more complex and real visual representation used for data exploration, where misconceptions due to shortcomings in the visual perception could be a real and relevant drawback in interpretation of the data. Consequently, the results found in, and the knowledge gained within, the present study should be generalisable to other visual representations as well.
Finding the highest brightness level in a data set might be solved using a mathematical operation. However, this presupposes that the user already knows what he or she is looking for in the data. The task used in the present study can therefore be considered a good simplification that enabled the examination of musical sonification and visual challenges in a controlled setting.
4.6. A possible learning effect

The highest intensity level was the same in all visual representations used in the test setup (i.e. 255 in the green RGB channel). Finding the brightest vertical pixel column could consequently have been facilitated by memorising the Timbre, Pitch, or Harmony from the previous trial and comparing it with the sonification in the current trial. The echoic memory is the sensory memory for sounds that have just been perceived (Carlson et al. 2009), and it is capable of storing auditory information for a short period of time. The stored sound resonates in the mind and is replayed for 3–4 s after the presentation of auditory stimuli (Radvansky 2005). The echoic memory encodes moderately primitive aspects of the sound, such as pitch (Strous, Cowan, and Ritter 1995). Thus, the echoic memory could help in finding the brightest vertical pixel column, as this position would sound 'right' in the participant's mind. This reasoning suggests that the learning effect, if present, might have made the sonification provide additional information, as well as making a comparison with the sound kept in memory possible. This can be seen as something useful, as it suggests that performance can increase as users learn how to use the sonification.
5. Conclusion
The present study evaluated the usefulness of sonification as a complement to visual representations. The results show that there was a benefit of sonification, in terms of increased accuracy, in selecting the vertical pixel column with the highest colour brightness in the visual representations. This suggests that sonification facilitated perception of colour brightness, and helped users overcome challenges for the visual perception in the visual representations. This result was also supported by the subjective measurements, where an experienced benefit of sonification was reported. However, the use and processing of the additional information provided by the sonification took time, leading to a longer response time when sonification was used compared to the No sonification condition. This suggests that there is a speed/accuracy trade-off, where the usefulness might decrease in situations where fast response times are of the essence. Finally, there was an effect of musicality in the statistical analysis, where participants with higher musicality had higher accuracy in the test conditions with sonification.
6. Future work
For future work, further musical elements such as tempo and rhythm would be interesting to explore. Also, the combination of musical elements, such as amplitude and pitch, or harmony and timbre, could be deployed to investigate whether the combination could provide even stronger sonic cues, and whether it is possible to provide different sonic cues simultaneously by using different musical elements. The application of musical sonification, using a music-theoretical approach, could also be evaluated in relation to the more classical form of purely data-driven sonification. These questions should be evaluated in relation to standardised tests of the participants' musicality and music perception skills (Law and Zentner 2001), to further explore to what extent an individual's musicality affects the perception of sonification.

Furthermore, it would be intriguing to evaluate sonification support for a wider range of visual representations and the use of real data, particularly with domain experts, and in relation to (for example) the Visual Information Seeking Mantra (Shneiderman 1996). Sonification could be studied as a way of creating an overview of an entire collection of data, or as a way to support the examination of relationships among data items. The information visualisation mantra provides a scaffold for further studies of the usefulness of sonification in visual data exploration and information seeking. Data for further studies could, for example, be obtained from bio-sensors used in the medical sciences, time cycles and activities in the social sciences, or climate change data. The use of real data and different visualisation techniques would indicate which musical elements in the sonification are most suitable to use interactively in combination with which type of visualisation technique. These future inquiries would generate an understanding of the implications of the sonification research, as well as suggest areas where sonification would be useful as an additional tool in visual data exploration.
Disclosure statement
No potential conflict of interest was reported by the author.
ORCID
Niklas Rönnberg http://orcid.org/0000-0002-1334-0624
References
Artero, A. O., M. C. F. de Oliveira, and H. Levkowitz. 2004. "Uncovering Clusters in Crowded Parallel Coordinates Visualizations." In Proc. IEEE Symposium on Information Visualization (INFOVIS '04), 81–88. Washington, DC: IEEE Computer Society. doi:10.1109/INFOVIS.2004.68.
Best, J. 2017. Colour Design: Theories and Applications. 2nd ed. Duxford: Elsevier Ltd., Woodhead Publishing.
Bresin, R. 2005. "What is the Color of that Music Performance?" In Proc. International Computer Music Conference (ICMC) 2005, 367–370. San Francisco, CA: International Computer Music Association.
Carlson, N. R., D. Heth, H. Miller, J. Donahoe, and G. N. Martin. 2009. Psychology: The Science of Behavior. Harlow: Pearson.
Chaston, A., and A. Kingstone. 2004. "Time Estimation: The Effect of Cortically Mediated Attention." Brain and Cognition 55: 286–289.
CIE. 1932. Commission Internationale de l'Eclairage Proceedings, 1931. Cambridge: Cambridge University Press.
Collier, W. G., and T. L. Hubbard. 2004. "Musical Scales and Brightness Evaluations: Effects of Pitch, Direction, and Scale Mode." Musicae Scientiae 8: 151–173.
Conti, R. 2001. "Time Flies: Investigating the Connection Between Intrinsic Motivation and the Experience of Time." Journal of Personality 69: 1–26.
Deliège, I., and J. Sloboda. 1997. Perception and Cognition of Music. Hove: Psychology Press Ltd.
Ellis, G., and A. Dix. 2007. "A Taxonomy of Clutter Reduction for Information Visualisation." IEEE Transactions on Visualization and Computer Graphics 13: 1216–1223.
Everest, F. A., and K. C. Pohlmann. 2015. Master Handbook of Acoustics. 6th ed. New York, NY: McGraw-Hill Education LLC.
Flowers, J. H., D. C. Buhman, and K. D. Turnage. 1997. "Cross-Modal Equivalence of Visual and Auditory Scatterplots for Exploring Bivariate Data Samples." Human Factors 39: 341–351.
Flowers, J. H., D. C. Buhman, and K. D. Turnage. 2005. "Data Sonification From the Desktop: Should Sound Be Part of Standard Data Analysis Software?" ACM Transactions on Applied Perception 2: 467–472.
Franinovic, K., and S. Serafin. 2013. Sonic Interaction Design. Cambridge, MA: MIT Press.
Hermann, T., A. Hunt, and J. G. Neuhoff. 2011. The Sonification Handbook. 1st ed. Berlin: Logos Publishing House.
Iakovides, S. A., V. T. H. Iliadou, V. T. H. Bizeli, S. G. Kaprinis, K. N. Fountoulakis, and G. S. Kaprinis. 2004. "Psychophysiology and Psychoacoustics of Music: Perception of Complex Sound in Normal Subjects and Psychiatric Patients." Annals of General Hospital Psychiatry 3: 1–4.
Juslin, P. N., and P. Laukka. 2004. "Expression, Perception, and Induction of Musical Emotions: A Review and a Questionnaire Study of Everyday Listening." Journal of New Music Research 33: 217–238.
Kasakevich, M., P. Boulanger, W. F. Bischof, and M. Garcia. 2007. "Augmentation of Visualisation Using Sonification: A Case Study in Computational Fluid Dynamics." In Proc. IPT-EGVE Symposium, 89–94. The Eurographics Association.
Kramer, G., B. Walker, T. Bonebright, P. Cook, J. H. Flowers, N. Miner, and J. Neuhoff. 2010. Sonification Report: Status of the Field and Research Agenda. Vol. 444, 1–29. Faculty Publications, Department of Psychology.
Law, L. N. C., and M. Zentner. 2001. "Assessing Musical Abilities Objectively: Construction and Validation of the Profile of Music Perception Skills." PLoS ONE 7: 1–15.
Levitin, D. J. 2006. This is Your Brain on Music: The Science of a Human Obsession. New York: Dutton/Penguin Books.
Marks, L. E. 1987. "On Cross-modal Similarity: Auditory-visual Interactions in Speeded Discrimination." Journal of Experimental Psychology: Human Perception and Performance 13: 384–394.
McCartney, J. 1996. "SuperCollider: A New Real-Time Synthesis Language." In Proc. International Computer Music Conference (ICMC), 257–258. Hong Kong: Michigan Publishing.
McCartney, J. 2002. "Rethinking the Computer Music Language: SuperCollider." IEEE Computer Graphics & Applications 26: 61–68.
Nesbitt, K. V., and S. Barrass. 2002. "Evaluation of a Multimodal Sonification and Visualisation of Depth of Market Stock Data." In Proc. International Conference on Auditory Display (ICAD), 2–5. International Community on Auditory Display.
Pallesen, K. J., E. Brattico, C. Bailey, A. Korvenoja, J. Koivisto, A. Gjedde, and S. Carlson. 2005. "Emotion Processing of Major, Minor, and Dissonant Chords: A Functional Magnetic Resonance Imaging Study." Annals of the New York Academy of Sciences 1060: 450–453.
Palmer, S. E., T. A. Langlois, and K. B. Schloss. 2016. "Music-to-Color Associations of Single-Line Piano Melodies in Non-synesthetes." Multisensory Research 29: 157–193.
Philipsen, L., and R. S. Kjærgaard. 2018. The Aesthetics of Scientific Data Representation: More Than Pretty Pictures. Routledge Advances in Art and Visual Studies. New York: Routledge.
Pinch, T., and K. Bijsterveld. 2012. The Oxford Handbook of Sound Studies. Oxford: Oxford University Press.
Pridmore, R. W. 1992. "Music and Color: Relations in the Psychophysical Perspective." Color Research & Application 17: 57–61.
Radvansky, G. 2005. Human Memory. Boston: Allyn and Bacon.
Rau, B., F. Frieß, M. Krone, C. Müller, and T. Ertl. 2015. "Enhancing Visualization of Molecular Simulations using Sonification." In Proc. IEEE 1st International Workshop on Virtual and Augmented Reality for Molecular Science (VARMS@IEEEVR 2015), 25–30. Arles: The Eurographics Association.
Riedenklau, E., T. Hermann, and H. Ritter. 2010. "Tangible Active Objects and Interactive Sonification as a Scatter Plot Alternative for the Visually Impaired." In Proc. 16th International Conference on Auditory Display (ICAD 2010), 1–7. International Community for Auditory Display.
Roberts, G. E. 2016. From Music to Mathematics: Exploring the Connections. Baltimore: Johns Hopkins University Press.
Rönnberg, N. 2017. "Sonification Enhances Perception of Color Intensity." In Proc. IEEE VIS Infovis Posters (VIS2017), 1–2. Phoenix, AZ: IEEE VIS.
Rönnberg, N. 2019. "Sonification Supports Perception of Brightness Contrast." Journal on Multimodal User Interfaces, 1–9. doi:10.1007/s12193-019-00311-0.
Rönnberg, N., G. Hallström, T. Erlandsson, and J. Johansson. 2016. "Sonification Support for Information Visualization Dense Data Displays." In Proc. IEEE VIS Infovis Posters (VIS2016), 1–2. Baltimore, MD: IEEE VIS.
Rönnberg, N., and J. Johansson. 2016. "Interactive Sonification for Visual Dense Data Displays." In Proc. 5th Interactive Sonification Workshop (ISON-2016), 63–67. Bielefeld: CITEC, Bielefeld University.
Rönnberg, N., and J. Löwgren. 2016. "The Sound Challenge to Visualization Design Research." In Proc. EmoVis 2016, ACM IUI 2016 Workshop on Emotion and Visualization, Linköping Electronic Conference Proceedings, Vol. 103, 31–34. Linköping, Sweden.
Rönnberg, N., J. Lundberg, and J. Löwgren. 2016. "Sonifying the Periphery: Supporting the Formation of Gestalt in Air Traffic Control." In Proc. 5th Interactive Sonification Workshop (ISON-2016), 23–27. Bielefeld: CITEC, Bielefeld University.
Rosli, M. H. W., and A. Cabrera. 2015. "Gestalt Principles in Multimodal Data Representation." IEEE Computer Graphics & Applications 35: 80–87.
Sackett, A. M., T. Meyvis, L. D. Nelson, B. A. Converse, and A. L. Sackett. 2010. "You're Having Fun When Time Flies: The Hedonic Consequences of Subjective Time Progression." Psychological Science 21: 111–117.
Seashore, C. E. 1967. Psychology of Music. New York: Dover.
Sethares, W. A. 2005. Tuning, Timbre, Spectrum, Scale. 2nd ed. London: Springer.
Shneiderman, B. 1996. "The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations." In Proc. IEEE Symposium on Visual Languages, 336–343. Washington: IEEE Computer Society Press.
Simoff, S., M. Böhlen, and A. Mazeika. 2008. Visual Data Mining: Theory, Techniques and Tools for Visual Analytics. Berlin, New York: Springer.
Smith, T., and J. Guild. 1931. "The C.I.E. Colorimetric Standards and Their Use." Transactions of the Optical Society 33: 73–134.
Strous, R. D., N. Cowan, W. Ritter, and D. C. Javitt. 1995. "Auditory Sensory ('Echoic') Memory Dysfunction in Schizophrenia." The American Journal of Psychiatry 152: 1517–1519.
Tsuchiya, T., J. Freeman, and L. W. Lerner. 2006. "Data-To-Music API: Real-Time Data-Agnostic Sonification with Musical Structure Models." In Proc. 21st International Conference on Auditory Display (ICAD 2015), 244–251. Graz: Georgia Institute of Technology.
Ward, J., B. Huckstep, and E. Tsakanikos. 2006. "Sound-colour Synaesthesia: To what Extent Does it Use Cross-modal Mechanisms Common to Us All?" Cortex 42: 264–280.
Ware, C. 2013. Information Visualization: Perception for Design. 3rd ed. San Francisco: Morgan Kaufmann Publishers Inc.
Winckel, F. 1967. Music, Sound and Sensation: A Modern Exposition. New York: Dover Publications, Inc.
... Multimodality can improve task-specific human performance in various contexts [9,10]. Sighted users' visualization experiences were enhanced with sonification [11]. Scientific visualization, where data variations are highly irregular to differentiate visually, sonification improved user perception [12]. ...
... Research findings demonstrated that visual learning materials augmented with audio feedback enriched learners' experiences [13]. Sonification facilitated visual perception and helped users overcome challenges in visual representations [11]. The human brain is naturally wired to combine different modalities into a unique perception while interacting with the real world [14]. ...
... Research findings demonstrated that sonification-based data representations could engage people emotionally and had the advantage of a deeper and richer understanding of data variations [29]. Users with higher musicality exhibited higher accuracy in interpreting sonified data tables [11]. In addition, popular music can help novice users understand subtle tonal differences produced by variations in auditory parameters. ...
Conference Paper
Full-text available
This research investigated audio-visual analytics of geoscientific data in virtual reality (VR)-enhanced implementation, where users interacted with the dataset with a VR controller and a haptic device. Each interface allowed users to explore rock minerals in unimodal and multimodal virtual environments (VE). In the unimodal version, color variations demonstrated differences in minerals. As users navigated the data using different interfaces, visualization options could be switched between the original geographical topology and its color-coded version, signifying underlying minerals. During the multimodal navigation of the dataset, in addition to the visual feedback, an auditory display was performed by playing a musical tone in different timbres. For example, ten underlying minerals in the sample were explored. Among them, anorthite was represented by nylon guitar, the grand piano was used for albite, and so on. Initial findings showed that users preferred the audio-visual exploration of geoscientific data over the visual-only version. Virtual touch enhanced the user experience while interacting with the data.
... In visual data analysis, interaction is essential for exploring the data [CGM19], and similarly, for sonification to be a useful tool, dynamic human interaction is necessary [HH04]. Sonification for accessibility of visualization for visually impaired users is an exciting approach and some of these findings could suggest interesting approaches also for sighted users in supporting visual perception with sonification [Rön19a,Rön19b], or reducing cognitive load [ZPR16] on the visual modality by sonification (see for example [MLS95]). ...
... However, this increase in accuracy was reported to be related to increased task completion times (see, for example, in [RJ16]). Similar effects have also been shown in other studies not included in this systematic review [MBKSM16,Rön19a,Rön19b]. The case might be that the use of the additional information provided by sonification that makes the increased accuracy possible takes a longer time to process and assess. ...
Article
Full-text available
The research communities studying visualization and sonification for data display and analysis share exceptionally similar goals, essentially making data of any kind interpretable to humans. One community does so by using visual representations of data, and the other community employs auditory (non‐speech) representations of data. While the two communities have a lot in common, they developed mostly in parallel over the course of the last few decades. With this STAR, we discuss a collection of work that bridges the borders of the two communities, hence a collection of work that aims to integrate the two techniques into one form of audiovisual display, which we argue to be “more than the sum of the two.” We introduce and motivate a classification system applicable to such audiovisual displays and categorize a corpus of 57 academic publications that appeared between 2011 and 2023 in categories such as reading level, dataset type, or evaluation system, to mention a few. The corpus also enables a meta‐analysis of the field, including regularly occurring design patterns such as type of visualization and sonification techniques, or the use of visual and auditory channels, showing an overall diverse field with different designs. An analysis of a co‐author network of the field shows individual teams without many interconnections. The body of work covered in this STAR also relates to three adjacent topics: audiovisual monitoring, accessibility, and audiovisual data art. These three topics are discussed individually in addition to the systematically conducted part of this research. The findings of this report may be used by researchers from both fields to understand the potentials and challenges of such integrated designs while hopefully inspiring them to collaborate with experts from the respective other field.
... Sonification can be used for data exploration and there are a number of studies that evaluate auditory graphs [30,57,60,79]. It has also been demonstrated that sonification can support visual perception [1,70,71], and various auditory channels can be successfully linked and related to visual channels [17,21,27,57,86]. Sounds, in sonification, can convey a multitude of information to listeners quickly [83], without adding visual clutter [13]. ...
Article
Full-text available
One of the commonly used visualization techniques for multivariate data is the parallel coordinates plot. It provides users with a visual overview of multivariate data and the possibility to interactively explore it. While pattern recognition is a strength of the human visual system, it is also a strength of the auditory system. Inspired by the integration of the visual and auditory perception in everyday life, we introduce an audio-visual analytics design named Parallel Chords combining both visual and auditory displays. Parallel Chords lets users explore multivariate data using both visualization and sonification through the interaction with the axes of a parallel coordinates plot. To illustrate the potential of the design, we present (1) prototypical data patterns where the sonification helps with the identification of correlations, clusters, and outliers, (2) a usage scenario showing the sonification of data from non-adjacent axes, and (3) a controlled experiment on the sensitivity thresholds of participants when distinguishing the strength of correlations. During this controlled experiment, 35 participants used three different display types, the visualization, the sonification, and the combination of these, to identify the strongest out of three correlations. The results show that all three display types enabled the participants to identify the strongest correlation — with visualization resulting in the best sensitivity. The sonification resulted in sensitivities that were independent from the type of displayed correlation, and the combination resulted in increased enjoyability during usage. Supplementary Information The online version contains supplementary material available at 10.1007/s00779-024-01795-8.
... Niklas Rönnberg is an associate professor in sound technology at the division for Media and Information Technology, Department for Science and Technology, Linköping University. His research interests is in the connection between sonification and visualization [24,25,28], as well as in sonification as a mean for communication in public spaces [29], and sonification for conveying emotion [26]. Furthermore, he teaches media technology courses with focus on sound and sound technology. ...
... People are found to more easily process visual information that is supported with sonification as it reduces visual clutter and is supportive in conveying information and meaning to the user Anxiety. The feeling of the 'response pressure', where the person is aware that their activity (such as 'seen' or 'delivered' like in Messenger or the double blue tick in Whatsapp) (as shown in Fig. 3) or their status (online/offline) can bring them to feel that they are required to answer and react immediately [50,65] Communication design ...
Chapter
Full-text available
Dating applications and dating sites are designed interventions that can change behaviour and influence user wellbeing. However, research from the design perspective around relation-making interventions is still scarce. This paper presents findings of a scoping review that aimed to collect current published knowledge on the influence of online communication on user behaviour, to understand its implications for relation-making. The study gathered findings from across disciplines to provide a holistic understanding of the various influences that online environment and interactions can have on user behaviour. Keyword combinations were run through five databases with a priori criteria and produced 1651 results published from the date range of 2016 to 2020. From the results, 717 abstracts were screened, and 82 papers were selected for full screening, out of which 46 were included for thematic analysis. The findings of the review show how interaction design and the online environment can influence user behaviour and thus impact how users form relationships. This scoping review is an initial study to provide an overview in a currently under-researched area. Its contribution is in presenting the needs and opportunities for future research and summarises the practical implications for interaction design that nurtures relationships.
Chapter
Multisensory visualization incorporating sight, sound, and touch can substantially enhance user interest and perception compared to unimodal vision-only applications. In a music-enhanced heatmap, color-coded rectangular bars become audibly distinct as they are assigned auditory parameters (i.e., pitch, tempo, and so on) depending on the data range. While navigating with a haptic device, music-enhanced bars in a heatmap would respond with varying audio feedback. Bars can be further assigned tangible properties (i.e., friction or stiffness) depending on variable values. This paper investigates the efficacy of immersive multimodal visualization that considers enhanced user experience. Research findings showed that a multimodal approach is more effective in improving overall user experience and engagement than traditional unimodal vision-based experience. If enhanced with virtual reality (VR), multi-sensory visualization can provide an immersive experience as users interact with and explore large datasets in a life-size environment. In addition, multimodal strategies can create a diverse, accessible, and inclusive environment.
Article
In recent years, there has been a growing trend towards taking advantage of audio--visual representations. Previous research has aimed at improving users’ performance and engagement with these representations. The attainment of these benefits primarily depends on the effectiveness of audio--visual relationships used to represent the data. However, the visualization field yet lacks an empirical study that guides the effective relationships. Given the compatibility effect between visual and auditory channels, this research presents the effectiveness of four audio channels (timbre, pitch, loudness, and tempo) with six visual channels (spatial position, color, position, length, angle, and area). In six experiments, one per visual channel, we observed how each audio channel, when used with a visual channel, impacted users’ ability to perform the differentiation or similarity task accurately. Each experiment provided the ranking of audio channels along a visual channel. Central to our experiments was the evaluation at two stages, and accordingly, we identified the effectiveness. Our results showed that timbre, with spatial position and color, aided in more accurate target identification than the three other audio channels. With position and length, pitch allowed a more accurate judgment of the magnitude of data than loudness and tempo but was less accurate than the other two channels along angle and area. Overall, our experiments showed that the choice of representation methods and tasks had impacted the effectiveness of audio channels.Graphical abstract
Article
Full-text available
In complex visual representations, there are several possible challenges for visual perception that might be eased by adding sound as a second modality (i.e. sonification). It was hypothesized that sonification would support visual perception when facing challenges such as simultaneous brightness contrast or the Mach band phenomenon. This hypothesis was investigated with an interactive sonification test, yielding objective measures (accuracy and response time) as well as subjective measures of sonification benefit. In the test, the participant's task was to mark the vertical pixel line having the highest intensity level. This was done in one condition without sonification and in three conditions where the intensity level was mapped to different musical elements. The results showed that there was a benefit of sonification, with higher accuracy when sonification was used compared to no sonification. This result was also supported by the subjective measurement. The results also showed longer response times when sonification was used, suggesting that using and processing the additional information took more time but also yielded higher accuracy. There were no differences between the three sonification conditions.
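One way to picture the intensity-to-music mapping the abstract mentions is the hedged Python sketch below, in which the brightness of the pixel column under the cursor controls the pitch of a continuous tone. The exponential scaling and frequency range are assumptions of this sketch, not the paper's reported design.

def intensity_to_frequency(intensity, f_low=262.0, f_high=1046.0):
    """Map an 8-bit intensity (0-255) to a frequency between roughly C4 and C6.

    Exponential interpolation keeps equal intensity steps closer to
    perceptually even pitch steps (an assumption made for this sketch).
    """
    t = max(0, min(255, intensity)) / 255.0
    return f_low * (f_high / f_low) ** t

print(intensity_to_frequency(255))  # brightest column -> highest pitch (1046 Hz)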
Conference Paper
Full-text available
This poster presents an interactive sonification experiment designed to evaluate possible benefits of sonification in information visualization. The aim of the present study was to explore the use of composed and deliberately designed musical sounds to enhance the perception of color intensity in visual representations. It was hypothesized that, by using musical sounds for sonification, perception of color intensity would be improved. In this evaluation, sonification was mapped to color intensity in visual representations, and the participants had to identify and mark the highest color intensity, as well as answer a questionnaire about their experience. Both quantitative and qualitative preliminary results suggest a benefit of sonification, and indicate that sonification is useful in data exploration.
Conference Paper
Full-text available
This paper presents an experiment designed to evaluate the possible benefits of sonification in information visualization and to give rise to further research challenges. It is hypothesized that, by using musical sounds for sonification when visualizing complex data, interpretation and comprehension of the visual representation could be improved through interactive sonification. This hypothesis is evaluated by testing sonification in parallel coordinates and scatter plots. The participants had to identify and mark areas of different density in the representations, where the amplitude of the sonification was mapped to the density of the data sets. Both quantitative and qualitative results suggest a benefit of sonification. These results indicate that sonification might be useful for data exploration, and they give rise to new research questions and challenges.
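The density-to-amplitude mapping described in this abstract could look something like the following sketch; the helper name, the amplitude floor, and the linear scaling are all assumptions made for illustration.

# Hedged sketch of density-driven amplitude, as the abstract describes:
# denser regions of the plot sound louder under the cursor.

def density_to_amplitude(point_count, max_count, floor=0.05):
    """Scale a region's point count to an amplitude in [floor, 1.0]."""
    if max_count <= 0:
        return floor                      # near-silent when there is no data
    return floor + (1.0 - floor) * min(point_count, max_count) / max_count

print(density_to_amplitude(25, 100))      # sparse region -> quiet (0.2875)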
Conference Paper
Full-text available
We report a design-led exploration of sonification to provide peripheral awareness in air traffic control centers. Our assumption is that by using musical sounds for sonification of peripheral events, it is possible to create a dynamic soundscape that complements the visual information to support the formation and maintenance of an airspace Gestalt throughout the air traffic controller's interaction. An interactive sonification concept was designed, focusing on one controlled sector of airspace with inbound and outbound aircraft. A formative assessment of the sonification concept suggests that our approach might facilitate the air traffic controller's work by providing complementary auditory information about inbound and outbound aircraft, particularly in situations where the traffic volume is moderate to low.
Chapter
Full-text available
This paper is an introduction to the emotional qualities of sound and music, and we suggest that the visual and the aural modalities should be combined in the design of visualizations involving emotional expressions. We therefore propose that visualization design should incorporate sonic interaction design drawing on musicology, cognitive neuroscience of music, and psychology of music, and identify what we see as key research challenges for such an approach.
Book
The Oxford Handbook of Sound Studies offers new and engaging perspectives on the significance of sound in its material and cultural forms. The book considers sounds and music as experienced in such diverse settings as shop floors, laboratories, clinics, design studios, homes, and clubs, across an impressively broad range of historical periods and national and cultural contexts. Science has traditionally been understood as a visual matter, a study historically undertaken with optical instruments such as slides, graphs, and telescopes. This book questions that notion powerfully by illustrating how sounds have always been a part of human experience, shaping and transforming the world in which we live in ways that often go unnoticed. Sounds and music, the articles argue, are embedded in the fabric of everyday life, art, commerce, and politics in ways that impact our perception of the world. Through a diverse set of case studies, the articles illustrate how sounds, from the sounds of industrialization and automobiles to sounds in underwater music, hip-hop, and nanotechnology, give rise to new forms of listening practice. In addition, the book discusses the rise of new public problems, such as noise pollution, hearing loss, and intellectual property and privacy issues, that stem from the spread and appropriation of new sound- and music-related technologies, analog and digital, in many domains of life.
Book
Given the importance of colour in analysing and influencing the world around us, an understanding of it is a vital tool in any design process. Colour Design provides a comprehensive review of the issues surrounding the use of colour, from the fundamental principles of what colour is to its important applications across a vast range of industries. Part one covers the main principles and theories of colour, focusing on the human visual system and the psychology of colour perception. Part two reviews colour measurement and description, including international standards, approval methods for textiles and lithographic printing, and colour communication issues. Part three then discusses forecasting colour trends and methods for design enhancement, along with the history of colour theory, dyes and pigments, and an overview of dye and print techniques. Finally, part four considers the use of colour across a range of specific applications, from fashion, art, and interiors to food and website design.
Article
The design of auditory formats for data display is presently focused on applications for blind or visually impaired users, specialized displays for use when visual attention must be devoted to other tasks, and some innovative work in revealing properties of complex data that may not be effectively rendered by traditional visual means. With the availability of high-quality and flexible sound production hardware in standard desktop computers, the potential exists for using sound to represent characteristics of typical "small and simple" samples of data in routine data inspection and analysis. Our research has shown that basic properties of simple functions, distribution properties of data samples, and patterns of covariation between two variables can be effectively displayed by simple auditory graphs involving patterns of pitch variation over time. While such developments have implications for specialized applications and populations of users, these displays are easily comprehended by ordinary users with minimal practice. Further software enhancements to encourage exploration of data representation by sound may lead to a variety of useful creative developments in data display technology.
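To make the idea of "patterns of pitch variation over time" concrete, the following self-contained sketch renders a small data series as an auditory graph and writes it to a WAV file using only the Python standard library. Every parameter choice here (note duration, pitch range, sine waveform) is an assumption of this sketch rather than the authors' display design.

# Illustrative auditory graph: each value in a series becomes a short
# tone whose pitch rises with the value, so the data's shape is audible.

import math
import struct
import wave

def auditory_graph(values, path="graph.wav", rate=44100, note_dur=0.2):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0                       # avoid division by zero
    samples = []
    for v in values:
        # Two octaves above 220 Hz, scaled by the value's position in the range.
        freq = 220.0 * 2 ** (2 * (v - lo) / span)
        for n in range(int(rate * note_dur)):
            samples.append(int(32767 * 0.5 *
                                math.sin(2 * math.pi * freq * n / rate)))
    with wave.open(path, "wb") as f:
        f.setnchannels(1)                         # mono
        f.setsampwidth(2)                         # 16-bit samples
        f.setframerate(rate)
        f.writeframes(struct.pack("<%dh" % len(samples), *samples))

auditory_graph([1, 3, 2, 5, 4, 6])  # a rising-and-falling contour becomes audible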