Are Loudspeaker Arrays Musical Instruments?
Gerriet K. Sharma1, Frank Schultz2
1Institute of Electronic Music and Acoustics, University of Music and Performing Arts Graz
2Audio Communication Group, TU Berlin
Email: sharma@iem.at, frank.schultz@tu-berlin.de
Abstract
Compact loudspeaker arrays that use beamforming, such as the IKO (an icosahedron fitted with 20 loudspeakers), were introduced for the composition of electro-acoustic music with intentionally intense room excitation and interaction. Labeling such compact arrays as musical instruments might be more intuitive than doing so for typical multichannel loudspeaker domes, which are often referred to as technical tools within the compositional process.
In this paper we discuss the different artistic and technical approaches and practices that accompany the use of different loudspeaker arrays, given the sparse knowledge about the listening habits and abilities of audiences in current performance environments. From this we derive criteria for the so-called 'shared perceptual space' (SPS), the artistic-scientific field that joins conflicting strategies in order to effectively guide future research efforts.
"Der 'neue Ton' wird erst im letzten bestimmbar, wenn das Problem der Verräumlichung (...) abgehandelt sein wird." ("The 'new tone' will only finally become determinable once the problem of spatialization (...) has been dealt with.") [1, p.865]
I. Introduction
An ever increasing number of loudspeakers, set up as compact or distributed loudspeaker arrays, is nowadays utilized, e.g., for
• 3D audio, such as in medical and engineering research and in (home) entertainment, in order to enhance the auditory perception of (reproduced or synthesized) spatial sound phenomena, often referred to as audio immersion,
• large-scale sound reinforcement for large audiences, to enhance the auditory perception of reproduced sound phenomena,
• audio measurement applications, such as room acoustics,
• sonic and audio-visual arts.
Typically, all loudspeakers within such arrays can be individually and electronically controlled by digital signal processing. Thus, from a purely technical viewpoint, all applications deal with massive multichannel, highly algorithm-dependent sound reproduction. It is a matter of mindset which further roles the specific loudspeaker array and its individual loudspeakers play within the specific application.
Motivation
In this paper we would like to discuss some of the mindsets and approaches of engineers, artists and listeners regarding electro-acoustic music. More precisely, in this contribution we are particularly interested in contemporary computer music that utilizes loudspeaker arrays with the intention of composing media-specific spatial sound phenomena. This is realized, for example, with distributed arrays (such as the Acousmonium), with distributed, audience-surrounding arrays (such as an Ambisonics dome), or with compact arrays (such as the IKO).
A central and recently opened question in thinking about instrumentality, "When (and why) is something a musical instrument, and when (and why) is it not?" [2, p.9], should be taken into thorough consideration by all involved parties for all types of loudspeaker arrays. The technical and artistic aspects of partly answering this question are elaborated in this paper.
State of Research: Musical Instruments
and Musical Instrumentality
While the categorization of musical instruments initially had difficulties embedding the very first sound-producing electronic apparatus (cf. the category electrophone, which was added to the classification in the 1940s [3]), aesthetic strategies and the self-evident utilization of technical machines as musical instruments evolved from [1], a visionary statement on how electronically generated, spatialized music could impact listeners ("(...) ein Klang fast sichtbar", i.e. a sound almost visible, p.865), towards a mature classification [4]. Recently, the question of when and how to define a musical instrument, and the definition of its instrumentality in general and in computer music, was newly discussed in [2, 5]. Both authors gather specific categories and aspects for defining a musical instrument and musical instrumentality as such. While [5] introduces the four categories reproducing, supporting, generating and interaction, derived from literature and artistic research, and then concludes that combining these aspects is meaningful for the practical classification of musical instruments,1 the article [2] discusses seven aspects of musical instrumentality. These are partly utilized within our short discourse in Sec. III.

1 http://microphonesandloudspeakers.com
Problem Statement
In the last decades, computer music artists have been confronted with new technology developed through engineering efforts to utilize more loudspeakers with more individual control and higher spatial resolution, cf. [6, 7]. Within a short period of time, artists started to use these systems, such as Ambisonics, Wave Field Synthesis (WFS), Vector Base Amplitude Panning (VBAP) and Distance-Based Amplitude Panning (DBAP), for their own artistic intentions, which were not always matching these engineering concepts [8–10]. Moreover, they often encountered difficulties in articulating and verbalizing their artistic ideas with the provided (often not customized and rather technical) tools [11–14].
Furthermore, contemporary computer music has dealt with the question of performativity and liveness all along [15–18]. This is strongly linked to the question of how exactly to define a musical instrument within the (technically speaking) signal generating and processing chain that eventually emits 'sound', such that composed space evolves in an environment.
Article Structure
The paper is organized as follows: In Section II the common usage of loudspeaker arrays from engineering and artistic viewpoints is revisited briefly. In Section III prototypical viewpoints and habits are discussed and embedded into the instrumentality discourse. In Section IV we provide some lines of thought and models that might improve communication between different users with different approaches and backgrounds. For this, the Shared Perceptual Space (SPS) appears to be a meaningful tool for a shared verbalization of different approaches, aiming at an understanding of this diversity and an enhancement of collective experience and expertise. Section V concludes the paper.
II. Usage of Loudspeaker Arrays
For this contribution we simplify the utilization of loudspeaker arrays into two models: (i) the reproducing model (the technical utopia of sound field synthesis with ideally infinite and practically sufficient spatial resolution) and (ii) the creating model (the artistic utopia of a projection system with infinite artistic freedom). This is a massive bipolarity that we have to deal with, since we are now able to set up massive multichannel loudspeaker setups, and the two models frequently clash in practice.
Technical Utopia
Let us start with the technical utopia that would be pursued for perfect or convincing (whatever the parameters are that specify this) sound field synthesis. Virtual acoustic reality is a special application for enhanced audio immersion, creating a simulation that is technically identical or perceptually convincing with respect to a spatial sound phenomenon under reference. In 3D audio engineering this is well known as either authentic (used e.g. for hearing research and computer-aided design of room acoustics) or plausible (used e.g. for entertainment and PR) holographic sound field reproduction. Both are approached with headphone-based binaural synthesis and with sound field synthesis deploying compact and distributed, audience-surrounding loudspeaker arrays [19–21].
Surrounding loudspeaker arrays aim at reproduction for an extended audience that should be located in what is typically designated as the sweet area, i.e. the area within which the holography can be approached. The influence of the reproduction space is often neglected here, and typically free-field conditions are assumed and desired. When using compact loudspeaker arrays for large-scale sound reinforcement (e.g. concert sound), room interaction is often not desired, while for other applications (e.g. measurement of room acoustics) excitation of room reflections is mandatory. Completely grating-lobe-free beamforming might be designated as authentic radiation/directivity synthesis. Unless local sound field synthesis approaches for comparably small sweet areas are used, authentic reproduction requires many more loudspeakers and smaller spacing between them to avoid spatial aliasing. Plausible reproduction requires knowledge of how to create and design perceptually convincing representations of real or artificial spatial sound phenomena within the sweet area, cf. [13, 22–28].
Perceptual quality assessments of different reproduction methods on different array types are typically approached with simple virtual source types and audio scenes, often searching for correlations of technical quality measures with perceptual ones. In the last decade many assessment vocabularies were designed with special attention to virtual auditory environments, cf. [27] for a review with a special focus on localization.
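To give a rough sense of the loudspeaker spacing requirement mentioned above, the following sketch estimates the frequency above which spatial aliasing is to be expected for a uniformly spaced array, using the common rule of thumb f_alias ≈ c/(2·Δx). The spacings used here are illustrative assumptions, not parameters of any particular system discussed in this paper.

```python
# Rule-of-thumb estimate of the spatial aliasing frequency for a uniformly
# spaced loudspeaker array: above roughly f_alias = c / (2 * dx), the
# synthesized sound field can no longer be controlled without grating
# lobes / aliasing artifacts. Illustrative values only.

def aliasing_frequency(spacing_m: float, c: float = 343.0) -> float:
    """Return the approximate spatial aliasing frequency in Hz."""
    return c / (2.0 * spacing_m)

if __name__ == "__main__":
    for dx in (0.05, 0.10, 0.17, 0.30):  # hypothetical spacings in meters
        print(f"spacing {dx:4.2f} m -> aliasing above ~{aliasing_frequency(dx):6.0f} Hz")
```

The sketch only makes the trade-off visible: halving the spacing doubles the frequency range of controlled reproduction, which is why authentic synthesis over a large sweet area demands so many loudspeakers.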
Example: Concert Sound between Technical and Artistic Utopia
Referring back to the question of technical and musical instrumentality, we might consider a recent sound reinforcement application for concerts in stadiums. The artist explicitly requests to deploy a public address (PA) cluster system from the 1980s in order to intentionally obtain the concert sound aesthetics it implies. Most definitely, this PA system represented the best possible technical instrument engineering of the 1980s. And certainly, nowadays line arrays are deployed for technically optimal large-scale sound reinforcement. Thus, the requested 80s PA is no longer the optimal choice, but could now rather be considered an artistic tool that shapes the sound in space differently than a modern line array does. Very obviously, both loudspeaker array types could be considered technical as well as musical instruments, depending on the way of thinking. Nothing new here, since intentionally taking up a purely technical instrument and subsequently 'abusing' it for other purposes is common practice with musical instruments, cf. [4, 5].
Artistic Utopia
The following paragraphs consider the artistic utopia of spatial sound generating systems with unlimited artistic freedom. Computer music often utilizes loudspeaker array designs and control methods that originate from the engineering efforts described above. This trend can even be traced back to early electro-acoustic music compositions and to how the utopia was approached then, cf. Varèse's Poème électronique and the Acousmonium. In the last decade, a trend towards the increased usage of loudspeaker arrays with ever more channels has been observable [6, 7], along with the common expectation that the engineering utopia of sound field synthesis can be easily and meaningfully utilized in the artistic composing process. In that way, many Ambisonics and WFS pieces have been composed in the belief of a loudspeaker-array-agnostic performance following the holographic approach. This can be considered as the utilization of loudspeaker arrays as technical instruments acting as a transparent apparatus.
Reproducing
Following [5], this holographic approach simultaneously categorizes the loudspeaker array as a reproducing musical instrument [5, p.22,38]. The concept of holography was initially discussed as the acoustic curtain by Steinberg and Snow in 1934 (cf. [5, p.22f]) and has repeatedly been approached with the technical tools at hand at the respective time. However, the issues and problems reported in the literature indicate that the approach is at least not as straightforward as an artist might expect. Object-based methods for such sound field synthesis applications are expected to solve certain issues.
In [29] an agenda for interdisciplinary artistic/technical (empirical) research on multichannel audio in electro-acoustic music is sketched, which is still of fundamental relevance. This is of special importance since the expectation, perception and verbalization of a spatial sound phenomenon might differ between artists and technicians. The issue has rarely been addressed in the literature so far, cf. [8–10, 30] for pioneering work on elaborating perceptual aspects, known from engineering psychoacoustics, for computer music composers. Furthermore, different interface tools for controlling musical and technical instruments that generate spatial sound phenomena might be required, cf. [29].
Supporting, Generating, Interaction
Besides the reproducing aspect, van Eck's further categories for defining loudspeaker (arrays) as musical instrument(s) are supporting, generating and interaction [5, Ch.2].
The supporting concept features loudspeakers as musical instruments that spatialize the sound of, e.g., a Theremin, a Trautonium, an Ondes Martenot or an electric guitar, i.e. there is a core technology that produces vibrations via a traditional operating concept (i.e. playing an instrument), while the involved loudspeakers exhibit essential sound-shaping features, cf. [5, p.38ff].
The generating concept features loudspeakers as musical instruments where vibrations originate from analog and digital electronics, with no traditional instrumental playing operation necessarily involved. This concept embodies the initial dreams of Varèse's sound-producing devices [5, p.45ff]. Computer music using (variable) directivity synthesis with compact loudspeaker arrays [28] and columns can easily be affiliated with this generating concept, cf. [14, 31]. This also might hold for distributed, audience-surrounding loudspeaker arrays used for sound field synthesis.
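As a minimal sketch of what order-limited directivity synthesis means in practice, the following computes an axisymmetric beampattern of a given spherical harmonic order; third order is roughly what an IKO-like compact array can realize over a usable frequency range. The uniform per-order weighting and the simple normalization are simplifying assumptions and do not represent the actual IKO processing chain.

```python
import numpy as np
from scipy.special import eval_legendre

def axisymmetric_beam(theta, order, weights=None):
    """Order-limited axisymmetric beampattern
    b(theta) = sum_n w_n * (2n+1) * P_n(cos theta), normalized on-axis.

    theta   : angle from the beam axis in radians (array-like)
    order   : maximum spherical harmonic order N
    weights : per-order weights w_n; defaults to 1 (simple "basic" weighting)
    """
    theta = np.asarray(theta, dtype=float)
    if weights is None:
        weights = np.ones(order + 1)
    b = np.zeros_like(theta)
    for n in range(order + 1):
        b += weights[n] * (2 * n + 1) * eval_legendre(n, np.cos(theta))
    return b / b.max()  # on-axis value is the maximum

if __name__ == "__main__":
    angles = np.linspace(0.0, np.pi, 7)
    for N in (1, 3):  # compare first- and third-order beams
        print(f"order {N}:", np.round(axisymmetric_beam(angles, N), 2))
```

Printing the two patterns shows how the higher order narrows the main beam, which is what allows a compact array to project sound selectively towards walls and reflectors rather than radiating omnidirectionally.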
The interaction concept features loudspeakers as musical instruments when an interaction between a performer and the instrument can be recognized [5, p.49ff]. Here the key idea is to spatialize electronic vibrations with individual control of the loudspeakers. Prominent examples of interaction with loudspeaker arrays that considers them as musical instruments are the Acousmonium, the BEAST, or the laptop orchestra equipped with small hemispherical loudspeaker arrays [5, p.131ff]. Typically, these approaches are referred to as live performances and often emphasize the usage of potentially different types of loudspeakers. This category, for instance, would allow for thinking of a compact loudspeaker array acting as a (non-human) performer and the room being the excited instrument.
In [5, Ch. 5] the author then argues that the four aspects of a musical instrument definition for loudspeakers and arrays should be considered coherently. Although there might be obvious examples of single aspects (given exemplarily above), the remaining aspects will apply as well. It is exactly this combination that makes "(...) loudspeakers unique in the field of music. To find compositional strategies specific to (...) loudspeakers, one should therefore not search for their potential to act like musical instruments, but for combinations of different approaches (...) resulting (...) in a piece that is using the unique features of these devices" [5, p.147]. This search inherently involves gaining experience through rehearsing, failure and success, all linked to instrumentality concepts such as learnability and playability, cf. [2].
III. Discourse on Musical Instruments and Musical Instrumentality
The question whether a loudspeaker can be a musical instrument is initially a question of utilization. As stated before, from the engineer's point of view it is a reproduction tool. But considering the history of acousmatic music, and here particularly the development from the Acousmonium [32] to the eventual BEAST [33, 34], we understand that artists have used and are using loudspeakers as instruments, building loudspeaker orchestras and developing techniques for the spatialization of acousmatic works. Mostly by projecting stereo, but also multichannel, tracks with the help of mixing consoles or advanced spatialization software and different kinds of loudspeaker designs, they utilize the acoustic features of the respective devices. By this they sculpt the sound and its spatial proliferation in a way that might not have been foreseen by the devices' original builders [32, 35, 36]. Recently there have been more publications about the 'abnormal' use of loudspeakers and the question of when we call something a musical instrument in the 21st century, cf. [5], already mentioned in Sec. II.
Loudspeaker arrays constitute "higher-order" reproduction tools in electro-acoustic engineering. As discussed above, the underlying approach, from initial utopia to mathematical treatment, subsequent research, iterative product development and psychoacoustic validation, is sound field reproduction. In the contemporary practice of electronic music, e.g. in universities' experimental studios, in clubs and in art installation spaces, loudspeaker arrays are not only used according to their developers' initial intentions. There are several examples of composers experiencing textures, gestures and proliferations of sound that were not intended at all [18, 37–40], and these phenomena raise new questions about reproducibility, inter-personal perception and aesthetics [11]. We experience that the idea of the neutral medium mostly ends where its existence as a musical instrument, or in a broader sense as an artistic medium, begins: "With the production of an, however generated or staged, 'Eigenklang' or its characteristic features as part of an aesthetic strategy." [41, p.19]2
Musical Instruments of the 21st Century—Dealing with and Producing Space?
Sarah Hardjowirogo indicates that musical instruments of the 21st century and those of earlier times differ in many respects, be it their appearance, their technical functionality, their playing technique, or their sounds [2, p.9]. Although she provides extraordinary research on the pivotal positions to date and discusses them in light of current performance situations, she does not consider the different ways these instruments might deal with or produce space. The central question for Hardjowirogo is "When (and why) is something a musical instrument—and when (and why) is it not?" [2, p.9]. To encircle the field of possible criteria she introduces the notion of instrumentality. The specificity of musical instruments as distinguished from other sound-producing devices is expressed by a concept of instrumentality, which "seems to be a gradeable and dynamic concept that is not tied to an object per se but is rather a matter of cultural negotiation. More precisely, it denotes the potential for things to be used as musical instruments or, yet differently, their instrumental potential as such." [2, p.17] Consequently, an object is not per se a musical instrument (ontological definition) but it becomes a musical instrument by being used as such (utilitarian definition) [2, p.10].
Therefore it seems necessary to turn the focus away from the instrument as a material object and to look for its immaterial features. So we have to ask what it means to use something as a musical instrument. What are the actions typically associated with musical instruments? What, other than that, constitutes a musical instrument as such? And we shall add the question: how or when does the contemporary audience understand these actions as instrumental in a musical sense? To date, the role of audience perception has not received much attention in the study of contemporary instrumental performance, cf. [2, 42].
Croft [17] states that the perception of instrumentality is directly connected with the perception of liveness. Ever since Philip Auslander's 1999 book on liveness [43], the term has become increasingly popular and still inspires a significant amount of work in the field. Lately, there have been a number of studies trying to capture the perceived liveness of digital musical instruments, e.g. [44–46].

2 Original German: "Die Rede vom neutralen Medium der Wiedergabe endet nach aller Erfahrung dort, wo seine Existenz als Musikinstrument oder im weiteren Sinne als künstlerisches Medium beginnt: mit der Fokussierung eines wie auch immer generierten und inszenierten Eigenklangs oder seiner dispositiven Eigenschaften als Teil einer ästhetischen Strategie." [41, p.19]
Thus, we can assume that instrumentality must not be conceived as a constant, but rather as a gradeable, dynamic term, which means that an object may be more or less instrumental according to its expression of the characteristics associated with instrumentality.
Hardjowirogo thoroughly lists criteria that repeatedly appear in literature and practice and could thus play a major role for the subsumption of an object as a musical instrument (with instrumentality). However, it is not necessary for an object to fulfill all the criteria in order to function as a musical instrument in the end. We discuss these criteria sequentially in the following.
1. Sound production
The musical-instrument-defining aspects reproducing and generating of [5] can be linked here. Sound production is a most obvious criterion, but one has to keep in mind that in digital music the instrument's sound is no longer an immediate result of the sonic characteristics of a material object, as is the case with traditional instruments. Computers, software and interfaces shape the sounding result, but only with the loudspeaker as the producer of sound fields can spatialized music be perceived as such.
Moreover, using loudspeakers in our daily practice, we know that every speaker has its own sound, shaping the output of the signal that is turned into sound waves. Even in the Ambisonics or WFS domain we notice severe differences in the sound product between concert venues and studios using different loudspeaker brands and devices. Especially adapting a composition from a hemispherical dome to an IKO shows how immensely different the 'same sounds' sound, because of different decoding, spatialization, and therefore filtering and spatial propagation. The same piece sounds different as we use different loudspeaker instruments for the production of its sounds.
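To illustrate why the 'same sounds' cannot simply be expected to translate between systems, the following sketch encodes one source direction into first-order Ambisonics and renders it with a naive sampling decoder onto two hypothetical horizontal layouts; the per-loudspeaker gains, and hence the interplay with loudspeakers and room, already differ. The conventions are deliberately simplified (no channel normalization or max-rE weighting) and this is not the decoding used by the dome or IKO systems referenced in the text.

```python
import numpy as np

def foa_encode(azimuth, elevation):
    """Encode a plane-wave direction into a simplified first-order Ambisonics vector [W, X, Y, Z]."""
    return np.array([
        1.0,                                  # W (omnidirectional component, scaling simplified)
        np.cos(azimuth) * np.cos(elevation),  # X
        np.sin(azimuth) * np.cos(elevation),  # Y
        np.sin(elevation),                    # Z
    ])

def sampling_decode(b_vector, speaker_dirs):
    """Naive 'sampling' decoder: project the B-format vector onto each loudspeaker direction."""
    return np.array([b_vector @ foa_encode(az, el) for az, el in speaker_dirs])

if __name__ == "__main__":
    src = foa_encode(np.radians(30), 0.0)  # hypothetical source at 30 degrees azimuth
    ring4 = [(np.radians(a), 0.0) for a in (0, 90, 180, 270)]
    ring8 = [(np.radians(a), 0.0) for a in range(0, 360, 45)]
    print("4-speaker gains:", np.round(sampling_decode(src, ring4), 2))
    print("8-speaker gains:", np.round(sampling_decode(src, ring8), 2))
```

Even in this toy case, the same encoded scene distributes energy differently across the two layouts, before any loudspeaker coloration or room interaction is taken into account.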
2. Intention/Purpose
Typically, we think of instruments as "discrete, self-subsisting material objects, intentionally crafted for the purpose of making music by performing musicians" [16, p.38].
Discussing what role the aspect of intention or purpose plays for instrumentality, we find numerous examples of instruments that were not originally designed as such but still involve some kind of human intention, namely the intention to use the object as a musical instrument (e.g. cow bells, saws, sirens) [47]. Especially in electronic music we often find that sound-producing or sound-altering devices such as sine wave generators, tape machines, record players and laptops were originally not designed for music [4], but found their way into the apparatus of electronic music concerts, laptop orchestra performances and installation art.
Intention and purpose are quite decisive features for the construction of instrumentality, in that playing a musical instrument always requires both the intention to do so and the purposeful use of something (that may also have a different original purpose) as a musical instrument [2].
As mentioned above, we can observe an ongoing cultural practice of using loudspeaker arrays with the intention of creating three-dimensional gestural [11, 38, 40] and sculptural [14, 32, 48] sound phenomena that perform space as such. The musical-instrument-defining aspects generating and supporting of [5] might be linked here.
3. Virtuosity/Learnability
Both learnability and virtuosity involve the opportunity to improve one's playing skills through exercise. In a broader sense this means that the higher the impact of practising an instrument, the higher its degree of instrumentality [2]. But do we, as an audience, have to witness playing skills as manual actions, or can we experience how much someone has learned within the practicing process, say in the interdependent web of timbre, the placement of sounds and their movements in space? Auslander states that, at least in professional instrumental performance, playing an instrument should appear more difficult than pressing a play button [43]. Consequently, a fixed-media concert played back over a loudspeaker array is not using the loudspeakers as an instrument, because no human physical action can be perceived over a period of time.
Taking the degree of virtuosity to another level, [49, p.307] declares that "[v]irtuosity also means the possibility to bypass some kind of impossibility [...], to go beyond reality, to cheat triviality."
With loudspeaker arrays we are able to form flexible spatial objects not merely as the output from a chain of membranes; we are able to compose spatial sound entities that are almost impossible to create with any other instrument. Experiencing the IKO in particular, first-time listeners describe sculptural sound phenomena that they had not been able to imagine before, and even specialists in the field experience spatial events that are hardly describable with common terminology [30, 42, 50–52]. Hence we could say that by creating musical sound objects in time and space today, we bypass the former impossibility of moving 'sound-masses', 'planes', 'discernible beams of sound' and 'zones of intensities', as Edgard Varèse was formulating his utopian music in the 1930s [53],3 long before the actual technical means were at hand.
Consequently, it is not the pressing of a play button but the effort of a concentrated mind, bodily experience, intuition and practice that are set to unfold and make spatial sounds perform the piece. Virtuosity has then shifted from bodily effort to the knowledge and spatial practice of a deepened interdisciplinary scientist-artist collaboration, as an act of mutual translation of different languages, technical abilities and socializations before the actual performance, digitally stored to be later performed within the 'virtuous' interplay of loudspeaker arrays projecting sound and the conditions of the listening space.

3 "Today, with the technical means that exist and are easily adaptable, the differentiation of the various masses and different planes as well as these beams of sound could be made discernible to the listener by means of certain acoustical arrangements [...] [permitting] the delimitation of what I call zones of intensities. [...] these zones would be differentiated by various timbres or colors and different loudnesses. [They] would appear [...] in different perspectives for our perception [...] [they] would be felt as isolated, and the hitherto unobtainable non-blending [...] would become possible."
Especially when it comes to digital instruments, the learning process can be quite different from that known from traditional instruments, as learning procedures and playing techniques are not yet standardized and often must be developed first. Furthermore, the visuality of sound production, for example on screens, is not able to represent the sonic output, especially when it comes to 3D sound phenomena, and often even fails to match perception [54]. So virtuosity in electronic music, if necessary for its instrumentality at all, can be found in the way in which experience in the use of loudspeaker arrays can be experienced by the audience in a concert situation, especially when it comes to difficult spatial differentiations and definitions.
4. Playability/Control/Interaction
Both playing and controlling a common instrument involve immediacy regarding the connection between the instrumentalist's actions and the instrument's sound, but they differ in the degree of agency they ascribe to the instrument. Traditionally we would say that interaction can be understood as a concept of instrumental play that ascribes as much agency to the instrument as it does to the performer, cf. [2, p.19]. But today there is another connection of actions at work, namely the interaction between artists and engineers. These connections may function as a loose collaboration based on mutual information or as a close development cooperation. In any case and to any degree, they are always present in the community, often shifting the weight of agency between the two parties. Especially in the field of interface design, levels of interaction are still under debate [55]. Loudspeaker arrays in a concert situation, without a human performer present, of course do not feature playability and need no physical control. In computer music the role of the performer as the one in control of an object has been questioned from the very early stages until today. In this context, the idea of musical instruments having their own agency [56] has become a popular and much-discussed topic, in artistic programs [57, 58] as well as in theoretical discourses [59, 60]. But there is hardly any research on the audience's side regarding how much of this is actually important for understanding an object as a musical instrument in a performance situation. Using loudspeaker arrays, it is the interaction with the spatial attributes of the listening space that gives the sound production a unique form that can be experienced by the audience on very subtle levels.
5. "Immaterial Features"/Cultural Embeddedness
Cance et al. especially refer to the importance for a new instrument of taking up existing aesthetic practices [61, p.21]. We showed that the forming of three-dimensional sound objects has become a desideratum of composition in spatialized computer music within recent years. Over the past 25 years, so-called virtual reality has become a subject of much debate across all disciplines, from science to the arts and new media, and appears to be a field of social and political concern [62, 63]. Against this background, the question is again raised what we can actually experience by creating technical environments that feature instrumentality beyond reproduction.
"Not only do electro-acoustic composers have the freedom to design sounds that specifically support spatial effects, but they can also explore ways of creating sounds that have no obvious analog in the physical world." [8]
Loudspeaker arrays fit exactly into this debate, being among the first musical instruments that can create sound formations that only exist in their artificial environment and may or may not refer to forms and constellations outside their design.
[64, p.274] has pointed out that the value and meaning [of musical instruments is] "negotiated and contested in a variety of cultural arenas" and that, apart from studying its physical functionality and its location in the organological system, an instrument's identity cannot be fully understood without studying the cultural contexts in which it is embedded. In electronic music, the idea of the instrument and of its instrumentality has changed drastically within the past 70 years [4, 55]. Moreover, we can say that electronic music reflects the way our lives have become more and more mediatized and mediated by technical environments [43, 65], by using everyday technical objects not intentionally made for the production of music, like record players, tape machines or laptops, as instruments. Thus the focus lies more on the unique sounds that can be produced than on the object's fit within the tradition of instrument-making, its playability or controllability, or the common expectation that one has to train and rehearse these instruments, like a violin or the piano, for years. Mastering an instrument today can mean defining the properties of an object for its use in a musical digital environment and contextualizing this object as a musical instrument, as well as staging the instrument in a meaningful way for an audience.
6. Audience Perception/Liveness
Paul Sanden proposes a theoretical framework for understanding how the concept of liveness is active in the creation of music's meaning, especially, although not exclusively, at an aesthetic level [66]. The term still carries with it a defining connection to unmediated musical performance, along with the aesthetic and ideological values associated with that performance [2, p.4]. For Sanden, liveness is not a fixed ontological state that exists in the absence of electronic mediation, but rather a dynamically performed assertion of human presence within a technological network of communication.
In most music projected with loudspeaker arrays, human presence is not part of the performance situation; but especially in spatialized electronic music, gesture plays a vital role [67]. Denis Smalley claims in his groundbreaking and widely acclaimed article on spectromorphology: "When we hear spectromorphologies we detect the humanity behind them by deducing gestural activity, referring back through gesture to proprioceptive and psychological experience in general" [37, p.111].
Thus we could conclude that, although there is no directly perceivable activity caused by a human being, we may experience spatialized electro-acoustic music as a dynamically performed assertion of human activity. Unfortunately there has not been much research in this field of perception, although this quote by a highly distinguished composer and musicologist in the field has made its way into science and teaching [40, 68], underlining that experience from compositional and performative practice gives ground for such conclusions.
For Dugal McKinnon, loudspeakers in acousmatic music are typically used to create vibrant aesthetic experiences in the absence of live performers and any significant visual element. Such acousmatic contexts, while not conventionally live, use sonic immersion, the spatial articulation of sound, and the experience of sound as invisible matter to create a unique form of liveness. He detects a historical shift and therefore a radical change of the idea of musical performance: "Loudspeaker music shifts the centre of gravity away from the performer and towards the listener, reconstituting liveness as listener-determined." [18, p.269] For McKinnon, the liveness of loudspeaker music, particularly in immersive sonic environments, emerges in the interaction of sound, space and the somatic, affective, and interpretative activity of the listener. In his opinion this can happen only in the absence of performer and performance, and in the presence of the loudspeaker. "Such liveness is both singular and radical, particularly within a contemporary cultural context dominated by multimedia, whether spectacular or mundane. Yet the loudspeaker is always a broken tool, its visual-physical presence undermining the very audial-immaterial experiences it creates, even as this deficient object magically propagates qualitative abundance, ontological ambiguity and somatic presence." [18, p.270]
Following the arguments of both Auslander and Croft [17, 43], instrumentality, in the sense of a category that legitimates instrumental performance, is highly dependent on audience perception. But to date, the role of audience perception has not received much attention in the study of contemporary instrumental performance.
7. Interaction with and Production of Space
Interestingly, neither Hardjowirogo nor any of the authors above considers the interaction of an object with its environment as a criterion for instrumentality. Only David Burrows states that "Instruments raise questions [...] about the control of the spaces around and between us" [69]. Every musical instrument changes the experience of space through its very own wave propagation pattern. For some space theorists of our time, e.g. [70–72], space is not static and depends strongly on the perception of the individual. Movement, the alteration of spatial constellations and the staging of entities create spaces (and sometimes more than one) in relation to a given place. We could say that loudspeaker arrays actually produce their own spaces within acoustic environments. In electro-acoustic music, the acoustic experience has often been a reference point, but the technology of electronic reproduction expands the scope and complexity of spatiality in a radical way. Even though the apparatus may be located within a physical space, and even though our spatial hearing has developed within a physical world, electronic reproduction creates the potential for an art of spatiality [8]. Within loudspeaker music, space became a parameter of music itself. Thus space is part of the musical production process [14, 32, 65, 73, 74], though the artistic strategies are still vague and not very well documented. In 1991 Marco Stroppa wrote: "Even supposing that all of the scientific and technological difficulties are resolved, it remains unclear as to how we can organize space in its diverse meanings as a musical material" [11]. And Barrett states: "We are far from a complete understanding of spatial ontology. Little has been theorised on an aesthetical level in terms of three-dimensional sound projected over loudspeakers and removed from visual causation, nor have ideas which in the stereo field are restricted to concept been significantly explored in the reality of 3D" [75]. Though the concept of spatialization, or the spatialization of sound, is not used unambiguously, it is generally assumed that we can distinguish between two basic traditions: the loudspeakers and the concert hall can be understood as the environment of the composition, or the loudspeakers and the surrounding space become the vehicle for creating certain spatial sound phenomena [50]. In any case, loudspeaker arrays can be used to create spatial sound phenomena and sounding spaces within the concert hall and therefore interact with their environment, as most musical instruments do.
IV. Discussion
Working with loudspeaker arrays, composers of electro-acoustic music are researching how they can still detect potential for aesthetic experiences and make it useful for the sonic arts in the present. It seems unavoidable that this question causes them to fall back on their senses, using their ears and the intellectual reflection of what is experienced and experienceable on location with the instruments at hand. As shown, a specific cultural context arises in the perception situation of the acousmatic paradigm. What is conceptualized and experienced in the studio or laboratory beforehand is not played back in the performance in the sense of a discretization of the former. Rather, the present is made differently experienceable. But still much has to be done, cf. [1].
"The more that we understand about the complex relationship between spatial sound systems and the listener's spatial thinking, the better we will be able to harness the capacities of such systems for artistic purposes." [54, p.229] Furthermore, Kendall points out that conceptual terminology can be out of alignment with the technical capacities of spatialisation systems and with the actual experience of spatial sound, and can therefore lead to under-utilization in electroacoustic composition [76].
Shared Perceptual Space
Therefore we propose to work with the concept of the Shared Perceptual Space (SPS) [14, 42], a research topic within the artistic research project "Orchestrating Space by Icosahedral Loudspeaker" (OSIL, PEEK AR 328-G21, funded by the Austrian Science Fund). Incorporating artistic experience and psychoacoustic research, this project conducts listening experiments that provide evidence for a common, inter-subjective perception of spatial sonic phenomena created by the IKO. The experiments were designed on the basis of a hierarchical model of spatio-sonic phenomena of increasing complexity, ranging from single static sonic objects to combinations of multiple, partly moving objects. The results open up new compositional perspectives in spatial computer music, cf. [30, 42].
Recent international studies reveal that we are not talking about a sideshow here: "Spatialization, the synthesis of spaces and spatial properties of sounds for a listener, is a growing field of interest for researchers, sound engineers, composers, and audiophiles. Due to broad and diverse viewpoints and requirements, the understanding and application of spatial sound is developing in many ways. To benefit from varying viewpoints, individuals involved in artistic practice and those involved in theoretical or applied research need to engage in regular dialogue" [7]. If we want to make more artistic use of loudspeaker arrays, we need to establish a close exchange of knowledge, experiences and concepts between engineers, composers and audiences, in order to understand more of the variety of possible experiences and of the practical and conceptual limitations of our technical environments, as [29] was already calling for a decade ago.
"There are at least three reasons why the spatial potential of electroacoustic music is not always realized: 1) misconceptions about the technical capacities of spatialization systems, 2) misconceptions about the nature of spatial perception, especially in the context of such systems, and 3) a lack of creative engagement, possibly due to the first two issues" [8]. It is therefore a matter of finding parameters for an inter-subjective space for the perception of three-dimensional sound phenomena. For the composer, the question arises to what extent a communicable composition of plastic sound objects is conceptually, theoretically and, at all, practically possible when faced with changing technical conditions, architectural space situations, different room descriptions, and perceptions. Without 'virtuous' interdisciplinary work in this field, this fundamental question will remain unanswered and the potential of these systems will remain at its 'earliest' stages. Moreover, we need a vivid discussion, in light of a mediatized world and of virtual reality, about which concept of 'reality and authenticity' we actually refer to while working with complex loudspeaker arrays: "Reality is as much about aesthetic creation as it is about any other effect when we are talking about media." [77, p.241]
Over the last 15 years the technical equipment of composers has improved both in quality and quantity, with sound spatialization based on five or more loudspeaker channels being increasingly preferred over traditional two-channel stereo systems [6]. It seems that within the last 80 years the technical possibilities have come quite close to the utopian ideas that artists like Edgard Varèse expressed when he made his famous statement in 1936(!): "Today, with the technical means that exist [...]". Moreover, loudspeaker arrays are no longer part of a subculture, nor bound exclusively to academic funding. Even loudspeaker instruments like the IKO by IEM and sonible have found their way into production and can be purchased for entertainment, so that we can say "[...] the spatialization equipment and technology have become readily available" [78, p.17].
However, to date, with the different existing formats, projection techniques and devices, object- and channel-based reproduction, software tools, and spatial concepts explained and discussed, it is virtually unresolved what the different listening groups (engineers, musicians, audiences) hear where in the created space, how they experience plastic sound objects and how they would describe them for themselves. Furthermore, we have learned that musical instruments are complex, culturally freighted artifacts allowing for particular ways of interaction that result in particular sounds, cf. [2, p.22].
V. Conclusion
Now, back once again to the question: are loudspeaker arrays musical instruments? Of course they are.
For the sheer playback or reproduction approach, a loudspeaker array is a tool, maybe an advanced one, fulfilling the simple purpose of reproducing what was formerly developed, composed and recorded somewhere else. But the different motivations in the approach towards spatiality and space in music and sonic art show that the mindset has changed. If we take art as a highly reliable sensor for current streams of societal issues and trends [79], we can assume that we are at the brink of what will be aesthetically daily business in our lives, be it in concert halls, shopping malls, or augmented and virtual realities.
For the present and contemporary utilization of these entities, it seems crucial to be able to understand, and therefore use, loudspeaker arrays as musical instruments if composers of spatialized electronic music want to make artistic use of the contingencies of such systems. Therefore we need to, and can, approach these objects on the multiple levels of instrumentality described above. That does not mean losing traditional aspects of instrumentality; on the contrary, we have to examine virtuosity, interaction or liveness in light of a more and more mediated environment in order to find meaningful categories for orientation.
Furthermore, we could enter a sphere of spatial composition that works in a media-specific way, actually building spaces according to ongoing discourses and not merely filling them. That would actually redeem the claim of sound sculpting and 3D objects. For engineers working in the related fields it is fundamental to anticipate artistic, perception-based research, be it for art, entertainment, or marketing and PR as cultural practices in everyday life.
"In the cases both of music and of science, detachment involved the use of mechanical aids: scientific instruments helped discover a world, musical instruments to build one" [69].
Acknowledgment
The authors would like to thank Elena Ungeheuer and
Franz Zotter for valuable comments.
References
[1] Beyer, R. (1928): "Das Problem der 'kommenden Musik'." In: Die Musik, 20(12):861–866.
[2] Hardjowirogo, S.-I. (2017): Musical Instruments in the 21st Century - Identities, Configurations, Practices, chap. Instrumentality. On the Construction of Instrumental Identity. Singapore: Springer Nature.
[3] von Hornbostel, E.M.; Sachs, C. (1914): "Systematik der Musikinstrumente. Ein Versuch." In: Zeitschrift für Ethnologie, 46(4/5):553–590.
[4] Ungeheuer, E. (2008): Zauberhafte Klangmaschinen - Von der Sprechmaschine bis zur Soundkarte, chap. Imitative Instrumente und innovative Maschinen? - Musikästhetische Orientierungen der elektrischen Klangerzeugung, 45–58. Mainz: Schott.
[5] van Eck, C. (2017): Between Air and Electricity - Microphones and Loudspeakers as Musical Instruments. New York: Bloomsbury.
[6] Otondo, F. (2008): "Contemporary trends in the use of space in electroacoustic music." In: Organised Sound, 13(1):77–81.
[7] Peters, N.; Marentakis, G.; McAdams, S. (2011): "Current Technologies and Compositional Practices for Spatialization: A Qualitative and Quantitative Analysis." In: Comput. Music J., 35(1):10–27.
[8] Kendall, G.S.; Ardila, M. (2008): Computer Music Modeling and Retrieval. Sense of Sounds, chap. The Artistic Play of Spatial Organization: Spatial Attributes, Scene Analysis and Auditory Spatial Schemata, Lecture Notes, 125–138. Springer-Verlag.
[9] Kendall, G.S. (2010): "Spatial Perception and Cognition in Multichannel Audio for Electroacoustic Music." In: Organised Sound, 15(3):228–238.
[10] Kendall, G.S.; Cabrera, A. (2011): "Why Things Don't Work: What You Need To Know About Spatial Audio." In: Proc. of Intl. Comp. Music Conf., 37–40, Huddersfield.
[11] Stroppa, M. (1991): "Die musikalische Beherrschung des Raums." In: Musik in Gesellschaft anderer Künste.
[12] Bates, E. (2009): The Composition and Performance of Spatial Music. Ph.D. thesis, Department of Music & Department of Electronic and Electrical Engineering, Trinity College Dublin.
[13] Peters, N. (2010): Sweet [re]production: Developing sound spatialization tools for musical applications with emphasis on sweet spot and off-center perception. Ph.D. thesis, McGill University, Montreal.
[14] Sharma, G.K. (2016): Komponieren mit skulpturalen Klangphänomenen in der Computermusik. Ph.D. thesis, Institute of Electronic Music and Acoustics, University of Music and Performing Arts, Graz.
[15] Belet, B. (2003): "Live performance interaction for humans and machines in the early twenty-first century: one composer's aesthetics for composition and performance practice." In: Organised Sound, 8(3):305–312.
[16] Alperson, P. (2008): "The Instrumentality of Music." In: J. Aesthet. Art Crit., 66(1):37–51.
[17] Croft, J. (2007): "Theses on liveness." In: Organised Sound, 12(1):59–66.
[18] McKinnon, D. (2016): Experiencing Liveness in Contemporary Performance - Interdisciplinary Perspectives, chap. Broken Magic - The Liveness of Loudspeakers, 267–271. New York: Routledge.
[19] Zotter, F. (2009): Analysis and Synthesis of Sound-Radiation with Spherical Arrays. Ph.D. thesis, University of Music and Performing Arts Graz.
[20] Spors, S.; Wierstorf, H.; Raake, A.; Melchior, F.; Frank, M.; Zotter, F. (2013): "Spatial Sound With Loudspeakers and Its Perception: A Review of the Current State." In: Proceedings of the IEEE, 101(9):1920–1938.
[21] Zhang, W.; Samarasinghe, P.N.; Chen, H.; Abhayapala, T.D. (2017): "Surround by Sound: A Review of Spatial Audio Recording and Reproduction." In: Appl. Sci., 7(5):532–551.
[22] Frank, M. (2014): "How to make Ambisonics sound good." In: Proc. of 7th Forum Acusticum, Krakow.
[23] Stitt, P. (2015): Ambisonics and Higher-Order Ambisonics for Off-Centre Listeners: Evaluation of Perceived and Predicted Image Direction. Ph.D. thesis, Queen's University Belfast.
[24] Wierstorf, H. (2014): Perceptual assessment of sound field synthesis. Ph.D. thesis, TU Berlin.
[25] Wierstorf, H.; Raake, A.; Spors, S. (2017): "Assessing localization accuracy in sound field synthesis." In: J. Acoust. Soc. Am., vol. 141, 1111–1119.
[26] Frank, M.; Zotter, F. (2017): "Exploring the perceptual sweet area in Ambisonics." In: Proc. of 142nd Aud. Eng. Soc. Conv., Berlin, #9727.
[27] Mason, R. (2017): "How important is accurate localisation in reproduced sound." In: Proc. of 142nd Aud. Eng. Soc. Conv., Berlin, #9759.
[28] Wendt, F.; Zotter, F.; Frank, M.; Höldrich, R. (2017): "Auditory Distance Control Using a Variable-Directivity Loudspeaker." In: Appl. Sci., 7(6):666–682.
[29] Leider, C. (2007): "Multichannel Audio in Electroacoustic Music: An Aesthetic and Technical Research Agenda." In: IEEE Intl. Conf. on Multimedia and Expo (ICME), 1890–1893, Beijing.
[30] Wendt, F.; Sharma, G.K.; Frank, M.; Zotter, F.; Höldrich, R. (2017): "Perception of Spatial Sound Phenomena Created by the Icosahedral Loudspeaker." In: Comput. Music J., 41(1):76–88.
[31] Misdariis, N.; Nicolas, F.; Warusfel, O.; Caussé, R. (2001): "Radiation Control on Multi-Loudspeaker Device: La Timée." In: Proc. of International Computer Music Conference, Havana.
[32] Bayle, F. (2007): "Space, and more." In: Organised Sound, 12(3):241–249.
[33] Harrison, J. (1998): "Sound, space, sculpture: some thoughts on the 'what', 'how' and 'why' of sound diffusion." In: Organised Sound, 3:117–127.
[34] Wilson, S.; Harrison, J. (2010): "Rethinking the BEAST: Recent developments in multichannel composition at Birmingham ElectroAcoustic Sound Theatre." In: Organised Sound, 15(3):239–250.
[35] Arroyo, R.G. (2012): "Day&night, skying, and topoi." In: On the Choreography of Sound, PEEK Concert.
[36] Tittel, C. (2009): "Sound art as sonification, and the artistic treatment of features in our surroundings." In: Organised Sound, 14(1):57–64.
[37] Smalley, D. (1997): "Spectromorphology: explaining sound-shapes." In: Organised Sound, 2:107–126.
[38] Smalley, D. (2007): "Space-form and the acousmatic image." In: Organised Sound, 12:35–38.
[39] González-Arroyo, R. (2012): Raum: Konzepte in den Künsten, Kultur- und Naturwissenschaften, chap. Towards a plastic sound object. Nomos Verlag, Baden-Baden.
[40] Nyström, E. (2013): Topology of spatial texture in the acoustic medium. Ph.D. thesis, City University London.
[41] Großmann, R. (2013): "Lautsprecher. Medienaufführungen - Vom kulturellen Wandel eines Übertragungsmediums." In: positionen. Texte zur aktuellen Musik, 95 (Oberflächen):18–20.
[42] Sharma, G.K.; Frank, M.; Zotter, F. (2015): "Towards understanding and verbalizing spatial sound phenomena in electronic music." In: inSONIC2015, Aesthetics of Spatial Audio in Sound, Music and Sound Art.
[43] Auslander, P. (1999): Liveness: Performance in a Mediatized Culture. Psychology Press.
[44] Marshall, M.T.; Fraser, M.; Bennett, P.; Subramaniam, S. (2012): "Emotional response as a measure of liveness in new musical instrument performance." In: Proceedings of the Conference on Human Factors in Computing Systems (CHI).
[45] Bown, O.; Bell, R.; Parkinson, A. (2014): "Examining the perception of liveness and activity in laptop music: Listeners' inference about what the performer is doing from the audio alone." In: Proceedings of the New Interfaces for Musical Expression (NIME).
[46] Berthaut, F.; Coyle, D.; Moore, J.; Limerick, H. (2015): "Liveness through the lens of agency and causality." In: Proceedings of the New Interfaces for Musical Expression (NIME).
[47] Jackson, M.W. (2012): From Scientific Instruments to Musical Instruments: The Tuning Fork, The Metronome, And The Siren. Oxford University Press.
[48] Truax, B. (1999): "Composition and diffusion: space in sound in space." In: Organised Sound, 3:141–146.
[49] Monteiro, F. (2007): "Virtuosity: Some (quasi phenomenological) thoughts." In: International Symposium on Performance Science, 315–320.
[50] Kendall, G.S. (2010): "Meaning in electroacoustic music and the everyday mind." In: Organised Sound, 15(1):63–74.
[51] Landy, L. (2007): Understanding the Art of Sound Organization. The MIT Press.
[52] Sharma, G.K.; Zotter, F.; Frank, M. (2014): "Orchestrating wall reflections in space by icosahedral loudspeaker: findings from first artistic research exploration." In: ICMC-SMC, Athens.
[53] Varèse, E. (2004 (1936)): Audio Culture: Readings in Modern Music, chap. The Liberation of Sound, 17–21. Continuum.
[54] Kendall, G.S. (2010): "Spatial perception and cognition in multichannel audio for electroacoustic music." In: Organised Sound, 15(3):228–238.
[55] Miranda, E.R.; Wanderley, M.M. (2006): New Digital Musical Instruments: Control and Interaction Beyond the Keyboard. Middleton, WI: A-R Editions.
[56] Bates, E. (2012): "The social life of musical instruments." In: Ethnomusicology, 56(3):363–395.
[57] Jenkinson, T. (2004): "Collaborating with machines." In: Flux Magazine, 3.
[58] de Campo, A. (2014): "Lose control, gain influence—concepts for metacontrol." In: A. Georgaki; G.E. Kouroupetroglou, eds., Proceedings of ICMC2014, 217–222, Athens: National and Kapodistrian University of Athens.
[59] Kim, J.H. (2007): Toward embodied musical machines. Machines as agency. Artistic perspectives, 18–35. Transcript.
[60] Magnusson, T. (2009): "Of epistemic tools: Musical instruments as cognitive extensions." In: Organised Sound, 14(2):168–176.
[61] Cance, C.; Genevois, H.; Dubois, D. (2013): La Musique et ses instruments, chap. What is instrumentality in new digital devices? A contribution from cognitive linguistics & psychology, 283–297. Paris: Delatour.
[62] Steuer, J. (1992): "Defining virtual reality: Dimensions determining telepresence." In: Journal of Communication, 42(4):73–93.
[63] Calleja, G. (2007): Digital Games as Designed Experience: Reframing the Concept of Immersion. Ph.D. thesis, Victoria University of Wellington.
[64] Dawe, K. (2003): The Cultural Study of Music: A Critical Introduction, chap. The cultural study of musical instruments, 274–283. Routledge.
[65] Emmerson, S. (1998): "Aural landscape: musical space." In: Organised Sound, 3:135–140.
[66] Sanden, P. (2012): Liveness in Modern Music: Musicians, Technology, and the Perception of Performance. Routledge.
[67] Emerson, G.; Egermann, H. (2017): "Gesture-sound causality from the audience's perspective: investigating the influence of mapping perceptibility on the aesthetic perception of new digital musical instruments." In: Psychology of Aesthetics, Creativity, and the Arts.
[68] Godøy, R. (2006): "Gestural-sonorous objects: embodied extensions of Schaeffer's conceptual apparatus." In: Organised Sound, 11(2):149–157.
[69] Burrows, D. (1987): "Instrumentalities." In: The Journal of Musicology, 5(1):117–125.
[70] Lefebvre, H. (2006): Raumtheorie - Grundlagentexte aus Philosophie und Kulturwissenschaften, chap. Die Produktion des Raums. Suhrkamp.
[71] Certeau, M. de (1980): L'Invention du Quotidien (German edition: Kunst des Handelns). Merve Verlag.
[72] Löw, M. (2000): Raumsoziologie. Suhrkamp.
[73] Trochimczyk, M. (2011): Space and Spatialization in Contemporary Music: History and Analysis, Ideas and Implementations. Moonrise Press.
[74] Born, G. (2013): Music, Sound and Space: Transformations of Public and Private Experience. Cambridge University Press.
[75] Barrett, N. (2010): "Ambisonics and acousmatic space: a composer's framework for investigating spatial ontology." In: Proceedings of the 6th EMS Conference.
[76] Kendall, G.S. (2008): "What is an Event?" In: Proceedings of EMS08, Paris.
[77] Sterne, J. (2003): The Audible Past - Cultural Origins of Sound Reproduction. Duke University Press.
[78] Otondo, F. (2007): "Creating spaces." In: Computer Music Journal, 31(2):10–19.
[79] Attali, J. (1985): Noise - The Political Economy of Music. University of Minnesota Press.