Application of Wave Field Synthesis in electronic music and sound installations
M.A.J. Baalman, M.Sc.
Electronic Studio, Communication Sciences, University of Technology, Berlin, Germany
Abstract
Wave Field Synthesis offers new possibilities for composers of electronic music and sound artists to add the dimension of space to a composition. Unlike most other spatialisation techniques, Wave Field Synthesis is suitable for concert situations, where the listening area needs to be large. Using the software program "WONDER", developed at the TU Berlin, compositions can be made, or setups can be created for realtime control from other programs using the Open Sound Control protocol. Some pieces that were created using the software are described to illustrate the use of the program.
Introduction
Wave Field Synthesis is a novel technique for sound spatialisation that overcomes the main shortcoming of other spatialisation techniques: it provides a large listening area without a "sweet spot". This paper describes the software interface WONDER, which was created for composers and sound artists who want to use the Wave Field Synthesis technique. A short, comprehensive explanation of the technique is given, together with a description of the system used in the project at the TU Berlin and of the interface software, followed by a description of the possibilities that were used by composers.
Wave Field Synthesis
The concept of Wave Field Synthesis (WFS) is based on a principle formulated in the 17th century by the Dutch physicist Huygens (1690) concerning the propagation of waves. He stated that given a wavefront, the next wavefront can be synthesized by imagining an infinite number of small sources on the current wavefront, whose waves together form the next wavefront (figure 1).
Based on this principle, Berkhout (1988) introduced the Wave Field Synthesis principle into acoustics. By using a discrete, linear array of loudspeakers (figure 2), one can synthesize correct wavefronts in the horizontal plane (Berkhout, De Vries and Vogel 1993). For a complete mathematical treatment, the reader is referred to Berkhout (1988, 1993) and various other papers and theses from the Sound Control Group of the TU Delft.
An interesting feature is that it is also possible
to synthesize a sound source in front of the speakers
(Jansen 1997), something which is not possible
with other techniques.
Figure 1. The Huygens principle
Figure 2. The Wave Field Synthesis principle
Jansen (1997) derived mathematical formulae for synthesising moving sound sources. He took the Doppler effect into account and showed that reproducing it exactly would require continuously time-varying delays. He also showed that for slowly moving sources the Doppler effect is negligible, so that one can instead update the source locations, calculate the filters for each location, and change these in time. This approach was chosen in this project. Additionally, in order to avoid clicks in playback, an option was built in to crossfade between two locations, making the movement sound smoother.
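The location-update approach with a crossfade can be sketched as follows. This is a minimal illustration, not WONDER's actual code: the simple per-speaker delay-and-gain model and all names are assumptions.

```python
import numpy as np

def speaker_signal(src, spk, signal, fs, c=343.0):
    """Delay and attenuate `signal` for one speaker, for a virtual
    source at `src` (simple point-source model, integer-sample delay)."""
    r = np.hypot(src[0] - spk[0], src[1] - spk[1])
    delay = int(round(r / c * fs))          # propagation delay in samples
    gain = 1.0 / max(r, 0.1)                # 1/r amplitude decay, clipped
    out = np.zeros(len(signal) + delay)
    out[delay:] = gain * signal
    return out

def crossfade(old, new):
    """Linear crossfade between the renderings for two source positions,
    avoiding the click a hard switch between filter sets would produce."""
    n = min(len(old), len(new))
    ramp = np.linspace(0.0, 1.0, n)
    return (1.0 - ramp) * old[:n] + ramp * new[:n]
```

At each location update, the output for the old position is faded out while the output for the new position is faded in, so the movement sounds smooth even though the filters change in discrete steps.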
Theoretical and practical limitations of
Wave Field Synthesis
There are some limitations to the technique. The
distance between the speakers needs to be as small
as possible in order to avoid spatial aliasing. From
Verheijen (1998) we have the following formula for the frequency above which spatial aliasing occurs:

f_al = c / (2 Δx sin α)

where c is the speed of sound in air, Δx the distance between the speakers and α the angle of incidence on the speaker array. Thus the aliasing frequency goes down with increasing distance between the speakers, but it also depends on the angle of incidence, and thus on the location of the virtual source, whether or not aliasing occurs.
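As a numeric illustration of this relation (the 10 cm spacing is an assumed example value, not the spacing of the Berlin array):

```python
import math

def aliasing_frequency(dx, alpha_deg, c=343.0):
    """Frequency above which spatial aliasing occurs for a speaker
    spacing dx (in metres) and angle of incidence alpha on the array."""
    return c / (2.0 * dx * math.sin(math.radians(alpha_deg)))

# Worst case is sound arriving along the array (alpha = 90 degrees):
f_worst = aliasing_frequency(0.1, 90.0)   # ~1715 Hz for 10 cm spacing
# For smaller angles of incidence the aliasing frequency is higher:
f_30 = aliasing_frequency(0.1, 30.0)      # ~3430 Hz
```

This shows why halving the speaker spacing doubles the usable bandwidth, and why a source straight ahead of the array aliases at a higher frequency than one far to the side.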
Spatial aliasing has the result that the wave field is no longer correctly synthesized and artefacts occur. This makes the sound source poorly localisable. It is a physical limitation that cannot really be overcome. However, whether this aliasing is a problem from a listener's point of view depends on the sound material. In general, if the sound contains a broad spectrum with enough frequencies below the aliasing frequency, the source is still well localisable.
At the other end of the frequency spectrum there is the problem that very low frequencies are hard to play back on small speakers. However, just as in other spatialisation systems, a subwoofer can be added for this, as low frequencies are hard for the human ear to localise.
Another limitation is that a lot of loudspeakers are needed to implement the effect. Because of this, research is being done into loudspeaker panels, which should make it easier to build up a system.
Finally, a lot of computation power is needed, as a different signal needs to be calculated for each loudspeaker involved. With the increasing computation power of CPUs, this is no longer a big problem: at the moment it is possible to drive a WFS system with commercially available PCs.
System setup at the TU Berlin
The prototype system in Berlin was created with the specific aim of building a system for use in electronic music (Weske 2001). The system consists of a Linux PC driving 24 loudspeakers through an RME Hammerfall soundcard.
For the realtime calculation of the loudspeaker signals, the program BruteFIR by Anders Torger is used. This program is capable of computing many convolutions with long filters in realtime. The filter coefficients can be calculated with the interface software described in this paper.
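Conceptually, each loudspeaker signal is the source signal convolved with that speaker's FIR filter. A minimal sketch of this per-speaker step is given below; the direct convolution and the names are illustrative only, as BruteFIR itself uses optimized partitioned FFT convolution.

```python
import numpy as np

def render_speakers(source, filters):
    """Convolve one source signal with a per-loudspeaker FIR filter,
    yielding one output signal per speaker. Done naively here with
    direct convolution for clarity."""
    return [np.convolve(source, h) for h in filters]

# Example: two 'filters' that are pure delays of 0 and 2 samples.
src = np.array([1.0, 0.5])
outs = render_speakers(src, [np.array([1.0]), np.array([0.0, 0.0, 1.0])])
```

The filters encode the per-speaker delays and gains that shape the synthesized wavefront; WONDER's job is to compute these coefficients from the source positions.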
With the current prototype system it is possible to play a maximum of 9 sound sources at different locations in realtime, even when the sources are moving. This is an upper bound; the exact number of sources that can be used in a piece depends on the maximum filter length used. A detailed overview of the capacity was given in a paper presented at the ICMC in 2003 (Baalman 2003).
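A rough back-of-the-envelope estimate makes the convolution load concrete; the sample rate and filter length below are assumed values for illustration, not the measured capacity figures of the prototype.

```python
def mac_per_second(n_sources, n_speakers, filter_len, fs):
    """Multiply-accumulate operations per second for direct-form FIR
    filtering of every source/speaker pair."""
    return n_sources * n_speakers * filter_len * fs

# 9 sources on 24 speakers with 1024-tap filters at 44.1 kHz:
load = mac_per_second(9, 24, 1024, 44100)   # ~9.75e9 MAC/s in direct form
```

FFT-based (partitioned) convolution, as used by BruteFIR, reduces this cost by orders of magnitude, which is why a single commodity PC suffices.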
Interface software
In order to work with the system, interface software was needed to calculate the necessary filter coefficients. The aim was to create an interface that allows composers to define the movements of their sounds, independent of the system on which the piece will eventually be played. That is, the composer should be bothered as little as possible with the actual calculations for each loudspeaker, and instead be able to focus on defining paths through space for his sounds.
The current version of the program WONDER (Wave field synthesis Of New Dimensions of Electronic music in Realtime) allows the composer to do so. The composer can work with the program in two ways: either he creates a composition of all the movements of all the sound sources with WONDER, using the composition tool, or he defines a grid of points that he wants to use in his piece and controls the movement from another program using the OpenSoundControl protocol (Wright et al. 2003). The main part of the program is the play engine, which can play the created composition or move the sources in realtime; a screenshot is given in figure 3.
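External realtime control works by sending plain OSC messages. The stdlib-only encoder below sketches how such a message is built on the wire; the address pattern /source/position and its argument list are illustrative assumptions, not WONDER's documented address space.

```python
import struct

def osc_pad(b):
    """Pad bytes with NULs to the next 4-byte boundary, as OSC requires
    (strings always get at least one terminating NUL)."""
    return b + b'\x00' * (4 - len(b) % 4)

def osc_message(address, *args):
    """Encode an OSC message with int32 and float32 arguments."""
    msg = osc_pad(address.encode())
    tags = ',' + ''.join('i' if isinstance(a, int) else 'f' for a in args)
    msg += osc_pad(tags.encode())
    for a in args:
        msg += struct.pack('>i' if isinstance(a, int) else '>f', a)
    return msg

# Move source 0 to position (x, y) = (1.5, -2.0) -- illustrative address:
packet = osc_message('/source/position', 0, 1.5, -2.0)
# This byte string would then be sent to WONDER in a UDP datagram.
```

In practice a controlling program such as SuperCollider would emit such messages on every position update, and WONDER would recalculate or select the corresponding filters.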
The array configuration can be set in the program; the positions of the various array segments can be defined through a dialog. WONDER also includes a simple room model for the calculation of reflections. The user can define the positions of the four walls of a rectangular room, an absorption factor and the order of the calculation. The calculations are done with the mirror image source model (see also Berkhout 1988).
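For a rectangular room the mirror image source model is easy to sketch: each first-order reflection is rendered as an additional virtual source, mirrored in one wall (function and parameter names below are illustrative, not WONDER's API).

```python
def first_order_images(src, x_min, x_max, y_min, y_max):
    """First-order mirror image sources of src = (x, y) in a rectangular
    room: one mirrored copy of the source per wall."""
    x, y = src
    return [
        (2 * x_min - x, y),   # reflection in the left wall
        (2 * x_max - x, y),   # reflection in the right wall
        (x, 2 * y_min - y),   # reflection in the front wall
        (x, 2 * y_max - y),   # reflection in the back wall
    ]

# A source at (1, 2) in a 4 m x 6 m room with one corner at the origin:
images = first_order_images((1.0, 2.0), 0.0, 4.0, 0.0, 6.0)
```

Each image source is played back attenuated by the absorption factor; higher reflection orders are obtained by mirroring the image sources again.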
Experiences with composers
During the development of the program, various compositions were made by different composers, both to test the program and to come up with new options for it. These compositions were presented on various occasions, among which festivals like Club Transmediale in Berlin (February 2003) and Electrofringe in Newcastle, Australia (October 2003). I will elaborate on two compositions and one sound installation.
Marc Lingk, a composer residing in Berlin, wrote a piece called Ping-Pong Ballet. The sounds for this piece were all made from ping-pong ball sounds, which were processed by various algorithms, alienating the sound from its original. With these sounds as a basis, finding the movements was relatively easy, as the ping-pong game provides a good basis for the distribution of the sounds in space. In this way he created various loops of movement for the various sounds, as depicted in figure 4: paths 1 and 2 are the paths of the ball bouncing on the table, 3 and 4 of the ball being hit with the bat, 5 and 6 of multiple balls bouncing on the table, and 7 and 8 of balls dropping to the floor. By choosing mostly prime numbers for the loop times, the positions were constantly changing in relative distance to each other. The movement was relatively fast (loop times were between 5 and 19 seconds). In the beginning, the piece gives the impression of a ping-pong game, but as it progresses the sounds become more and more dense, creating a clear and vivid spatial sound image.
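The effect of the prime loop times can be quantified: the overall spatial pattern only repeats after the least common multiple of all loop durations, which for distinct primes is simply their product. The durations below are example values from the 5-19 second range mentioned above, not the piece's actual loop times.

```python
from math import lcm

# Example loop durations in seconds (distinct primes between 5 and 19):
durations = [5, 7, 11, 13, 17, 19]
repeat_after = lcm(*durations)   # 1616615 s, i.e. almost 19 days
```

Within the duration of the piece the relative positions of the sounds therefore effectively never repeat.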
In the composition "Beurskrach", created by Marije Baalman, four sources were defined, but regarded as points on one virtual object, i.e. these points made a common movement. The sound material for these four points was also based on the same source material, but with slightly different filterings, to simulate a real object that radiates different filterings of the sound from its different parts. During the composition, the object comes closer from afar and even comes in front of the loudspeakers; there it implodes and scatters out again, making a rotating movement behind the speakers, before falling apart in the end. See figure 5 for a graphical overview of these movements.
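Treating several sources as points on one virtual object amounts to moving a common reference point while keeping per-point offsets fixed. The sketch below illustrates this idea; the names and the offsets are hypothetical, and a rotation of the object could be added the same way.

```python
def object_points(center, offsets):
    """Positions of the sub-sources of a virtual object: a common
    moving center plus a fixed offset for each point."""
    cx, cy = center
    return [(cx + dx, cy + dy) for dx, dy in offsets]

# Four points placed around the object's center:
offsets = [(-0.5, 0.0), (0.5, 0.0), (0.0, -0.5), (0.0, 0.5)]
# Moving only the center moves all four sources coherently:
points = object_points((2.0, 3.0), offsets)
```

Feeding each point a slightly different filtering of the same material, as in "Beurskrach", then makes the cluster read as a single extended object rather than four separate sources.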
The sound installation "Scratch", which was presented during the Linux Audio Conference, makes use of the OSC control of the movements. The installation is created with SuperCollider, which generates the sound and sends commands for the movement to WONDER.
Figure 3. The user interface of the play engine of WONDER (screenshot of a previous version of WONDER)
Figure 4. Overview of the movements of the composition "Ping Pong Ballet"
The concept of the sound installation is to create a kind of sonic creature that moves around in the space. Depending on internal impulses and on
external impulses from the visitor (measured with sensors), the creature develops itself and makes different kinds of sounds, depending on its current state. The name "Scratch" was chosen for two reasons: as this was a first attempt at creating such a model for a virtual creature, it was still a kind of scratch version of the concept; the other reason was the type of sound, which was reminiscent of scratching on a surface.
Conclusions and future work
The program WONDER provides a usable
interface for working with Wave Field Synthesis, as
shown by the various examples of compositions
that have been made using the program.
Future work will, apart from bug fixing, focus on integrating BruteFIR further into the program, in order to allow for more flexible use in realtime. An attempt will also be made to incorporate parts of SuperCollider into the program, as this audio engine has a few advantages over BruteFIR that could be exploited. Furthermore, work will be done on more precise synchronisation possibilities for use with other programs.
Other work will go into the possibility of defining more complex sound sources (with a size and form) and into implementing more complex room models.
Acknowledgements
WONDER is created by Marije Baalman. The OSC part was developed by Daniel Plewe.
References
Baalman, M.A.J., 2003, Application of Wave Field
Synthesis in the composition of electronic music,
International Computer Music Conference 2003,
Singapore, October 1-4, 2003
Berkhout, A.J. 1988, A Holographic Approach to
Acoustic Control, Journal of the Audio Engineering
Society, 36(12):977-995
Berkhout, A.J., Vries, D. de & Vogel, P. 1993, Acoustic
Control by Wave Field Synthesis, Journal of the
Acoustical Society of America, 93(5):2764-2778
Jansen, G. 1997, Focused wavefields and moving virtual
sources by wavefield synthesis, M.Sc. Thesis, TU
Delft, The Netherlands
Huygens, C. 1690, Traite de la lumiere; ou sont
expliquees les causes de ce qui luy arrive dans la
reflexion et dans la refraction et particulierement
dans l'etrange refraction du cristal d'Islande; avec un
discours de la cause de la pesanteur, Van der Aa, P.,
Leiden, The Netherlands
Verheijen, E.N.G. 1998, Sound Reproduction by Wave Field Synthesis, Ph.D. Thesis, TU Delft, The Netherlands
Weske, J. 2001, Aufbau eines 24-Kanal Basissystems zur Wellenfeldsynthese mit der Zielsetzung der Positionierung virtueller Schallquellen im Abhörraum, M.Sc. Thesis, TU Chemnitz/TU Berlin, Germany
Wright, M., Freed, A. & Momeni, A. 2003,
“OpenSoundControl: State of the Art 2003”, 2003
International Conference on New Interfaces for
Musical Expression, McGill University, Montreal,
Canada 22-24 May 2003, Proceedings, pp. 153-160
Figure 5. Overview of the movements of the sound
sources of the composition "Beurskrach"
Figure 6. The sound installation "Scratch" during the Linux Audio Conference. The ball contains accelerometers that measure its movement, which influences the sound installation (photo by Frank Neumann).