ADAPTING POLYPHONIC PICKUP TECHNOLOGY FOR SPATIAL
MUSIC PERFORMANCE
Enda Bates, Dermot Furlong, Donnacha Dennehy
Dept. of Electronic and Electrical Eng.
Trinity College Dublin
ABSTRACT
This paper describes how polyphonic pickup technology
can be adapted for the spatialization of electric stringed
instruments such as the violin, cello and guitar. It is pro-
posed that mapping the individual strings to different spa-
tial locations integrates the spatial diffusion process with
the standard musical gestures of the performer. The devel-
opment of polyphonic guitar processing is discussed and
a method of adapting MIDI guitar technology for this pur-
pose is presented. The compositional and technical strategies used with various augmented instruments are presented along with an analysis of three compositions by the author for spatialized hexaphonic guitar.
1. INTRODUCTION
1.1. Background and Motivation
Space is often used as an explicit compositional parameter
in electroacoustic music. Many performances of acous-
matic music, i.e. music without any live instrumental per-
formers, are presented using pre-diffused multi-track com-
positions and a loudspeaker array or through the live dif-
fusion of a stereo track to a loudspeaker orchestra.
The addition of live performers introduces a number
of difficulties in the performance of spatial music. The
spatial distribution of performers around the audience can
be very effective but is logistically challenging and also
highly dependent on the specific layout of the performance
space. Compositions for live performers on stage and spa-
tialized audio are particularly challenging as the static lo-
cation of the live performers provides a frontally-biased
visual and audible focus which can conflict with the non-
visual and surrounding spatialized audio. Linking the mu-
sical gestures of the instrumental performer with the spa-
tial gestures of the electronic part is a significant chal-
lenge, as the spatialization process is often not related to
the musical instrument in any obvious way. In addition
it is rarely practical or possible for a single performer to
concurrently play and diffuse a musical instrument.
While various new musical interfaces have been developed which feature some form of spatial control, the idiosyncratic nature of these devices means they are unlikely to become widely adopted. Augmented instruments, i.e. traditional musical instruments featuring additional hardware or sensors, present a possible solution to this problem, as they can potentially combine existing and sophis-
ticated instrumental practice with spatial or timbral pro-
cessing algorithms. One form of augmentation specific
to stringed instruments is the use of polyphonic pickups
which produce a separate audio signal for each string. The
discrete multi-channel output of these instruments would
seem to be very suitable for spatialization to a multi-channel
loudspeaker array. By linking the spatial location to the
choice of string, the spatialization process could be syn-
chronized to the physical performance of the instrument.
In addition, spatialization algorithms and other processes
could be applied to each individual string as required.
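The proposed string-to-speaker mapping can be sketched as a simple routing matrix applied to the six pickup signals. The identity routing and channel ordering below are illustrative assumptions, not a description of any particular system.

```python
import numpy as np

# One row per string, one column per loudspeaker. The identity matrix
# sends each string to its own speaker; off-diagonal entries would
# instead blend a string between adjacent speakers.
routing = np.eye(6)

def spatialize(block, matrix=routing):
    """Route a (samples, 6) block of hexaphonic audio to six speaker feeds."""
    return block @ matrix

# A toy block with signal on the first string channel only.
hex_block = np.zeros((64, 6))
hex_block[:, 0] = 1.0
out = spatialize(hex_block)  # only speaker 0 receives signal
```

Because the mapping is just a matrix, per-string processing can be applied to each row independently before the routing stage.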
This signal processing approach to instrument augmen-
tation has the advantage that the performer does not need
to learn any new gestures or instrumental techniques. In
addition, the necessary hardware has become widely avail-
able, particularly for electric guitar, and can often be retro-
fitted non-destructively to an existing instrument.
2. POLYPHONIC PICKUPS
Polyphonic pickups (also called divided or split pickups)
are used to generate a separate audio signal for each string
in instruments such as the guitar, violin and cello. They
have been widely used over the past three decades to de-
tect and convert the pitch coming from individual strings
into MIDI messages. The majority of these systems how-
ever do not provide split signals for external processing,
preferring instead to multiplex the signals into a single ca-
ble which can then be easily connected to a MIDI con-
verter. This emphasis on MIDI capability is changing
however and more devices are becoming available which
allow for the individual processing of each string, in par-
ticular for the electric guitar.
2.1. Polyphonic Pickups for the Electric Guitar
Polyphonic pickups (hexaphonic in the case of a guitar)
have been used since the seventies to provide separate
audio signals for conversion to MIDI. Recently however
some dedicated systems such as the Gibson HD.6X-Pro
have been developed specifically for polyphonic processing.
Figure 1. Roland 13-pin Wiring Diagram
The initial prototyping of the instrument was carried out by Adrian Freed and the Guitar Innovation Group at
UC Berkeley’s Center for New Music and Audio Tech-
nologies (CNMAT) [1]. Further research has since been
carried out at CNMAT on polyphonic guitar effects based
on vocal-tract modeling, frequency localized distortion and
coordinated equalization [2].
The co-director of CNMAT, David Wessel has con-
ducted research on augmented instrument design and in-
teractive computer music with a particular focus on the
live performance of improvised computer music. Wes-
sel had used polyphonic pickups in performances such
as “Situated Trio”, an interactive live performance for a
hexaphonic guitarist and two computer musicians with ex-
pressive controllers [3]. In this piece, the polyphonic gui-
tar signal is processed by the two computer musicians us-
ing various algorithms such as granulation, looping, cross-
synthesis and spatialization. The polyphonic guitar sig-
nal is also converted to MIDI to provide a high level dis-
crete event representation of the guitarist’s performance
for triggering automated computer based processes.
2.2. Adapting MIDI Guitar Systems for Polyphonic
Processing
The adaptation of existing MIDI guitar systems is an alter-
native and cost effective method for deriving a polyphonic
signal from an electric guitar. Popular MIDI guitar sys-
tems by Roland, RMC and AXON use a 13-pin connector
which carries six individual signals from the hexaphonic
transducer, a mono audio feed from the guitar’s normal
electrics, and some controls specific to the Roland system.
As we can see from the wiring diagram in Fig.1, the
individual string signals from the 13-pin connector can be
accessed simply by wiring the appropriate pins to several
1/4 inch jack connectors. A schematic for such a breakout
box is shown in Fig.2 [4]. The specific implementation of
this breakout box for two of the more popular MIDI guitar
systems is detailed as follows.
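The pin-to-jack routing of such a breakout box can be captured in a small lookup: pins 1 to 6 carry the split string signals (as used for the compositions in Section 4), while the jack labels below are purely illustrative.

```python
# Pins 1-6 of the 13-pin connector carry the individual string signals;
# each is wired to its own 1/4 inch jack in the breakout box.
# The jack naming below is an assumption for illustration only.
PIN_TO_JACK = {pin: f"jack_{pin}" for pin in range(1, 7)}

def jack_for_pin(pin):
    """Return the breakout jack carrying a given string-signal pin."""
    if pin not in PIN_TO_JACK:
        raise ValueError(f"pin {pin} does not carry a split string signal")
    return PIN_TO_JACK[pin]
```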
2.2.1. RMC Pickups
RMC produce high-quality hexaphonic pickups, based on piezo-electric elements mounted under the individual string saddles at the bridge, which can be found in Godin and Brian
Moore guitars. The pickup is fed into an on-board preamplifier and routed to a standard 13-pin connector.
Figure 2. 13-pin Breakout Box Schematic [4]
As
the RMC pickup uses an onboard power source, the sim-
ple schematic shown in Fig.2 is all that is required to obtain
the individual string signals. RMC also produce a com-
mercial breakout box that splits the polyphonic signal into
six separate audio channels along with some additional
features (www.rmcpickup.com).
2.2.2. Roland GK-2 and GK-2A Pickups
The Roland GK-2 (single coil) and GK-2A (humbucker)
consist of an electromagnetic hexaphonic pickup mounted
at the guitar bridge. This system is available in various
“Roland Ready” electric guitars but can also be mounted
externally on an existing instrument. Due to its low cost
and nondestructive installation, the Roland GK range has
become one of the most widely available and popular MIDI
guitar systems. The simple breakout box shown in Fig.2
will however not be sufficient to derive the split audio sig-
nals as these pickups are designed to be powered by an
external MIDI converter device. A simple circuit using
two PP3 9V batteries must therefore be incorporated into
the breakout box design when using the Roland GK sys-
tem.
3. POLYPHONIC STRINGS
The use of traditional instruments to control or generate
synthesized sounds is a recurring theme in electroacous-
tic compositional research. Much of this research has fo-
cused on developing traditional instruments augmented with
additional sensors. The mapping of musical gestures to
electronic processes is a critical issue in the design of any
augmented instrument. The composer David Wessel sug-
gests that “musical control intimacy and virtuosity require
both spatial and temporal precision in the sensing of ges-
tures (Control intimacy refers to a tight connection be-
tween a body movement and change in an auditory fea-
ture)” [5]. The mapping of individual strings to spatial
location would seem to be a suitable approach in this re-
gard.
The electronic violin developed by Max Mathews and
the Hyperinstruments developed by Tod Machover are
two examples of augmented stringed instruments which
incorporate polyphonic pickup technology. As we shall
see, the polyphonic output of these instruments was used
extensively by the composers who wrote for them. As
many manufacturers now produce electric instruments with
polyphonic outputs, this approach represents a viable and
generalised solution to stringed instrument augmentation
that is not tied to specific hardware.
3.1. Augmented Strings
The electronic violin developed by Max Mathews in 1986
was an attempt to use the considerable dynamic, expres-
sive timbral range and highly evolved instrumental tech-
nique of the violin as the musical source, modifier, and
controller of various real-time performance networks [6].
The electronic violin used four contact microphones
inserted into the bridge to pickup the signal. Mathews
noticed that when the outputs from the four strings were
combined and passed through a single amplifier and speaker,
nonlinearities resulted in unpleasant combination tones [6].
To eliminate this problem, Mathews used a separate am-
plifier and loudspeaker for each string. When the com-
poser Richard Boulanger later came to compose a piece
for the instrument, he commented that the discrete four-channel output of the instrument “significantly directed the composition in a number of ways”, specifically toward
a composed spatial component. In the resulting piece,
“Three Chapters from the book of Dreams”, Boulanger
supports and contrasts the linear counterpoint and com-
pound melody with a concurrent spatial counterpoint. He
comments “By assigning the output of each string to a sep-
arate speaker, the audience is given the unique sense that
they are seated within the violin body. This spatial com-
ponent is based on the inherent design of the instrument
and its antiphonal treatment is at the same time quite old and quite new.” [6]
The Hyperinstrument group at MIT Media Lab have
been researching and developing augmented instruments
since the late eighties. The first of these, the Hypercello, was completed in 1991 and combined an electroacoustic
cello with additional sensors to provide data on various
performance parameters such as bow position, placement
and pressure, string and finger position and pitch tracking.
The composer Tod Machover worked with the renowned
cellist Yo-Yo Ma to create “Begin Again Again....”, an
interactive composition in which different playing tech-
niques such as tremolo, bow bounce, pizzicato and legato
are mapped to various electronic processes including spa-
tial movement [7].
4. COMPOSING FOR THE HEXAPHONIC
GUITAR
Writing for a hexaphonic guitar in which each string is in-
dividually spatialized requires the composer to carefully
consider which strings will be used to produce the chosen
pitches. In this scenario, the routing and particular spa-
tialization method used will inform the composition in a
very real and tangible way.
Figure 3. Routing for Etude No. 1
Figure 4. Spatial sequence for Etude No. 1
In the compositions discussed here, the six strings are routed to individual loudspeakers
as this reduced the localization issues relating to virtual
images in a performance setting [8]. Static sources were
generally used as it was found that dynamic trajectories
were not perceived well due to the relatively short sustain
of plucked notes on a guitar.
The compositions were realised using a Godin LGXT
electric guitar pre-fitted with an RMC piezo-electric hexa-
phonic pickup and heavy gauge electric guitar strings. The
13-pin guitar output was connected to a MOTU 828 multi-
channel soundcard via a breakout box (see Fig.2) which
routed pins 1 to 6 of the 13-pin cable to six 1/4 inch ana-
log connectors. The multiple signals were processed in
the Max MSP environment and routed to a loudspeaker
array.¹
4.1. Etude No. 1 for Hexaphonic Guitar
In this piece the hexaphonic guitar output is routed to a
six-channel loudspeaker array via four synchronized tape
delay effects as shown in Fig.3.
The opening section consists of a sequence of two-note
chords on the four high strings which mark out the spatial
boundaries of the piece as shown in Fig.4. Each of the
two iterations of the four chords is followed by a six-note
coda which introduces the two lower strings and their as-
sociated spatial locations.
The main body of the piece consists of six voicings of
a six-note chord based on a quartal harmony. Each chord
is played as a repetitive two note pattern on the two low
strings accompanied by single delayed notes which cycle sequentially through the four high strings. The four delay effects are synchronized and set to repeat at eighth-note intervals. The piece ends with a shorter iteration of the initial section to complete the ABA structure.
¹ Selected binaural recordings and scores are available at www.endabates.net.
Figure 5. Etude No. 2, Tuning and Rhythmic Pattern
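The synchronized delays could be approximated offline with a simple feedback delay line per string. The tempo, sample rate and feedback amount below are illustrative assumptions; the piece itself used tape-delay effects realised in Max MSP.

```python
import numpy as np

def feedback_delay(x, delay, feedback=0.5):
    """Add feedback echoes of x, each repeat delayed by `delay` samples
    and attenuated by `feedback`."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n]
        if n >= delay:
            y[n] += feedback * y[n - delay]
    return y

# Eighth-note delay time at an assumed 120 BPM and 44.1 kHz.
sr, bpm = 44100, 120
eighth = int(sr * 60 / bpm / 2)   # 0.25 s = 11025 samples

# Echoes of a single plucked-note impulse on one string.
impulse = np.zeros(sr)
impulse[0] = 1.0
echoed = feedback_delay(impulse, eighth)
```

Running one such delay per string, all with the same `delay` value, reproduces the synchronized repeat structure described above.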
4.2. Etude No. 2 for Hexaphonic Guitar and Electronics
In this work-in-progress the three high strings are routed
to the back of the array while the three low strings are
routed to the front. The opening and closing sections of the ABA structure are based on the rhythmic development of a pattern (Fig.5) of rapid alternating staccato triads which are separated in terms of space and pitch.
The longer middle section incorporates a multichannel
tape part of granulated and time-stretched guitar notes spa-
tialized using the Vector Base Amplitude Panning exter-
nals for Max MSP [9]. The guitarist triggers the various
granulated notes using a MIDI foot controller which also
alters the spatial location of the guitar strings. In each
case the granulated note is preceded by a played note from the same spatial location. In this way, the tape part is perceived to originate from the plucked guitar notes and gradually expands spatially across the array.
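Vector Base Amplitude Panning computes, for an active loudspeaker pair, the gains whose weighted sum of speaker direction vectors points toward the source [9]. A minimal two-dimensional sketch follows; the speaker angles are illustrative, and a full VBAP implementation also selects the active pair from a larger array.

```python
import numpy as np

def vbap_2d(source_az, spk_az1, spk_az2):
    """2D VBAP gains for one speaker pair; azimuths in degrees."""
    def unit(az):
        a = np.radians(az)
        return np.array([np.cos(a), np.sin(a)])
    L = np.column_stack([unit(spk_az1), unit(spk_az2)])  # base vectors
    g = np.linalg.solve(L, unit(source_az))              # g1*l1 + g2*l2 = p
    return g / np.linalg.norm(g)                         # constant-power scaling

# A source midway between speakers at +30 and -30 degrees
# receives equal gain from both.
g = vbap_2d(0.0, 30.0, -30.0)
```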
4.3. Etude No. 3 for Hexaphonic Guitar and Electronics
In this work dynamically spatialized drones are contrasted
with more rhythmical passages performed using various
extended playing techniques. The strings are consecu-
tively routed in pairs through three instances of Ambi-
ence, the freeware reverb VST plugin by Smartelectronix
(Fig.6). Each plucked interval is sustained using the re-
verb hold function which is triggered using a MIDI foot
controller. After each of the three intervals has been layered a number of times in this way, the combined drone is dynamically routed to a spatialization algorithm which
chaotically pans the drone around the entire array. This
entire process is repeated three times with increasing in-
tensity, culminating in a loud crescendo.
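The chaotic panning algorithm is not detailed above; one plausible sketch, given purely as an assumption, drives the pan azimuth from logistic-map iterates, which wander unpredictably for parameter values near 4.

```python
import numpy as np

def chaotic_azimuths(n, x0=0.3, r=3.99):
    """Generate n pan azimuths (0-360 degrees) from the logistic map."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)   # chaotic for r close to 4
        out.append(360.0 * x)
    return np.array(out)

# One azimuth per control-rate tick of the spatialization algorithm.
az = chaotic_azimuths(100)
```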
5. SUMMARY
In this paper we have described how polyphonic pickup
technology has been used in augmented stringed instru-
ments. The routing of individual strings to a loudspeaker
array has been suggested as a method of mapping the spa-
tialization process to the gestures of the performer.
Figure 6. Etude No. 3, Tuning and Interval Sequence
The implementation of MIDI guitar technology for polyphonic processing is described and three compositions for spatialized polyphonic guitar are presented.
6. REFERENCES
[1] B. Yeung, “Guitar dreams, an interview with Adrian Freed,” www.sfweekly.com, 2004.
[2] T. Jehan, A. Freed and R. Dudas, “Musical applications of new filter extensions to Max/MSP,” in Proceedings of the International Computer Music Conference, pp. 504–507, 1999.
[3] D. Wessel, M. Wright and J. Schott, “Situated Trio: an interactive live performance for a hexaphonic guitarist and two computer musicians,” in Proceedings of the 2002 Conference on New Instruments for Musical Expression (NIME-02), Dublin, Ireland, May 24–26, 2002.
[4] J. Berg, “MIDI breakout box,” www.unfretted.com, 2007.
[5] D. Wessel, “An enactive approach to computer music performance,” Le Feedback dans la Création Musicale, pp. 93–98, 2006.
[6] R. Boulanger, “Toward a new age of performance: Reading the book of dreams with the Mathews electronic violin,” Perspectives of New Music, vol. 24, no. 2, pp. 130–155, 1986.
[7] T. Machover, “Hyperinstruments: MIT Media Lab Research Report,” 1992.
[8] E. Bates, G. Kearney, D. Furlong, and F. Boland, “Localization Accuracy of Advanced Spatialization Techniques in Small-Sized Concert Halls,” in 153rd Meeting of the Acoustical Society of America, June 2007.
[9] V. Pulkki, “Virtual sound source positioning using Vector Base Amplitude Panning,” Journal of the Audio Engineering Society, vol. 45, pp. 456–466, 1997.