Abstract

This paper presents a review of a series of experiments which have contributed towards the understanding of the mapping layer in electronic instruments. It challenges the assumption that an electronic instrument consists solely of an interface and a sound generator. It emphasises the importance of the mapping between input parameters and sound parameters, and suggests that this can define the very essence of an instrument. The terms involved with mapping are defined, and existing literature reviewed and summarised. A model for understanding the design of such mapping strategies for electronic instruments is put forward, along with a roadmap of ongoing research focussing on the testing and evaluation of such mapping strategies.
Proceedings of the 2002 Conference on New Instruments for Musical Expression (NIME-02), Dublin, Ireland, May 24-26, 2002
The importance of parameter mapping in electronic
instrument design
Andy Hunt
Music Technology Group,
Electronics Department
University of York,
Heslington, York
YO10 5DD
Marcelo M. Wanderley
Faculty of Music,
McGill University
555, Sherbrooke Street West
H3A 1E3 - Montreal - Canada
Matthew Paradis
Music Technology Group,
Music Department
University of York,
Heslington, York
YO10 5DD
In this paper we challenge the assumption that an electronic instrument consists solely of an interface and a sound generator. We emphasise the importance of the mapping between input parameters and system parameters, and claim that this can define the very essence of an instrument.

Keywords

Mapping Strategies, Electronic Musical Instruments, Human-Computer Interaction

Introduction
In an acoustic instrument, the playing interface is inher-
ently bound up with the sound source. A violin's string
is both part of the control mechanism and the sound
generator. Since they are inseparable, the connections
between the two are complex, subtle and determined by
physical laws. With electronic and computer instru-
ments, the situation is dramatically different. The inter-
face is usually a completely separate piece of equipment
from the sound source. This means that the relationship
between them has to be defined. The art of connecting
these two, traditionally inseparable, components of a
real-time musical system (an art known as mapping) is
not trivial. Indeed, this paper seeks to stress that altering the mapping, even while keeping the interface and sound source constant, changes the entire character of the instrument. Moreover, the psychological and emotional response elicited from the performer is determined to a great degree by the mapping.
In this section we emphasise the dramatic effect that the
style of mapping can have on 'bringing an interface to
life'. We focus on our own experience in designing digi-
tal musical instruments and comment on several previ-
ous designs. An extensive review of the available litera-
ture on mapping in computer music has been presented
by the authors in [6], [16] and [17].
Informal Observations
The first author has carried out a number of experiments
into mapping. The more formal of these have been pre-
sented in detail in [5] and [3], and are summarised later
in this paper. Let us begin with some rather simple, yet
interesting, observations that originally sparked interest
in this subject. We have retained the first person writ-
ing style to denote that these are informal, personal reflections.
The Accidental Theremin
Several years ago I was invited to test out some final
university projects in their prototype form in the lab.
One of them was a recreation of a Theremin with mod-
ern electronic circuitry. What was particularly unusual
about this was that a wiring mistake by the student
meant that the 'volume' antenna only worked when your
hand was moving. In other words the sound was only
heard when there was a rate-of-change of position, rather
than the traditional position-only control. It was unex-
pectedly exciting to play. The volume hand needed to
keep moving back and forth, rather like bowing an in-
visible violin. I noted the effect that this had on myself
and the other impromptu players in the room. Because
of the need to keep moving, it felt as if your own energy
was directly responsible for the sound. When you
stopped, it stopped. The subtleties of the bowing
movement gave a complex texture to the amplitude. We
were 'hooked'. It took rather a long time to prise each
person away from the instrument, as it was so engaging.
I returned in a week's time and noted the irony that the
'mistake' had been corrected, deleted from the student's
notes, and the traditional form of the instrument implemented.
Two Sliders and Two Sound Parameters
The above observation caused me to think about the
psychological effect on the human player of 'engage-
ment' with an instrument.
Figure 1. Simple Mapping for Experiment 1
To investigate this further I constructed a simple ex-
periment. The interface for this experiment consisted of
two sliders on a MIDI module, and the sound source
was a single oscillator with amplitude and frequency
controls. In the first run of the experiment the mapping
was simply one-to-one, i.e. one slider directly controlled
the volume, and the other directly controlled the pitch
(cf. Figure 1).
I let several test subjects freely play with the instrument,
and talked to them afterwards. In the second experimen-
tal run, the interface was re-configured to emulate the
abovementioned 'accidental Theremin'. One slider
needed to be moved in order to make sound; the rate of
change of movement controlled the oscillator's ampli-
tude. But I decided to complicate matters (on purpose!)
to study the effect that this had on the users. The pitch,
which was mainly controlled by the first slider, operated
'upside-down' to most people's expectations (i.e. push-
ing the slider up lowered the pitch). In addition the
second slider (being moved for amplitude control) was
used to mildly offset the pitch - i.e. it was cross-coupled
to the first slider (cf. Figure 2).
Figure 2. Complex Mapping for Experiment 2
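The contrast between the two mappings can be sketched in a few lines of code. This is a hypothetical reconstruction rather than the original patch: slider values are assumed to be normalised to the range 0..1, and the 0.1 cross-coupling weight is invented for illustration.

```python
def mapping_one_to_one(s1, s2):
    """Experiment 1: each slider drives one sound parameter directly."""
    pitch = s1       # slider 1 -> pitch
    amplitude = s2   # slider 2 -> volume
    return pitch, amplitude

def mapping_complex(s1, s2, s2_prev):
    """Experiment 2: amplitude comes from slider 2's rate of change,
    while pitch is inverted on slider 1 and mildly offset by slider 2."""
    amplitude = abs(s2 - s2_prev)   # sound only while slider 2 is moving
    pitch = (1.0 - s1) + 0.1 * s2   # 'upside-down', cross-coupled to s2
    return pitch, amplitude
```

Under the second mapping, holding both sliders still always yields silence, which is what forced the players into the continuous 'bowing' motion described above.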
A remarkable consistency of reaction was noted over the
six volunteers who tried both configurations. With Ex-
periment 1, they all commented within seconds that
they had discovered how the instrument worked (almost
like giving it a mental 'tick'; "yes, this is volume, and
this is pitch"). They half-heartedly tried to play some-
thing for a maximum of two minutes, before declaring
that they had 'finished'. Problem solved.
With Experiment 2, again there was a noted consistency
of response. At first there were grumbles. "What on
earth is this doing?" "Hey - this is affecting the pitch"
(implied cries of "unfair", "foul play"). But they all
struggled with it - interestingly for several more minutes
than the total time they spent on Experiment 1. After a
while, their bodies started to move, as they developed
ways of offsetting one slider against the other, while
wobbling the first to shape the volume. Nearly all the
subjects noted that somehow this was rewarding; it was
"like an instrument". Yet in both cases the interface
(two sliders) and the sound source (a single oscillator)
were identical. Only the mapping was altered, and this
had a psychological effect on the players.
Mapping Experiments
Several formal investigations have been carried out by
the authors in order to explore the essence and the effect
of this mysterious mapping layer.
Complex mapping for arbitrary interfaces
The first author carried out an investigation into the
psychology and practicality of various interfaces for real-
time musical performance [3]. The main part of this study took the form of a major series of experiments to
determine the effect that interface configuration had on
the quality and accuracy of a human player’s perform-
ance. The full thesis is available for download online
[15], and the details of the theory, experiments and re-
sults have been published [5]. They are summarised
here, in order to give an overview of their implications
for mapping strategies.
Three interfaces were used, and these are now described.
The first interface (cf. Figure 3) represented a typical
computer music editing interface with on-screen sliders
connected one-to-one to each sound parameter.
Figure 3. The ‘mouse’ interface
The second (cf. Figure 4) involved physical sliders (on a
MIDI module) again connected in a one-to-one manner
to the synthesis unit.
Figure 4. The ‘sliders’ interface
The third interface (cf. Figure 5) consisted of a series of
multi-parametric cross-mappings, and—like the acciden-
tal Theremin mentioned above—required constant
movement from the user to produce sound.
Figure 5. The ‘multi-parametric’ interface
Users attempted to copy (using each interface) a series of
sounds produced by the computer. The accuracy of re-
production was recorded for each user, over several at-
tempts, spread out over a number of weeks. Results
were gathered numerically, and plotted on a series of
graphs to compare the effect - over time - of each inter-
face. These quantitative results can be summarised for
the multiparametric interface as follows:
- The test scores in general were much higher than those for the other two interfaces, for all but the simplest tests.
- There was a good improvement over time across all test complexities.
- The scores got better for more complex tests!
This last result may seem rather counter-intuitive at first
sight; that people performed better on the harder tasks.
However, this brings into question the definition of a
‘hard task’. If an interface allows the simultaneous con-
trol of many parameters, maybe it really is easier to per-
form the more complex tasks, and harder to accurately
isolate individual parameters.
A range of qualitative results was also gathered by inter-
viewing the test subjects to establish their subjective
experience of using each interface. They all concluded
that the 'mouse' interface was the most limited - as they
could see how impossible it would be to operate more
than one parameter simultaneously. Surprisingly per-
haps, they were nearly all extremely frustrated and an-
gered by the 4 physical sliders. Comments abounded
such as "I should be able to do this, technically, but I
can't get my mind to split down the sound into these 4
finger controls". Some users actually got quite angry
with the interface and with themselves. The multi-
parametric interface, on the other hand, was warmly re-
ceived - but not at the very beginning. At first it
seemed counter-intuitive to most users, but they rapidly
warmed to the fact that they could use complex gestural
motions to control several simultaneous parameters
without having to 'de-code' them into individual
streams. Many users remarked how "like an instrument" it was, or "how expressive" they felt they could be with it.
Focusing on the Effect of Mapping Strategies
In the above experiment several factors may have af-
fected the results. For instance, the multiparametric
interface used cross-coupled parameters in addition to
the user's energy. It also decreased reliance on visual
feedback, and provided two-handed input, all of which
may have contributed in varying degrees to the inter-
face’s effectiveness. An additional experiment was sub-
sequently carried out by the third author to focus en-
tirely on the user's reaction to a change in mapping strategy.
These tests utilised three contrasting mapping strategies,
with a fixed user interface and synthesis algorithm. The
mappings were:
a) simple one-to-one connections between input and output;
b) one-to-one connections requiring the user's energy as an input. This was implemented by requiring the user to constantly move one of the sliders in a 'bowing'-like action;
c) many-to-many connections from input to output, but also requiring the user's energy as in b).
These mappings were used to control the parameters of a
stereo FM synthesis algorithm, including amplitude,
frequency, panning, modulation ratio and modulation
index. The input device used was a MIDI fader box.
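The three strategies can be summarised as functions from a vector of fader positions to the synthesis inputs listed above. This is a hedged sketch: fader values are assumed normalised to 0..1, and the particular cross-couplings in (c) are invented, since the paper does not specify them.

```python
def map_a(faders):
    """(a) One-to-one: each fader drives one synthesis parameter."""
    return {"amplitude": faders[0], "frequency": faders[1],
            "panning": faders[2], "mod_ratio": faders[3],
            "mod_index": faders[4]}

def map_b(faders, prev_faders):
    """(b) One-to-one, but amplitude requires energy: fader 0 must keep
    moving ('bowing') for any sound to be heard."""
    params = map_a(faders)
    params["amplitude"] = abs(faders[0] - prev_faders[0])
    return params

def map_c(faders, prev_faders):
    """(c) Many-to-many plus energy: parameters share several faders."""
    energy = abs(faders[0] - prev_faders[0])
    return {"amplitude": energy,
            "frequency": faders[1] + 0.2 * faders[2],  # cross-coupled
            "panning": faders[2],
            "mod_ratio": faders[3],
            "mod_index": energy * faders[4]}           # index scaled by energy
```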
Users were asked to play with each interface until they
felt they had a good sense of how to ‘drive it’ to per-
form musical gestures. No time limit was given to this
process; the users were encouraged to explore the possi-
bilities of each set-up. Data was collected on the users’
(subjective) views on the comparative expressivity and
learnability of each mapping and the accuracy of musical
control that could be achieved.
Whilst experimenting with the first mapping test (one-
to-one) many users noted that the simple division of
parameters was not very stimulating. Users tended to
learn the parameter associations very quickly but then
struggle to achieve any improvement in their perform-
ance or expressive output.
The second test generated a range of comments, which
suggested that the process of injecting energy into a
system presented a much more natural and engaging
instrument. However, due to the proximity of the sliders on the interface, users found it difficult to control the other sliders whilst providing the required 'bowing' action. This difficulty lessened over time as the users became more practised.
The third and final user test (many-to-many mappings)
provided some interesting results. Most of the test sub-
jects noted that the appeal of this instrument was that it
was not instantly mastered but required effort to achieve
satisfactory results. The instrument presented a chal-
lenge to the user, as one would expect from a traditional
expressive instrument.
These tests highlighted the differences between a gen-
eral-purpose interface, (such as the mouse) which has
simple mappings but allows the user to begin working
instantly, and an interface with more complex mappings
which must be practised and explored in order to achieve
truly expressive output.
Learning from Acoustic Instruments
In [12] the second author and collaborators discussed the
fact that by altering the mapping layer in a digital musical instrument and keeping the interface (an off-the-shelf
MIDI controller) and sound source unchanged, the essen-
tial quality of the instrument is changed regarding its
control and expressive capabilities.
Previous studies, notably by Buxton [2], presented evi-
dence that input devices with similar characteristics (e.g.
number of degrees of freedom) could lead to very differ-
ent application situations depending on the way these
characteristics were arranged in the device. In that study,
however, the devices were not exactly the same mechanically (one had two separate controllers and the other a single two-dimensional controller), so the situation
is not the same as when using the same input device
with different mapping strategies, even if the results are comparable.
In [12], a Yamaha WX7 wind controller was used as the
input device, and sound was generated using additive
synthesis models of clarinet sounds in IRCAM’s FTS
environment (later in jMax).
The idea behind the project was simple: many wind
instrument performers complained that MIDI wind con-
trollers tend to lack expressive potential when compared
to acoustic instruments such as the clarinet or saxo-
phone. A common path to solving this problem in-
volves improving the design of the controller by adding
extra sensors. However, it was decided to challenge this
assumption and to solely work on the mapping layer
between the controller variables and the synthesis inputs
(for a complete description see [12]).
Another point became clear in this process: even though the WX7 was a faithful model of a saxophone, providing the
same types of control variables (breath, lip pressure and
fingering), these variables worked totally independently
in the MIDI controller, whereas they are cross-coupled
in acoustic single-reed instruments. This natural cross-
coupling is the result of the physical behaviour of the
reed, and since the equivalent “reed” in the controller
was a plastic piece that did not vibrate, and moreover
was not coupled to an air column, the variables were simply independent.
Based on these decisions and facts, the authors proposed
different mappings between the WX7 variables and the
synthesis parameters. The first was basically a one-to-
one relationship, where variables were independent. The
second was a model where the “virtual airflow” through
the reed (loudness) was a function of both the breath and
lip pressure (embouchure), such as in an acoustic in-
strument. The third was a model that took into account
both the “virtual airflow” and the relationship between
spectrum content to breath and embouchure; a model
that would match even more closely the real behaviour
of the acoustic instrument.
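The structure of the three models can be illustrated as follows. The formulae are placeholders of our own (the paper specifies which variables feed which outputs, not the equations), with all inputs assumed normalised to 0..1.

```python
def model_1(breath, lip, fingering):
    """One-to-one: variables independent, as on the raw MIDI controller."""
    return {"loudness": breath, "brightness": lip, "pitch": fingering}

def model_2(breath, lip, fingering):
    """'Virtual airflow': loudness depends on both breath and embouchure."""
    airflow = breath * (0.5 + 0.5 * lip)   # reed opening gates the breath
    return {"loudness": airflow, "brightness": lip, "pitch": fingering}

def model_3(breath, lip, fingering):
    """As model 2, but spectral content also follows breath and embouchure."""
    params = model_2(breath, lip, fingering)
    params["brightness"] = 0.7 * breath + 0.3 * lip   # spectrum tracks both
    return params
```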
Using these three different models, the system was played by different musicians and non-musicians. Results indicated that wind instrument performers tended
to stick with complex cross-coupled mappings similar
to the single reed behaviour (the third mapping strategy
used), whereas beginners initially preferred simpler
mappings (easier to play and produce stable sounds).
Figure 6. Several mappings used in the clarinet simulation presented in [12]. (The original figure diagrams the connections from controller variables such as lip pressure to dynamics (Y), vibrato (X, relative) and fundamental frequency (X, absolute) under each of the three mapping strategies.)
The two most important consequences of this work were:
- By just changing the mapping layer between the con-
troller and the synthesis algorithm, it was indeed possi-
ble to completely change the instrumental behaviour and
thus the instrument’s feel to the performer. Depending
on the performer’s previous experience and expectations,
different mappings were preferred.
- By deconstructing the way that the reed actually
works, it was noted that the choice of mapping could be
important as a pedagogical variable. Indeed, in stark
contrast with acoustic instruments where the dependen-
cies between parameters are unchangeable, cross-
coupling between variables can easily be created or de-
stroyed in digital musical instruments. This means that
performers could focus on specific aspects of the instru-
ment by explicitly defining its behaviour. Possible op-
tions could include:
complex (cross-coupled) control of loudness with
one-to-one control of timbre,
one-to-one loudness and complex timbre controls,
complex loudness and timbre controls, such as in
the real instrument.
Even if these results supported the essential role of
mapping (and the importance of devising mapping
strategies other than one-to-one during the design of
digital musical instruments), they could not be easily
extrapolated to more general situations. In fact, in the
above specific case, there did exist a model of complex
mapping to be followed, since the controller was a
model of the acoustic instrument. So what about map-
pings in general digital musical instruments using alter-
nate controllers, those not based on traditional acoustic instruments?
Since there will not always be ready models for inspira-
tion when designing mapping strategies for new digital
musical instruments, the task then becomes one of pro-
posing guidelines for mapping and also, if possible,
devising models that can facilitate the implementation
of mapping strategies other than simple one-to-one relationships.
In trying to answer this question of how to extend a
specific mapping solution to a more general case, a
model of mapping for digital musical instruments was
proposed in [13]. It was based on the separation of the
mapping layer into two independent layers, coupled by
an intermediate set of user-defined (or “abstract”) pa-
rameters. This model was presented in the framework of
a set of extensions to jMax later known as ESCHER
(actually, a set of objects developed by Norbert Schnell
to perform interpolation using additive models).
This idea is based on previous works, such as those of
Mulder et al. [9], Métois [8], and Wessel [18]. A similar
direction was presented by Mulder and Fels in [10] and
later by Garnett and Goudeseune [7]. Basically, all these
works have used higher levels of abstraction as control
structures instead of raw synthesis variables such as am-
plitudes, frequencies and phases of sinusoidal sound
partials. The main point made in [13] was to think explicitly about two separate mapping layers and the strategies to implement them, rather than about the choice of intermediate parameters themselves, whether perceptive, geometrical or “abstract” [14].
The intrinsic advantage of this model is its flexibility.
Indeed, for the same set of intermediate parameters and
synthesis variables, the second mapping layer is inde-
pendent of the choice of controller being used. The
same would be true in the other sense: for the same con-
troller and the same set of parameters, multiple synthesis
techniques could be used by just adapting the second
mapping layer, the first being held constant. Specifi-
cally in this case, the choice of synthesis algorithm is
transparent to the user.
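A minimal sketch of this two-layered idea, with invented intermediate parameters ('energy', 'spread'): the first layer is specific to the controller, the second to the synthesis engine, and either can be swapped independently.

```python
def layer1_sliders(s1, s2, s1_prev):
    """Layer 1 (controller-specific): raw inputs -> abstract parameters."""
    return {"energy": abs(s1 - s1_prev),   # rate of change of slider 1
            "spread": abs(s1 - s2)}        # distance between the sliders

def layer2_fm(abstract):
    """Layer 2 (synthesis-specific): abstract parameters -> FM inputs."""
    return {"carrier_amp": abstract["energy"],
            "mod_index": 5.0 * abstract["spread"]}

def instrument(s1, s2, s1_prev):
    # Replacing the controller touches only layer 1; replacing the
    # synthesis technique touches only layer 2.
    return layer2_fm(layer1_sliders(s1, s2, s1_prev))
```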
The original two-layered model has recently been ex-
panded to include three mapping layers in two inde-
pendent performance works by Hunt and Myatt [11], and
by Arfib and collaborators [1]. These works support the
idea that, by using multi-layered mappings, one can
obtain a level of flexibility in the design of instruments
and that moreover, these models can indeed accommo-
date the control of different media, such as sound and
video, in a coherent way.
One-to-one Mappings – Multiple Layers
We have noted that there is a tendency for designers to
make one-to-one mappings when constructing an inter-
face. We can use this tendency to improve the mapping
process if we utilise the many layered models outlined
above. The following scenario may illustrate this:
Imagine a system whose interface inputs included ‘button 1’, ‘button 2’, ‘slider 1’, ‘slider 2’, ‘mouse x’ and ‘mouse y’. Let us suppose that the synthesis system was a Frequency Modulation module with inputs such as ‘carrier frequency’, ‘carrier amplitude’, ‘modulation frequency’ etc. Now consider the two possibilities below.
Case 1: let us consider a designer working to connect
the above inputs to the above outputs. We are quite
likely to see arbitrary connections such as “mouse x
controls carrier frequency”, and “slider 1 controls modu-
lation frequency”. These give us the oft-encountered
one-to-one mappings.
Case 2: let us imagine that a mapping layer has already
been devised to abstract the inputs to parameters such as
‘energy’, ‘distance between sliders’, ‘wobble’ etc. Also
let us imagine that there is a mapping layer before the
FM synthesis unit, providing higher-level control inputs
such as ‘brightness’, ‘pitch’, ‘sharpness’ etc. Now we
can picture the designer making a relationship such as
“energy controls brightness”. On the surface this may
appear to be yet another one-to-one mapping. Indeed it
is – at the conceptual level. However, when you con-
sider how ‘energy’ is calculated from the given inputs,
and how ‘brightness’ has to be converted into the FM
synthesis primitives, you will notice how many of the
lower-level parameters have been cross-coupled.
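The point can be made concrete with a sketch (all names and formulae here are hypothetical): one conceptual connection, 'energy controls brightness', silently cross-couples several low-level parameters on each side.

```python
def energy(s1, s2, s1_prev, s2_prev):
    """Abstract input: combined rate of change of both sliders."""
    return abs(s1 - s1_prev) + abs(s2 - s2_prev)

def set_brightness(b):
    """Abstract output: 'brightness' fans out to several FM primitives."""
    return {"mod_index": 4.0 * b,            # higher index -> richer spectrum
            "mod_freq_ratio": 1.0 + b,
            "carrier_amplitude": 0.5 + 0.5 * b}

# One connection at the conceptual level...
patch = set_brightness(energy(0.6, 0.2, 0.5, 0.2))
# ...yet two inputs and three synthesis primitives were just cross-coupled.
```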
Thus the many-level mapping models are a way of sim-
plifying the design process, and of helping the designer
to focus on the final effect of the mapping, as well as
providing a convenient method of substituting input
device or synthesis method.
From the evidence presented above in both informal and
controlled experiments, there is definitely a need to
come up with better-designed mappings than simple
(engineering style) one-to-one relationships. General
models of mappings have been proposed and expanded
to incorporate multimedia control, but also to fit several
levels of performance, from beginners to highly skilled performers.
One attempt to foster the discussion in this direction has
been initiated in the context of the ICMA/EMF Work-
ing Group on Interactive Systems and Instrument De-
sign in Music [4]. A further effort is currently being
carried out in the form of a special issue on “Mapping
Strategies for Real-time Computer Music” guest-edited
by the second author [17] to appear as volume 7, num-
ber 2 of the journal Organised Sound later this year.
We therefore welcome comments and criticism on issues
related to mapping so as to push the discussion on this
essential—although often ignored—topic.
The mapping ‘layer’ has never needed to be addressed
directly before, as it has been inherently present in
acoustic instruments courtesy of natural physical phe-
nomena. Now that we have the ability to design in-
struments with separable controllers and sound sources,
we need to explicitly design the connection between the
two. This is turning out to be a non-trivial task.
We are in the early stages of understanding the com-
plexities of how the mapping layer affects the perception
(and the playability) of an electronic instrument by its
performer. What we know is that it is a very important
layer, and one that must not be overlooked by the de-
signers of new instruments.
References

[1] Arfib, D. Personal Communication. (2002)
[2] Buxton, W. There’s more Interaction than Meets the
Eye: Some Issues in Manual Input. In Norman, D. A.
and Draper, S. W. (eds), User Centered System Design:
New Perspectives on Human-Computer Interaction,
Hillsdale, N.J.: Lawrence Erlbaum Associates, 319-337.
[3] Hunt, A. Radical User Interfaces for Real-time Musi-
cal Control. DPhil thesis, University of York, UK.
[4] Hunt, A. and Wanderley, M. M. (eds.) Mapping of
Control Variables to Musical Variables. Interactive Sys-
tems and Instrument Design in Music Working Group.
(2000) Website:
http://www.notam02.no/icma/interactivesystems/mapping.html
[5] Hunt, A., and Kirk, R. Mapping Strategies for Mu-
sical Performance. In M. Wanderley and M. Battier (eds.)
Trends in Gestural Control of Music. IRCAM – Centre
Pompidou. (2000)
[6] Hunt, A., Wanderley, M. M., and Kirk, R. Towards
a Model for Instrumental Mapping in Expert Musical
Interaction. In Proc. of the 2000 International Computer
Music Conference. San Francisco, CA: International
Computer Music Association, pp. 209-211. (2000)
[7] Garnett, G., and C. Goudeseune. Performance Factors
in Control of High-Dimensional Spaces. In Proc. of the
1999 International Computer Music Conference. San
Francisco, CA: International Computer Music Associa-
tion, pp. 268 - 271. (1999)
[8] Métois, E. Musical Sound Information: Musical
Gestures and Embedding Systems. PhD Thesis. MIT
Media Lab. (1996)
[9] Mulder, A., S. Fels, and K. Mase. Empty-Handed
Gesture Analysis in Max/FTS. In Kansei, The Technol-
ogy of Emotion. Proceedings of the AIMI International
Workshop, A. Camurri (ed.) Genoa: Associazione di
Informatica Musicale Italiana, October 3-4, pp. 87-91.
[10] Mulder, A., and Fels, S. Sound Sculpting: Manipu-
lating Sound through Virtual Sculpting. In Proc. of the
1998 Western Computer Graphics Symposium, pp. 15-
23. (1998)
[11] RIMM The Real-time Interactive MultiMedia pro-
ject. (2001) Website: http://www.york.ac.uk/res/rimm/
[12] Rovan, J. B., Wanderley, M. M., Dubnov, S., and
Depalle, P. Instrumental Gestural Mapping Strategies as
Expressivity Determinants in Computer Music Perform-
ance. In Kansei, The Technology of Emotion. Proceed-
ings of the AIMI International Workshop, A. Camurri
(ed.) Genoa: Associazione di Informatica Musicale Ital-
iana, October 3-4, pp. 68–73. (1997)
[13] Wanderley, M. M., Schnell, N. and Rovan, J.
Escher - Modeling and Performing “Composed Instruments” in Real-Time. In Proc. IEEE International Conference on Systems, Man and Cybernetics (SMC’98),
San Diego, CA , pp. 1080–1084. (1998)
[14] Wanderley, M. M., and Depalle, P. Contrôle ges-
tuel de la synthèse sonore. In H. Vinet and F. Delalande
(eds.) Interfaces Homme-Machine et Creation Musicale -
Hermes Science Publishing, pp. 145-163. (1999)
[15] Wanderley, M. M. (ed.) Interactive Systems and
Instrument Design in Music Workgroup. (2000) Web-
site: http://www.notam02.no/icma/interactivesystems/wg.html
[16] Wanderley, M. M. Performer-Instrument Interac-
tion. Application to Gestural Control of Sound Synthe-
sis. PhD Thesis. University Paris VI, France. (2001)
[17] Wanderley, M. M., ed. Mapping Strategies for
Real-time Computer Music. Special Issue. Organised
Sound 7(2). To appear in August 2002.
[18] Wessel, D. Timbre Space as a Musical Control
Structure. Computer Music Journal, 3(2):45–52. (1979)
... In this application domain, it would be desirable to have a one-to-many or few-to-many mapping. That is, a small, easily manageable number of control parameters on the programming interface for the user to interact with, being used to change a large number of synthesis parameters on the synthesizer [116], [117]. Therefore, it becomes a dimensionality reduction challenge [118], [119]. ...
... An interpolator can then be used as a mechanism for producing new output sounds for intermediate control inputs [118]. In most cases this will be a situation where a small number of control values is being mapped to a larger number of synthesis parameters, in other words a fewto-many mapping [117]. As this is a dimension reduction problem, and a high-dimensional interpolator is required. ...
... As this is a dimension reduction problem, and a high-dimensional interpolator is required. Several authors have highlighted the importance of such mappings in the design of new musical instruments [117], [122], [124]. ...
This research investigates the use of graphical interpolation to control the mapping of synthesis parameters for sound design, and the impact that the visual model can have on the interpolator’s performance and usability. Typically, these systems present the user with a graphical pane where synthesizer presets, each representing a set of synthesis parameter values and therefore an existing sound, can be positioned at user-selected locations. Subsequently, moving an interpolation cursor within the pane will then create novel sounds by calculating new parameter values, based on the cursor position and an interpolation model. These systems therefore supply users with two sensory modalities, sonic output and the visual feedback from the interface. A number of graphical interpolator systems have been developed over the years, with a variety of user-interface designs, but few have been subject to formal user evaluation making it difficult to compare systems and establish effective design criteria to improve future designs. This thesis presents a novel framework designed to support the development and evaluation of graphical interpolated parameter mapping. Using this framework, comparative back-to-back testing was undertaken that studied both user interactions with, and the perceived usability of, graphical interpolation systems, comparing alternative visualizations in order to establish how the visual feedback provided by the interface aids the locating of desired sounds within the space. A pilot investigation compared different levels of visual information, the results of which indicated that the nature of visualisation did impact on user interactions. A second study then reimplemented and compared a number of extant designs, where it became apparent that the existing interpolator visuals generally relate to the interpolation model and not the sonic output. 
The experiments also provide new information about user interactions with interpolation systems and evidence that graphical interpolators are, in general, highly usable. In light of the experimental results, a new visualization paradigm for graphical interpolation systems, known as Star Interpolation, is proposed, created specifically for sound design applications. It aims to bring the visualization closer to the sonic behaviour of the interpolator by providing visual cues that relate to the parameter space. It is also shown that hybrid visualizations can be generated that combine the benefits of the new visualization with the existing interpolation models. The results from exploring these visualizations are encouraging, and they appear to be advantageous when using interpolators for sound design tasks.
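The abstract above does not commit to a single interpolation model; a common choice in such preset-interpolation panes is inverse-distance weighting. The sketch below is a minimal illustration under that assumption — the function name and the preset layout are hypothetical, not taken from the thesis:

```python
import math

def interpolate(cursor, presets):
    """Inverse-distance-weighted blend of synthesis presets.

    cursor  -- (x, y) position of the interpolation cursor in the pane
    presets -- list of ((x, y), [param, ...]) pairs placed by the user
    Returns a new parameter vector blended from all presets.
    """
    weights = []
    for (px, py), params in presets:
        d = math.hypot(cursor[0] - px, cursor[1] - py)
        if d == 0.0:               # cursor sits exactly on a preset: return it
            return list(params)
        weights.append(1.0 / d)    # closer presets weigh more
    total = sum(weights)
    n_params = len(presets[0][1])
    return [sum(w * p[1][i] for w, p in zip(weights, presets)) / total
            for i in range(n_params)]
```

Placing the cursor midway between two presets yields the mean of their parameter vectors, which is the behaviour users typically expect from such panes.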
... See (Gadd & Fels, 2002; Hunt et al., 2002). ...
... Possible avenues for improvements in gesture mapping include implementing the cross-coupled parametric approaches described in Hunt et al. (2002), as well as using the taxonomy introduced in Levitin et al. (2002) to improve and enrich control over musical events. Furthermore, implementing a focus group study in the future might be beneficial in observing and detecting emergent behaviors in user interaction with light.void~, ...
This paper discusses the strategies, considerations, and implications of designing and performing with a light-dependent digital musical interface (DMI), named light.void~. This interface is introduced as a replica of light thing, an existing DMI designed and popularized by British artist Leafcutter John. The rationale for reproducing this DMI is presented, followed by a discussion around the guiding criteria for establishing data-to-sound mappings, and the kind of affordances that these decisions may bring — including performer control, unpredictability, intentionality, spontaneity, action-sound reactivity, visual interest, and so on. The remainder of the paper focuses on dissecting the nature of this digital musical instrument, using contributions by DMI researchers Miranda and Wanderley as the main analytical framework. The outcome of this process is a semi-improvisational work titled «Umbra», along with the open source documentation for the light.void~ interface. Additionally, some relevant questions emerge with regards to performer expertise, observed vs. unobserved performance, as well as ontological frictions between instrument, composer, performer, designer, and audience.
... The same principles may also be found in musical instruments. Hunt et al. (2003) found that users preferred more complex and unpredictable mappings over simpler ones in their instrument designs. However, the performer's skill level may be a factor. ...
A techno-cognitive look at how new technologies are shaping the future of musicking. “Musicking” encapsulates both the making of and perception of music, so it includes both active and passive forms of musical engagement. But at its core, it is a relationship between actions and sounds, between human bodies and musical instruments. Viewing musicking through this lens and drawing on music cognition and music technology, Sound Actions proposes a model for understanding differences between traditional acoustic “sound makers” and new electro-acoustic “music makers.” What is a musical instrument? How do new technologies change how we perform and perceive music? What happens when composers build instruments, performers write code, perceivers become producers, and instruments play themselves? The answers to these pivotal questions entail a meeting point between interactive music technology and embodied music cognition, what author Alexander Refsum Jensenius calls “embodied music technology.” Moving between objective description and subjective narrative of his own musical experiences, Jensenius explores why music makes people move, how the human body can be used in musical interaction, and how new technologies allow for active musical experiences. The development of new music technologies, he demonstrates, has fundamentally changed how music is performed and perceived.
... The mapping layer attracts particular interest (e.g. Hunt, Wanderley, & Paradis, 2003, Magnusson, 2009): on acoustic instruments, the action-sound relationship is fixed by mechanical design, but DMIs allow arbitrary relationships to be created, including complex mappings involving stochastic or generative processes, or mapping-by-demonstration using machine learning (e.g. Fiebrink, 2011). ...
... At the heart of mapping strategies are coupled components. From acoustic instruments we learn that the relationship between the bow, the strings and the soundbox is inseparable: together they act as both the control mechanism and the sound generator (Hunt et al., 2003). Mapping schemes can be extended beyond this, and four classic categories of mapping strategy have been proposed: ...
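The four classic categories usually cited are one-to-one, one-to-many (divergent), many-to-one (convergent), and many-to-many. A minimal sketch of all four as linear mapping layers follows — the control names and weight values are purely illustrative assumptions:

```python
def apply_mapping(matrix, controls):
    """Linear mapping layer: each row holds the weights with which the
    controller outputs feed one synthesis parameter."""
    return [sum(w * c for w, c in zip(row, controls)) for row in matrix]

# Hypothetical sensor values: pressure, tilt, position (normalised 0..1).
controls = [0.8, 0.2, 0.5]

one_to_one   = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]          # identity mapping
one_to_many  = [[1, 0, 0], [1, 0, 0], [1, 0, 0]]          # first control fans out
many_to_one  = [[1/3, 1/3, 1/3], [0, 0, 0], [0, 0, 0]]    # controls converge
many_to_many = [[0.5, 0.3, 0.2],                          # cross-coupled weights,
                [0.2, 0.6, 0.2],                          # as in the acoustic
                [0.1, 0.1, 0.8]]                          # instruments above
```

With the identity matrix each sensor drives exactly one parameter; the cross-coupled `many_to_many` matrix is the kind of mapping Hunt et al. found more engaging for performers.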
This thesis is about sound and space, and is an exploration of sounds and spaces using Pierre Schaeffer's sound object theory. It addresses aesthetic and experimental approaches to the exploration of spatial audio and site-specific practices through the intrinsic and extrinsic features of sound objects. These experimental approaches make use of software tools for composition, installation, spatial programming, and sound design, as well as for virtual reality simulation. The main contribution of the thesis is an exploration of the relationships between sound and space, going beyond the technical issues of the spatialisation paradigm and into issues of place, site, and landscape as guiding principles for spatial audio practices. In this thesis the ambisonic soundfield is seen as a link between sound objects and the spatialisation of sound masses, sharing the same multidimensional space. The thesis aims to study the various features of sound objects through a multidimensional model that gives access to main features, as well as sub-features and sub-sub-features, of sound objects. The thesis is divided into four parts: the first three discuss different aspects of the object–structure relationship, and the last is a discussion of possible extensions of Schaeffer's typo-morphological system of identification, classification, and description of sound to encompass spatial features.
... Since there is a broad range of possible two-hand gestures that can serve as data input, we decided to use a one-to-one mapping strategy on both audio and visual layers to simplify the design process [11]. Because of the interconnectedness among ten fingers, movements on x, y, and z positions, as well as the limitation and interference due to palm orientations and positions as a whole, even the simplest one-to-one mapping can produce a rich sonic and visual result through hand movements. ...
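A one-to-one layer of the kind described above can be sketched as follows; the parameter names, the per-finger axis assignments, and the normalised 0..1 ranges are illustrative assumptions, not details from the cited system:

```python
def map_hand(fingertips):
    """One-to-one mapping sketch: each fingertip axis drives exactly one
    audio or visual parameter.

    fingertips -- list of (x, y, z) tuples, one per tracked finger,
                  normalised to the 0..1 range.
    Returns hypothetical (audio_params, visual_params) dictionaries.
    """
    audio, visual = {}, {}
    for i, (x, y, z) in enumerate(fingertips):
        audio[f"filter_cutoff_{i}"] = x    # x position -> one audio parameter
        audio[f"filter_gain_{i}"] = y      # y position -> one audio parameter
        visual[f"fractal_depth_{i}"] = z   # z position -> one visual parameter
    return audio, visual
```

Even though every assignment is one-to-one, the physical interdependence of the ten fingers means the combined output still varies in a rich, coupled way, which is the point the snippet above makes.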
This paper presents the concept of Embodied Sonic Meditation (ESM) and its proof-of-concept art installation entitled "Resonance of the Heart." ESM artistically explores the theories of "embodied cognition" and "deep listening." The goal of this artistic practice is to improve laypersons' comprehension of the relationship between body gestures, sounds, and visuals. To practice this approach, we designed and built a real-time audiovisual interactive system. This system uses an infrared sensing device and touchless hand gestures to produce various sonic and visual results. An artificial neural network was implemented to track and estimate the performer's subtle hand gestures using the infrared sensing device's output. Six sound filtering techniques were implemented to simultaneously process audio based on the gesture. Selected Mudra hand gestures were mapped to seven 4-dimensional Buddhabrot fractal deformations in real-time. This project was applied in both college teaching and public art installation. It connects Eastern philosophy to cognitive science and mindfulness practice. It augments multidimensional spaces, art forms, and human cognitive feedback. It disrupts the boundary between cultural identities, machine intelligence, and universal human meaning.
This article explores the ways specific hardware and software technologies influence the design of musical instruments. We present the outcomes of a compositional game in which music technologists created simple instruments using common sensors and the Pure Data programming language. We identify a clustering of stylistic approaches and design patterns, and we discuss these findings in light of the interactions suggested by the materials provided, as well as makers' technomusical backgrounds. We propose that the design of digital instruments entails a situated negotiation between designer and tools, wherein musicians react to suggestions offered by technology based on their previous experience. Likewise, digital tools themselves may have been designed through a similar situated negotiation, producing a recursive process through which musical values are transferred from the workbench to the instrument. Instead of searching for ostensibly neutral and all-powerful technologies, we might instead embrace and even emphasize the embedded values of our tools, acknowledging their influence on the design of new musical artifacts.
Since the advent of real-time computer music environments, composers have increasingly incorporated DSP analysis, synthesis, and processing algorithms in their creative practices. Those processes became part of interactive systems that use real-time computational tools in musical compositions that explore diverse techniques to generate, spatialize, and process instrumental/vocal sounds. Parallel to the development of these tools and the expansion of DSP methods, new techniques focused on sound/musical information extraction became part of the tools available for music composition. In this context, this article discusses the creative use of Machine Listening and Musical Information Retrieval techniques applied in the composition of live-electronics works. By pointing out some practical applications and creative approaches, we aim to circumscribe, in a general way, the strategies for employing Machine Listening and Music Information Retrieval techniques observed in a set of live-electronics pieces, categorizing four compositional approaches, namely: mapping, triggering, scoring, and procedural paradigms of application of machine listening techniques in the context of live-electronics music compositions.
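The distinction between the mapping and triggering paradigms named above can be made concrete with a toy sketch; the feature, threshold, and parameter names are assumptions for illustration only:

```python
def machine_listening_control(feature, threshold=0.6):
    """Two live-electronics paradigms driven by one extracted audio feature
    (e.g. a spectral centroid normalised to 0..1).

    mapping:    continuous control -- the feature directly scales a parameter
    triggering: discrete control   -- crossing a threshold fires an event
    """
    reverb_mix = feature * 0.5            # mapping paradigm: continuous scaling
    fire_sample = feature > threshold     # triggering paradigm: boolean event
    return reverb_mix, fire_sample
```

The scoring and procedural paradigms build on the same primitives: scoring sequences such decisions against a timeline, while procedural approaches let the analysis drive generative rules.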
The present work is a study of compositional strategies in the use of Machine Listening (ML) and Music Information Retrieval (MIR) techniques applied to live-electronic music, proposing a technical, contextual, and creative study. The research starts from a theoretical foundation that seeks to discuss and contextualize the emergence and development of ML and MIR as interdisciplinary research fields, and addresses concepts related to interactive musical systems, interactive music, and live-electronics. The theoretical foundation also extends to a review of a set of techniques, processes, and tools, aiming to elucidate the main concepts in the implementation of ML/MIR methods. Subsequently, the investigative study covers some general observations about the main characteristics of musical writing in interactive music/live-electronics associated with the use of the studied tools. With that, we point out some practical applications and compositional approaches in a set of pieces from the repertoire. Finally, the last part of the work discusses the creative process of a composition for acoustic guitar and live-electronics, addressing particular strategies for employing ML/MIR techniques.
Current input device taxonomies and other frameworks typically emphasize the mechanical structure of input devices. We suggest that selecting an appropriate input device for an interactive task requires looking beyond the physical structure of devices to the deeper perceptual structure of the task, the device, and the interrelationship between the perceptual structure of the task and the control properties of the device. We affirm that perception is key to understanding the performance of multidimensional input devices on multidimensional tasks. We have therefore extended the theory of processing of perceptual structure to graphical interactive tasks and to the control structure of input devices. This allows us to predict task and device combinations that lead to better performance, and to hypothesize that performance is improved when the perceptual structure of the task matches the control structure of the device. We conducted an experiment in which subjects performed two tasks with different perceptual structures, using two input devices with correspondingly different control structures: a three-dimensional tracker and a mouse. We analyzed both speed and accuracy, as well as the trajectories generated by subjects as they used the unconstrained three-dimensional tracker to perform each task. The results support our hypothesis and confirm the importance of matching the perceptual structure of the task and the control structure of the input device.
Digital musical instruments do not depend on physical constraints faced by their acoustic counterparts, such as characteristics of tubes, membranes, strings, etc. This fact permits a huge diversity of possibilities regarding sound production, but on the other hand strategies to design and perform these new instruments need to be devised in order to provide the same level of control subtlety available in acoustic instruments. In this paper I review various topics related to gestural control of music using digital musical instruments and identify possible trends in this domain.
Thesis (Ph. D.)--University of York, 1999.
Graspable UIs advocate providing users concurrent access to multiple, specialized input devices which can serve as dedicated physical interface widgets, affording physical manipulation and spatial arrangement (2, 4). We report on an experimental evaluation comparing a traditional GUI design, with its time-multiplex input scheme, versus a Graspable UI design having a space-multiplex input scheme. Specifically, the experiment is designed to study the relative costs of acquiring physical devices (in the space-multiplex conditions) versus acquiring virtual logical controllers (in the time-multiplex condition). We found that the space-multiplex conditions outperform the time-multiplex conditions for a variety of reasons, including the persistence of attachment between the physical device and the logical controller. In addition, we found that the use of specialized physical form factors for the input devices, instead of generic form factors, provides a performance advantage. We argue that the specialized devices serve as both visual and tactile functional reminders of the associated tool assignment, and facilitate manipulation due to the customized form factors.