Democratising DMIs: the relationship of expertise and
control intimacy
Robert H. Jack, Jacob Harrison, Fabio Morreale, Andrew McPherson
Centre for Digital Music
Queen Mary University of London
London, UK
(r.h.jack)(j.harrison)(f.morreale)(a.mcpherson)@qmul.ac.uk
ABSTRACT
An oft-cited aspiration of digital musical instrument (DMI)
design is to create instruments, in the words of Wessel and
Wright, with a ‘low entry fee and no ceiling on virtuosity’.
This is a difficult task to achieve: many new instruments
are aimed at either the expert or amateur musician, with
few instruments catering for both. There is often a balance
between learning curve and the nuance of musical control
in DMIs. In this paper we present a study conducted with
non-musicians and guitarists playing guitar-derivative DMIs
with variable levels of control intimacy: how the richness
and nuance of a performer’s movement translates into the
musical output of an instrument. Findings suggest a signif-
icant difference in preference for levels of control intimacy
between the guitarists and the non-musicians. In particular,
the guitarists unanimously preferred the richest of the two
settings whereas the non-musicians generally preferred the
setting with lower richness. This difference is notable because it is often taken as a given that increasing richness is a way to make instruments more enjoyable to play; however, this result only seems to be true for expert players.
Author Keywords
expertise, learning, control intimacy, richness, sensorimotor
skill
CCS Concepts
• Applied computing → Sound and music computing; Performing arts;
1. INTRODUCTION
Within NIME and related research fields, much discussion
has centred around richness of control, the level of detail
of control that a performer has over an instrument. It has
been proposed for many years that richer mappings between
control input and sound output are better [25], and much
design effort has gone into this idea. Yet in practice, the
design of digital musical instruments (DMIs) involves bal-
ancing two factors that can often seem at odds with one an-
other: the steepness of the learning curve that a performer
has to climb to make music with an instrument, and con-
trol intimacy, how the richness and nuance of a performer’s
movement translates into musical output.
Licensed under a Creative Commons Attribution
4.0 International License (CC BY 4.0). Copyright
remains with the author(s).
NIME’18, June 3-6, 2018, Blacksburg, Virginia, USA.
In this paper we aim to interrogate the idea that ‘a richer
instrument is a better instrument’. We designed a study in-
volving two different groups of players (guitarists and non-
musicians), which compared two levels of richness (audio-
rate coupling between strings and sound model, and note-
based triggers) within the same guitar-derivative DMI. We
found that while the experienced musicians unanimously preferred the richer instrument, this preference did not extend to the other group. The non-musicians were more ambiguous in
their preference for the instruments, but perhaps surpris-
ingly, tended towards the less rich instrument overall. This
paper aims to clarify the processes that led to this prefer-
ence by analysing interviews conducted with the performers
and their gesture language with different levels of richness.
2. RELATED WORK
Theories of sensorimotor skill acquisition generally agree
that the process passes through a number of qualitatively
different stages as the learner progresses from novice to
expert [2, 7]. The stage-based approach is assumed to
hold across different skill development domains [5] includ-
ing learning musical instruments [12]. A classic model of
the stage-based approach to skill acquisition is by Fitts [6]
which consists of three stages:
Cognitive: the task is broken down into small components that are not yet related to the whole. Performance is characterised by high error, high variability and a detachment of the individual components from the whole task.

Associative: follows an extended period of deliberate practice. The performer can associate actions with successful results, makes fewer errors, and becomes more aware of her errors through a (partial) understanding of the whole task.

Autonomous: reached through further deliberate practice. The performer can carry out the components of the skilled action at a faster pace and without conscious attention, and can focus on higher-level aspects of the task.
Learning an instrument involves internalising how action
translates to sound, which is initially acquired by explor-
ing and manipulating the instrument with somewhat ar-
bitrary actions that lead to unexpected results [9], what
Wessel called the ‘babbling’ stage due to its similarities to
the manner in which young children learn to form words
[24]. A process of experimentation, exploration, repetition and association builds an internal model that captures the
relationships of the actions the instrument affords and the
resultant sound [12]. Once the musician progresses to a level
of expertise they do not need to focus attention on the in-
dividual operations of manipulating an instrument, instead
focusing on higher-level musical intentions: the instrument
becomes a ‘natural’ extension of the musician’s body and
no longer an obstacle to an embodied interaction with the
music [19].
Traditional musical instruments are often given as exam-
ples of tools where the relationship of gesture and sound
is both intuitive and complex: Dobrian and Koppelman
[4] emphasise the importance of relating DMI design to
its acoustic ancestry for this reason. Unlike acoustic in-
struments, DMIs can boost a performer’s ability to achieve
musically complex results by partially shifting the respon-
sibility for sound production away from their sensorimotor
ability and to the digital system. How intuition, complex-
ity and the development of expertise are balanced in DMI
design has been the focus of much research in this field.
2.1 Instrument efficiency
Efficiency in regards to musical instruments can be consid-
ered as the instrument’s ability to transfer input gestures to
musical sound [11]. Jordà posits that the kalimba, at least in the first stages of learning, is a more efficient instrument than the piano: whereas a piano has many notes, a kalimba has few, and they are all the ‘right notes’; its form intuitively encourages the performer to play with their thumbs; and once the kalimba is held in both hands it is clear which thumb should be used to control which notes. Instrument efficiency thus depends on the relationship between the complexity of the input gestures and the complexity of the resultant sound output, and the ease with which a performer can get from one to the other. Jordà illustrates the balance between
challenge, frustration and boredom by comparing these in-
struments and suggests that new instruments should adhere
to Wessel and Wright’s ‘low entry fee with no ceiling on vir-
tuosity’ [25].
Pardue [20] states ‘it is in learning where DMIs can have
an inherent advantage over traditional instruments’, propos-
ing the term complexity management to describe the notion
that instrument efficiency can be progressively managed
over time in order to maintain a rewarding learning expe-
rience at all levels of expertise. By guiding the novice user
towards less complex musical output, certain techniques can
be isolated for technical practice. This approach could also
provide more immediate access to less formal musical prac-
tice such as improvising or ‘jamming’ with other musicians,
activities which McPherson and McCormick [15] show help
with the level of cognitive engagement during musical prac-
tice.
2.2 Designing affordances
The process of designing DMIs is often conceptualised as the creation of affordances and constraints for music making [13]. In a recent paper discussing embodied control
in HCI, Tuuri et al. [23] distinguish between push effects
which force or guide the user to particular choices, versus
pull effects which relate to an ease of conceiving how ac-
tion relates to a particular output, as characteristics of an
instrument’s affordance structure. Through these effects a
performer is provided with control of an instrument, while
equally the instrument enforces control on the performer.
Jack et al. [10] propose a model of DMIs that is based
on projection from a high-dimensional space of body move-
ment to a reduced space of sonic and kinematic features
and behaviours. In this model the instrument represents
a bottleneck in the flow of gestural information, which is
positioned through design choices. In this study, we are
investigating the influence of different widths of bottleneck
on a performer’s experience – each of the instruments we created allows a different level of detail to be carried into the
sound output, and we are interested in how this folds back
on the gestures that the performers use.
2.3 Intimate control and expression
Control intimacy, as first introduced by Moore [17], can be described as the perceived match between the behaviour of an instrument and a performer’s psychophysiological capabilities when controlling it. Intimate instru-
ments (such as the voice, violin, sitar, flute) allow the micro-
gestural movements of the performer to create a wide range
of affective variation in the control of musical sound. Moore
identifies MIDI as suffering from a deficit of intimate control in its discretisation of musical performance into a sequence of note-on and note-off messages with velocity. Wessel and Wright ex-
pand on Moore’s notion of control intimacy and its relation-
ship to virtuosity, stating that ‘a high degree of control in-
timacy can be attained with compelling control metaphors,
reactive low latency variance systems, and proper treatment
of gestures that are continuous functions of time’ [25, p. 2].
They propose that high control intimacy can encourage per-
formers to continually develop their skill on an instrument
and their personal style: low intimacy implies that the com-
munication between a performer and a device is poor and
there is a barrier to expressive performance, and hence en-
gagement.
3. STUDY
In this study, we aimed to ask ‘does a richer instrument nec-
essarily mean a better instrument?’, and explore how prior
experience and expertise relates to this. Specifically, we
aimed to investigate the effects of modulating the richness
of a plucked-string guitar-based instrument on the experi-
ence and gestural language of guitarists and non-musicians.
3.1 Instrument design
We designed two guitar-derivative DMIs with different form
factors but identical sensor topologies, based on real guitar
strings with piezo sensors. One follows the shape and form
of a guitar and is held with a strap, with the right hand
resting over the strings and left hand holding the neck. The
‘tabletop’ version was designed to follow design cues from
boutique music hardware such as modular synth controllers.
Figure 1 shows the two instruments.
Figure 1: Two instruments used for this study: gui-
tar form (L) and tabletop form (R)
3.1.1 Physical construction
Both instruments were designed to be played using simi-
lar techniques to a guitar or other strummed string instru-
ments. They feature six buttons mapped to six chords. The
buttons on the guitar form are placed on the neck, around
where the lower frets would be on a traditional guitar neck.
The tabletop version features the buttons in the lower left corner, with the strings placed at a 45-degree angle, as this
was found to be the most comfortable for strumming with
the right hand. Both instruments featured a method of
switching between the two sensor mappings described be-
low.
The physical construction of the string modules involves
six short lengths of .040 gauge bass guitar string held loosely
over a ‘strummable area’ of about 10 cm. At one end, the
strings are terminated over a block of felt-covered foam,
with six individual bridge-pieces at the other with inte-
grated piezo disc sensors, and held to a low tension using
adjustable zither pins. This provides a strong acoustic sig-
nal when strummed or plucked, similar to the attack of a
plucked string on a guitar. The thickness of the strings and their low tension provide a short decay and fewer resonant properties than a typical guitar string held at tension.
3.1.2 Sensor mappings
Both variations of the instrument use the Karplus-Strong algorithm to simulate six virtual strings, which are individually excited using signals from the individual piezo sensors. We
used a Bela board [14] for the sensing and string modelling
to create a high-performance, low latency embedded instru-
ment in each case. The only difference between the two
variations is the mapping of the piezo signal to the exci-
tation of the virtual strings. The two mapping structures
used are ‘audio-rate’ excitation and ‘sample triggering’:
Audio-rate excitation: Excitation of a virtual string
model using a real-time audio signal has been implemented
and documented in previous NIME research, including the
Kalichord [21], BladeAxe and PlateAxe [1] and Caress in-
struments [16]. Such instruments allow intuitive control
over the resulting sound by varying the way the physical
model is excited (plucking hard or soft, or with different ma-
terials). Our instruments follow a similar principle, but use
dampened strings terminated over piezo sensors to drive the
virtual string models. This allows the use of natural strum-
ming and plucking gestures, as well as less traditional ones
such as tapping, scraping or stroking the strings, all of which produce musically meaningful variation in the output audio signal.
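To make this mapping concrete, the sketch below simulates a Karplus-Strong string driven directly by an input signal at audio rate. This is a minimal offline reconstruction rather than the authors’ Bela implementation: the synthetic ‘soft’ and ‘hard’ bursts stand in for the piezo signal, and the function names and parameter values are illustrative assumptions.

    import numpy as np

    def karplus_strong(excitation, freq_hz, sr=44100, dur_s=2.0, damping=0.996):
        # Karplus-Strong string: a delay line with a lowpass filter in its
        # feedback loop, driven sample-by-sample by an external excitation.
        n = int(sr * dur_s)
        delay_len = int(sr / freq_hz)
        delay = np.zeros(delay_len)
        out = np.zeros(n)
        exc = np.pad(excitation, (0, max(0, n - len(excitation))))[:n]
        idx, prev = 0, 0.0
        for i in range(n):
            y = delay[idx]                   # delayed sample
            fb = damping * 0.5 * (y + prev)  # averaging lowpass plus decay
            prev = y
            delay[idx] = fb + exc[i]         # inject excitation into the loop
            out[i] = y + exc[i]
            idx = (idx + 1) % delay_len
        return out

    sr = 44100
    t = np.arange(int(0.01 * sr)) / sr
    # Soft and hard 'plucks' with different amplitudes and spectra, standing
    # in for the piezo signal of a gentle versus an aggressive strum.
    soft = 0.2 * np.sin(2 * np.pi * 300 * t) * np.hanning(len(t))
    hard = 0.9 * np.sign(np.sin(2 * np.pi * 300 * t)) * np.hanning(len(t))
    soft_out = karplus_strong(soft, freq_hz=196.0)  # a G string
    hard_out = karplus_strong(hard, freq_hz=196.0)
    # The outputs differ in level and timbre because the input waveform
    # itself excites the string model.

Because the raw input enters the feedback loop, any detail in the excitation (amplitude, spectrum, contact noise) survives into the decaying string tone.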
Sample triggering: The sample triggering version uses
the same synthesis technique but dramatically reduces the
amount of achievable variation in the input signal. Rather than passing the audio signal directly to the virtual string algorithm, a peak detection algorithm is used to trigger a pre-recorded pluck sample whenever an amplitude peak is detected. The pluck sample was taken directly from the piezo audio signal, so it is directly comparable with the audio-rate version; however, the timbre and dynamics remain static, independent of the input gesture.
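The sample-triggering path can be sketched in the same style; again this is an illustrative reconstruction under assumed threshold and refractory values, not the study code. A threshold-based peak detector reduces the input to discrete trigger events, each of which plays back the same canned pluck:

    import numpy as np

    def detect_peaks(signal, threshold=0.1, refractory=1000):
        # Indices where the rectified signal crosses the threshold; a
        # refractory period prevents retriggering on the same pluck.
        peaks, last = [], -refractory
        for i, v in enumerate(np.abs(signal)):
            if v > threshold and (i - last) >= refractory:
                peaks.append(i)
                last = i
        return peaks

    def sample_trigger(signal, pluck, out_len):
        # Mix the identical pre-recorded pluck at every detected peak:
        # the input's own timbre and dynamics are discarded.
        out = np.zeros(out_len)
        for p in detect_peaks(signal):
            end = min(out_len, p + len(pluck))
            out[p:end] += pluck[:end - p]
        return out

    sr = 44100
    t = np.arange(int(0.01 * sr)) / sr
    pluck = 0.8 * np.sin(2 * np.pi * 196 * t) * np.exp(-400 * t)  # canned pluck
    soft = 0.2 * np.sin(2 * np.pi * 300 * t) * np.hanning(len(t))
    hard = 0.9 * np.sign(np.sin(2 * np.pi * 300 * t)) * np.hanning(len(t))
    # Both inputs exceed the threshold and so trigger the same event: the
    # difference between a soft and a hard pluck is flattened to an impulse.
    out_soft = sample_trigger(soft, pluck, sr)
    out_hard = sample_trigger(hard, pluck, sr)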
3.2 Study Design
Participants were asked to compare two versions of the
same instrument. We used a combination of improvisa-
tion/exploration and prescribed musical tasks. We also collected qualitative data using questionnaires and structured interviews at two points during the study, and audio- and video-recorded the whole session.
We recruited 32 participants (16 ‘competent’ guitarists and 16 non-musicians). Participants were asked to self-identify at the recruitment stage using the following statements: ‘you are comfortable strumming and playing along to a tune’ (competent guitarists) and ‘you have no or very little experience playing an instrument’ (non-musicians). We asked participants to complete the self-report questionnaire section of the Goldsmiths Musical Sophistication Index (Gold-MSI) test battery [18].
The full study was designed to investigate several factors,
some of which are not relevant to this paper. This involved
a comparison of the physical form of the instruments (the
tabletop and guitar-shaped forms described in Section 3.1),
as well as sensor topologies (the string modules described
previously, and a touch sensor variation). The results of
the comparison between form and sensor topology are pre-
sented elsewhere [8]. In this paper, we are concerned only
with those instruments which featured the ‘string’ sensor
topology. The results presented here concern the compar-
ison of these instruments in either ‘sample-triggering’ or
‘audio-rate’ variations.
3.2.1 Musical tasks
Participants were instructed first to improvise and explore
with the instrument in the sample-triggering setting for
seven minutes. They were then given a further seven min-
utes to rehearse and perform an accompaniment to a record-
ing of a folk song. We recorded a piece taken from the folk-
RNN songbook [22] for this purpose, arranged for fiddle and
electric bass. The chord structure of the song used chords I,
IV and V in the key of G. We added coloured stickers to the
buttons to indicate these chords and printed a colour-coded
score for participants to follow while playing. We also pro-
duced a video file displaying the chord colours and positions
on screen as they appeared in the score, in a similar manner
to the Guitar Hero games. Participants were allowed to use
either or both of these methods to follow the backing track
but were encouraged to use the printed score if they felt
comfortable to do so. The buttons and score are presented
in Figure 2.
Figure 2: L-R: colour-coded paper score, screenshot
of on-screen chord visualiser, colour-coded buttons
For the final musical task, we instructed the participants
to switch to the audio-rate variation of the instrument us-
ing a switch on the instrument’s enclosure. They were then
given ten minutes to improvise and explore with the instru-
ment. No further score-following tasks were given.
3.2.2 Questionnaire and structured interviews
After the score-following task we conducted structured in-
terviews with the participants asking questions related to
the techniques they used and their overall impression of
the instrument in the sample-triggering variation. Follow-
ing the final musical task with the audio-rate variation, an-
other structured interview took place, this time focusing
specifically on differences and similarities between the two
settings. We then asked participants to indicate their pref-
erence for either setting using a horizontal on-screen slider
with ‘setting 1’ (sample-triggering) on the left and ‘setting
2’ (audio-rate) on the right. This produced a value from
0-100, with 0 indicating strong preference for setting 1, and
100 indicating strong preference for setting 2.
4. FINDINGS
Our findings consist of a quantitative measure from the comparative rating of each setting and a pair of thematic analyses: the first on the interviews conducted with the partic-
ipants, the second on their gestural language while playing
the instrument.
4.1 Participant data
We recruited 32 participants, 16 in each group. 19 participants were male (13 guitarists and 6 non-musicians) and 13 were female (3 guitarists and 10 non-musicians). Participant age ranged from 18 to 62, with a mean of 32 years. The mean Gold-MSI scores were 89 (SD = 11, minimum = 72) for guitarists and 55 (SD = 11, maximum = 70) for non-musicians; the guitarists’ minimum and the non-musicians’ maximum show how close the two groups came to overlapping.
4.2 Ratings
Figure 3 shows the comparative rating of the settings by
each group. A Welch two-sample t-test on the comparative ratings from the two groups found a significant difference between them (t = 5.6833, df = 16.731, p < .01). All
16 guitarists rated the audio-rate setting as better, whereas
there was more disagreement in the non-musician category
with 6 rating audio-rate as better and 10 rating sample-
triggering as better. We also tested for an effect of the dif-
ferent global forms (guitar-shaped vs. tabletop) but found
no significant effect, meaning that the difference in rating
was driven by the difference between the settings.
Figure 3: Median and IQR ratings of setting 1
(sample-triggering represented by 0 on the y-axis)
and setting 2 (audio-rate represented by 100 on the
y-axis) for all 32 participants.
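For illustration, a comparison of this kind can be computed with Welch’s test, which does not assume equal group variances and yields the fractional degrees of freedom reported above. The ratings below are invented placeholder values on the 0-100 preference slider, not the study data:

    import numpy as np
    from scipy import stats

    # Placeholder ratings (100 = strong preference for the audio-rate
    # setting); these are NOT the study's real data.
    guitarists = np.array([92, 88, 75, 95, 81, 70, 99, 85,
                           78, 90, 83, 96, 72, 88, 91, 80])
    non_musicians = np.array([35, 60, 20, 45, 70, 15, 30, 55,
                              40, 25, 65, 10, 50, 38, 62, 28])

    # equal_var=False selects Welch's t-test, which does not assume equal
    # group variances and produces fractional degrees of freedom.
    t_stat, p_val = stats.ttest_ind(guitarists, non_musicians, equal_var=False)
    print(f"t = {t_stat:.3f}, p = {p_val:.4f}")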
4.3 Participant reasoning
We performed a thematic analysis on the transcripts from
the structured interview with a focus on reasoning in rela-
tion to the following themes: sound, technique, instrument
behaviour, relation to existing instruments or interfaces.
Table 1 presents some sample quotes that are representative
of the reasoning of each group.
The 6 guitarists were quick to critique the sample-triggering
variation at the end of their first session, even without the
knowledge that a richer mapping would be introduced later
in the study. Most of the comments focused on the lack
of timbral expression and the flatness of the articulation
on the instrument in comparison to what they were used
to on a traditional guitar. The addition of these capabili-
ties with the audio-rate setting was mentioned by 12 par-
ticipants in this group. The audio-rate setting’s ability to
support existing technique was also a recurring theme,
with particular reference to fingerpicking and articulation.
6 members of this group also made reference to the ‘feel’ of
the instrument as more ‘guitarry’ or ‘natural’ in comparison
to sample-triggering: “setting 2 really uses your knowledge
of guitar. Compared to setting 1 where the strings are not
behaving as strings”.
The non-musicians who preferred sample-triggering gen-
erally reported differences between the two settings related
to ‘clarity’ and ‘power’ in comparison to the more ‘fragile’
or ‘far away’ audio-rate setting. Sample-triggering was de-
scribed by 9 in this group as easier to generate sound with:
comparisons of the force required to create sound from the
instrument were mentioned, with sample-triggering com-
mended for its ability to create a loud sound with little
effort using a diverse set of playing techniques, whereas the
audio-rate setting was referred to as difficult, hard, or re-
quiring too much pressure. There were also 4 non-musicians
(1 who preferred sample-triggering, 3 who preferred audio-
rate) who stated that they noticed very little or no differ-
ence between the two settings and so just went with their
gut feeling.
4.4 Techniques
We performed a further thematic analysis of the video footage,
focusing on identifying the different sets of gestures each
participant used in their right hand during the musical tasks.
These observations are presented in Table 2. Our inter-
est was in comparing the diversity of gesture usage in each
group to identify correlations with their given preference.
From Table 2 we can see that there was no clear dis-
tinction between the groups in terms of overall diversity of
gestures: both used a similar variety and spread of gestures
although the more ‘guitar-like’ techniques (strumming as if
holding a plectrum, finger-style) occurred more frequently
with the guitarists.
5. DISCUSSION
Our findings complicate the notion that ‘a richer instrument
is a better instrument’. The guitarists were unanimous in
their preference for the richer setting, which is unsurprising
as the audio-rate setting more accurately translates the ex-
isting techniques of guitar players to musically meaningful
timbral and dynamic effects. What was less expected was
the ambiguity of response amongst non-musicians, and their
tendency towards the less rich instrument overall.
From the structured interviews we can piece together a
picture of why this difference in opinion might exist: gui-
tarists were able to speak at length of a lack of detail, flat-
tening of nuance and general shortcomings of the triggering
routine. For this group there was a wasted reserve of ges-
tural potential that had no effect on the musical output of
the instrument. Many of the non-musicians, however, were complimentary of the sample-triggering setting for its clar-
ity and strength of sound. This group frequently spoke of
the audio-rate setting as quieter (true for soft playing, although with sufficient energy it could be louder than the sample-triggering setting), more
delicate, and harder to produce a satisfactory sound with.
5.1 Efficiency and richness
If we return to Jordà’s notion of instrument efficiency [11]
we could say that the sample-triggering setting is more effi-
cient than the audio-rate setting. The complexity of musi-
cal input is matched for both settings: both have reasonable
coverage of the techniques used to play guitar, and the phys-
ical layout and dynamic material behaviour of the strings
do not change with the settings, supporting the same base
of gestures. It is in the musical output complexity that the
settings differ, with the sample-triggering setting project-
ing a rich and nuanced set of input gestures to a reduced
set of musical features in the sound output. The audio-rate
setting retains the spectral relationship between input and output, and so requires nuanced control of the strings in order to get a nuanced output, whereas the sample-triggering setting can work with a much lower level of definition at the input: any energy above a certain threshold is transformed into an impulse and the spectral signature of that gesture is disregarded. The bottleneck [10] that the instrument represents to the interaction can be imagined as wider for the audio-rate setting than for sample-triggering: more of the performers’ gestural language can be projected through the instrument. A wider bottleneck might inherently mean lower efficiency: the fact that a greater amount of control input complexity can affect the output means that the performer becomes responsible for that extra level of control.

Table 1: Selected responses from the structured interview.

Guitarists:
  Timbre: “There was a definite tonal difference”
  Amplitude: “with setting 2 you can play soft, and you can play hard”
  Technique: “all the things that I do worked on setting 2 but didn’t necessarily work on setting 1”
  Realism: “It really uses your knowledge of guitar. Compared to setting 1 where the strings are not behaving like strings”
  Behaviour: “Setting 2 was responding to touch much more delicately”

Non-musicians:
  Timbre: “Setting 1 was like listening to a guitar in a concert, but setting 2 was more like listening to something on my computer”
  Amplitude: “Setting 1 was louder and brighter to me”
  Technique: “You need to put more pressure with your hands with setting 2, it’s harder to generate the sounds, while with setting 1 you can just touch the strings and create the sound”
  Realism: “There wasn’t much difference between the two, but with the first setting everything I did made a difference; with the second one everything sounded more fragile in a way.”
  Behaviour: “My tapping didn’t trigger the string; rather, what I was triggering was the sound of the string itself”

Table 2: Analysis of gesture occurrence for each group and each setting, from the gesture analysis of the free improvisation with setting 1 (sample-triggering) and setting 2 (audio-rate). Each cell gives the number of participants who used the gesture under that setting.

                                                              Non-musician   Guitarist
  Setting:                                                      1     2       1     2
  Strumming
    Strumming with a single finger                             14    12       9     4
    Strumming with hand like holding plectrum                   7     6      13    11
    Strumming with multiple fingers                             5     5       2     2
    Rake / ‘flamenco style’ strum                               2     1       1     1
  Plucking
    Slow plucking with fingers / thumb                         16    12      11     1
    Fingerpicking (finger-style)                                8     8      13    11
    Plucking with hand like holding plectrum                    1     3       2     0
  Scratching/Tapping
    Tapping individual strings                                 10     6       8     3
    Tapping multiple strings with flat finger                   6     2       4     3
    Pushing down on strings                                     6     1       2
    Scratching / swiping strings                                2     8       4     3
    Tapping bridge pieces                                       1     2       6     4
  Testing
    Damping strings / attempting to damp strings with palm      6     4       4     2
    Strum / pluck / tap at different points along the string    6     3       4     4
    Plucking while pressing down/muting strings                 1     5       4
    Observably testing triggering threshold                     3   N/A       4   N/A
    Observably testing dynamic range                          N/A     2     N/A     3
This can also be viewed in terms of required effort. Both
settings transfer energy from input to output (the kinetic
energy of the performer to the sound energy of the instru-
ment). Effort relates to the total amount of energy needed
to achieve a result. The sample-triggering setting required
much less physical effort to achieve an equivalent note to the
audio-rate setting. While effortful interaction has been proposed as valuable in DMI design [3], for the non-musicians here the relative lack of effort was perceived as attractive.
5.2 Gestural behaviour
The similar spread in gestures between both groups, along-
side the stark difference in opinion in terms of setting pref-
erence, shows how the affordances and constraints of the
instrument were found in a similar manner for both groups,
but meant different things to them depending on experience.
The non-musicians were concerned with efficiency of sound
production and the easiest way to generate a note. With
the sample-triggering setting they were quickly drawn to the
pull effects of the instrument – the places where they were
enabled in their control [23]. With the audio-rate setting the
instrument had a different affordance structure which, for
many in this group, manifested as push effects – they were
forced to use a particular set of gestures, requiring more nuance and effort than they could muster, to achieve a satisfactory musical result.
5.3 Learning curves
For the experienced guitarists, who are used to navigating input complexity, there is no advantage in reducing it; in fact, many of the guitarists commented negatively on the lack of output complexity from the sample-triggering setting. In the case
of the non-musicians there does however seem to be a use in
reducing the output complexity (and hence increasing the
efficiency of the instrument). This could be partly due to
the learning curve that each instrument has. Whereas the
guitarists already know a large amount of techniques that
can be used to play the guitar, and so are able to quickly
make sense of the audio-rate setting, non-musicians want a
more direct route to producing musically satisfying sound
with the instrument and so prefer the faster learning curve
of the more efficient sample-triggering. We can consider the
switch between settings as a kind of ‘complexity manage-
ment’ [20] that provides non-musicians with a shortcut to
taking part in a musical activity as a performer.
5.4 Study design
Our observations may be affected by the fact that Setting 1 always came first. Guitarists complained about the limitations of Setting 1 before knowing that a richer setting was coming later in the study, which supports our findings. With the non-musicians, some of the effects could be explained by the fact that they were already familiar with Setting 1 when they switched to Setting 2. Differences in musical expertise (from Gold-MSI scores) within the two groups are another point that could benefit from further investigation.
6. CONCLUSIONS
In this paper we explored the notion of control intimacy and
how it relates to instrumental expertise and prior experi-
ence. We set up a study to explore the effects of increasing
the ‘richness’ of an instrument for both experienced mu-
sicians and novices. Our results support the notion that
for experienced musicians, richer instruments are preferable
and more suitable for performance. This is possibly due to
the preservation of the full spectrum of gestures that have
a meaningful effect on the musical output. When designing
for non-musicians, however, the role of richness is less clear.
There was a greater spread in overall preference for the two
instrument variations, with a tendency towards the less rich
version. If we view this variation as the more musically efficient of the two settings, especially in the hands of a non-musician, our findings suggest that an instrument which requires less physical and mental effort from the player can lead to more enjoyable experiences for beginners, even if it offers a more restricted space of possibilities.
In this paper we have only analysed the ‘first contact’ a
performer has with an instrument rather than a longitudinal
evolution of performer and instrument. Going back to the
notion of ‘complexity management’ and the value of learn-
ers taking part in more informal, less technique-oriented
musical situations, we might find a compelling argument for introducing similar methods of adjusting richness on future instruments to ease non-musicians into musical contexts as performers. Future studies on this subject
might therefore incorporate a longitudinal approach and a
sliding scale of control intimacy.
7. ACKNOWLEDGMENTS
We would like to extend our gratitude to Ailish Underwood
for crafting the guitar body and to the participants that
took part in this study. This work is supported by EPSRC
under grants EP/N005112/1 (Design for Virtuosity) and
EP/G03723X/1 (DTC in Media and Arts Technology).
8. REFERENCES
[1] R. Michon, J. O. Smith, M. Wright, and C. Chafe. Augmenting the iPad: the BladeAxe. In Proc. NIME, 2016.
[2] J. R. Anderson. Acquisition of cognitive skill.
Psychological review, 89(4):369, 1982.
[3] P. Bennett, N. Ward, S. O’Modhrain, and P. Rebelo.
Damper: a platform for effortful interface
development. In Proc. NIME, 2007.
[4] C. Dobrian and D. Koppelman. The ‘E’ in NIME:
musical expression with new computer interfaces. In
Proc. NIME, 2006.
[5] U. Eversheim and O. Bock. Evidence for processing
stages in skill acquisition: a dual-task study. Learning
& Memory, 8(4):183–189, 2001.
[6] P. M. Fitts and M. I. Posner. Human Performance. Brooks/Cole, 1967.
[7] A. Gentile. Skill acquisition: Action, movement, and
neuromotor processes. Movement science, pages
111–187, 2000.
[8] J. Harrison, R. H. Jack, F. Morreale, and
A. McPherson. When is a guitar not a guitar?
cultural form, input modality and expertise. In Proc.
NIME, 2018.
[9] B. Hommel. Acquisition and control of voluntary
action. Voluntary action: Brains, minds, and
sociality, pages 34–48, 2003.
[10] R. H. Jack, T. Stockman, and A. McPherson. Rich
gesture, reduced control: the influence of constrained
mappings on performance technique. In Proc. MOCO,
2017.
[11] S. Jordà. Digital instruments and players: part I – efficiency and apprenticeship. In Proc. NIME, 2004.
[12] P.-J. Maes, M. Leman, C. Palmer, and M. M.
Wanderley. Action-based effects on music perception.
Frontiers in psychology, 4, 2013.
[13] T. Magnusson. Designing constraints: Composing and
performing with digital musical systems. Computer
Music Journal, 34(4):62–73, 2010.
[14] A. McPherson and V. Zappi. An environment for
submillisecond-latency audio and sensor processing on
beaglebone black. In Proc. AES, 2015.
[15] G. E. McPherson and J. McCormick. Motivational
and self-regulated learning components of musical
practice. Bulletin of the Council for Research in
Music Education, pages 98–102, 1999.
[16] A. Momeni. Caress: An enactive electro-acoustic
percussive instrument for caressing sound. In Proc.
NIME, 2015.
[17] F. R. Moore. The dysfunctions of MIDI. Computer
Music Journal, 12(1):19–28, 1988.
[18] D. Müllensiefen, B. Gingras, J. Musil, and L. Stewart. The musicality of non-musicians: an index for assessing musical sophistication in the general population. PLoS ONE, 9(2):e89642, 2014.
[19] L. Nijs, M. Lesaffre, and M. Leman. The musical
instrument as a natural extension of the musician. In
Proc. Interdisciplinary Musicology, 2009.
[20] L. S. Pardue. Violin Augmentation Techniques for
Learning Assistance. PhD thesis, Queen Mary
University of London, 2017.
[21] D. Schlessinger and J. O. Smith. The Kalichord: A
physically modeled electro-acoustic plucked string
instrument. In Proc. NIME, 2009.
[22] B. Sturm, J. F. Santos, and I. Korshunova. Folk music
style modelling by recurrent neural networks with
long short term memory units. In Proc. ISMIR, 2015.
[23] K. Tuuri, J. Parviainen, and A. Pirhonen. Who
controls who? embodied control within
human–technology choreographies. Interacting with
Computers, pages 1–18, 2017.
[24] D. Wessel. An enactive approach to computer music
performance. Le Feedback dans la Creation Musical,
pages 93–98, 2006.
[25] D. Wessel and M. Wright. Problems and Prospects for
Intimate Musical Control of Computers. Computer
Music Journal, 26(3):11–14, 2002.
We present the Kalichord: a small, handheld electro/acoustic instrument in which the player's right hand plucks virtual strings while his left hand uses buttons to play independent bass lines. The Kalichord uses the analog signal from plucked acoustic tines to excite a physical string model, allowing a nuanced and intuitive plucking experience. First, we catalog instruments related to the Kalichord. Then we examine the use of analog signals to excite a physical string model and discuss the expressiveness and form factors that this technique affords. We then describe the overall construction of the Kalichord and possible playing styles, and finally we consider ways we hope to improve upon the current prototype.