The Hyper-Hurdy-Gurdy
Luca Turchet
Department of Automatic Control
KTH Royal Institute of Technology
turchet@kth.se
ABSTRACT
This paper describes the concept, design, implementation, and evaluation of the Hyper-Hurdy-Gurdy, an augmentation of the conventional hurdy-gurdy musical instrument. The augmentation consists of enhancing the instrument with different types of sensors and microphones, as well as with novel types of real-time control of digital effects during the performer's act of playing. The added technology is conveniently located and does not hinder the acoustic use of the instrument. Audio and sensor data processing is accomplished by an application coded in Max/MSP and running on an external computer. Such an application also allows the instrument to be used as a controller for digital audio workstations. On the one hand, the rationale behind the development of the instrument was to provide electro-acoustic hurdy-gurdy performers with an interface able to achieve radically novel types of musical expression without disrupting the natural interaction with the traditional instrument. On the other hand, this research aimed to provide composers with a new instrument enabling them to explore novel pathways for musical creation.
1. INTRODUCTION
In recent years numerous examples of so-called "hyper instruments" or "augmented instruments" have been developed [1, 2]. These are conventional acoustic instruments enhanced with sensor and/or actuator technology, and digital signal processing techniques, which serve the purpose of extending the sonic capabilities offered by the instrument in its original version. By acting on the sensors, the performer can control the production of electronically generated sounds that complement, or modulate, the sounds acoustically generated by the instrument. The attention of builders of such instruments has focused on the augmentation of various types of acoustic instruments (e.g., violin [3–5], cello [6], saxophone [7], flute [8], trumpet [9], guitar [10–12], and piano [13]), including traditional ones (e.g., the uilleann pipes [14], the sitar [15], the Tibetan singing bowl [16], the zampogna [17], or the great highland bagpipe [18]).
Copyright: © 2016 Luca Turchet et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
This paper presents the augmentation of an instrument typical of the musical tradition of many European countries: the hurdy-gurdy. To the author's best knowledge, this challenge had not been faced prior to this work. The hurdy-gurdy is one of the few instruments that can boast not only centuries of history, but also a tradition uninterrupted since the Middle Ages. When the hurdy-gurdy was born, presumably in the Middle Ages (its ancestor was called the "organistrum"), it was certainly one of the most technologically advanced musical instruments of its epoch. Over the course of its history the instrument was subjected to several technical improvements [19]. In particular, in recent decades innovative instrument makers have made many improvements and additions to the instrument in response to the needs of hurdy-gurdy players wishing to overcome the technical limitations of the traditional instrument and to extend its sonic possibilities. More strings, as well as systems to easily change their intonation, were added, so that the performer could play in a wider range of tonalities compared to that offered by the traditional version of the instrument. Furthermore, the instrument was enhanced with microphones and entered the realm of electro-acoustic instruments.
The author's artistic reflection on the development of an augmented hurdy-gurdy started from these considerations on the history of the instrument and aimed at continuing this developmental path. The main objective of this research project was to provide the hurdy-gurdy with additional possibilities that allow novel forms of musical expression, while at the same time avoiding the disruption of the natural interaction occurring between the player and the instrument.
In Section 2 a brief description of the hurdy-gurdy is pro-
vided to render this paper more intelligible to those unfa-
miliar with the instrument.
2. HURDY-GURDY DESCRIPTION
The hurdy-gurdy (see Figure 1) is a stringed musical instrument whose sound is produced by turning a crank that controls a wheel rubbing against the strings. Such a wheel is covered with rosin and functions much like a continuous violin bow. The vibration of the strings is made audible thanks to a soundboard. Melodies are played on a keyboard that presses small wedges against one or more strings (called "chanterelles") to change their pitch. Moreover, hurdy-gurdies have multiple "drone strings", which provide a constant-pitch accompaniment to the melody.
Each of the strings can easily be put into or removed from contact with the wheel.

Figure 1. An exemplar of an electro-acoustic hurdy-gurdy with the indications of its main components (chanterelles, drones, trompettes, chien, sympathetic strings, keyboard box, headstock) and the identified sensor positions (S1–S4, Acc).
Hurdy-gurdies are able to provide percussive sounds produced by means of one or more buzzing bridges. These bridges, called "chiens", act like a sort of hammer with a tail and a free end, and are placed under one or more drone strings called "trompettes". The tail of each chien is inserted into a narrow vertical slot that holds it in place, while its free end rests on the soundboard and is more or less free to vibrate. It is precisely the vibration of the free ends of the chiens that produces the unmistakable percussive sound of the instrument: when the wheel is turned slowly, the pressure on the trompette strings holds the chien in place, sounding a drone, while when the crank is accelerated, the hammer lifts up and vibrates against the soundboard, producing the characteristic buzzing noise. Such a buzz is used as an articulation or to provide rhythmic percussive effects.
Recently a new model of electro-acoustic hurdy-gurdy has been crafted by the luthier Wolfgang Weichselbaumer 1 (see Figure 1). One of its many novelties lies in the complex system of six embedded microphones, placed in as many parts of the instrument in order to track the sound of each component: one piezo-electric microphone is placed under the buzzing bridge and mainly detects the contribution of the trompette strings to the instrument's sound; one piezo-electric microphone is placed under the wooden part where the drones are positioned and mainly tracks their contribution; one piezo-electric microphone is placed under the bridge of the chanterelles and mainly tracks their contribution; two piezo-electric microphones are placed in correspondence with the two sets of sympathetic strings and mainly track their contribution; and one small omnidirectional microphone is placed near the chanterelle bridge and tracks the overall acoustic sound of the instrument. Each of the five contact microphones is able to track with high precision the richness of the sound of the corresponding component. This microphone system is at the basis of the augmentation presented in this paper.
1 http://www.weichselbaumer.cc/
3. MAIN CONCEPTS
The first step towards the goal of augmenting the hurdy-gurdy into a novel interface for musical expression, capable of opening radically new paths for composition and performance, consisted in determining the needs and conditions that the new instrument had to meet. This research started from the author's questioning of his personal needs, as a performer, to extend the sonic possibilities of the instrument and overcome its limitations when used in conjunction with the most widespread current technologies for sound processing. These needs resulted in the following requirements.
The first requirement consisted of enhancing the instrument without physically modifying it, for instance with holes, carvings, or attached pieces of wood: the technology had to be easy to put on and remove, and the instrument had to remain playable in the normal acoustic way, if desired.
The second requirement was to augment the instrument in such a way that the conventional set of gestures used to play it would remain unaltered: the instrument had to keep working in the conventional way after the augmentation. For this purpose, the way of playing the instrument was analyzed in order to identify the possible set of new gestures that a performer could perform on the instrument without interfering with the natural act of playing. The right hand immediately appeared to be the most difficult to act on. This was due to the complexity of tracking the quick and subtle movements (especially small variations in acceleration) of the wheel, wrist, and fingers while turning the crank. A solution was attempted by attaching accelerometers to the wrist, but the tracking proved not to be optimal due to accuracy and latency issues. A possible solution to track the wheel would have been to insert magnets into it and leverage the Hall effect. However, these solutions would have required the performer either to wear sensors (e.g., wireless bracelets, or wireless boards with embedded accelerometers), which would have been perceived as obtrusive, or to carve grooves into the wheel to hold the magnets and to cope with cumbersome cables placed on the instrument: this not only would have limited the ease of playing and even of moving the instrument, but would also have affected the robustness of the added technology. For these reasons, the research focused on tracking the left-hand gestures and the orientation of the instrument.
The third requirement consisted of limiting as much as possible the performer's unwanted interactions with the technology added to the instrument other than the sensors. This resulted in reducing to a minimum the number and length of the involved wires, in hiding the technology as much as possible inside the instrument, and in adopting wireless solutions.
The fourth requirement was to allow hurdy-gurdy performers to achieve unprecedented sound modulations. In the first place, this consisted of enabling strict control of a sound effect at the note level. Indeed, by means of current technologies a hurdy-gurdy performer can use an effect (e.g., a delay) to control the sound modulation of a whole musical sentence, but cannot apply that particular effect to a single note of the musical sentence while keeping the other notes unaffected. In the second place, the augmentation had to provide the possibility of modulating separately the sound produced by the various components of the instrument (see Section 2). This was only possible by involving a set of microphones and a palette of signal processing algorithms capable of detecting and isolating such components. In the third place, performers had to be able to avail themselves of sound effects specifically built for the various components of the instrument, allowing them to transcend the physical limitations of the instrument itself. For instance, smooth and long glissandi and bendings are not possible on the traditional instrument. Analogously, the frequency of a drone could be modulated to add some vibrato (something not possible on the conventional instrument since the drones are not pressed by the fingers), or the sound of a single chanterelle could be transformed into a bi-chord.
4. DESIGN
4.1 New gestures identification
The design process started with the identification of a new
possible set of gestures that could be reasonably added to
the normal playing technique without disrupting it. The
most important of these are the following:
while playing the chanterelles by means of the fin-
gers acting on the keys of the keyboard, the thumb is
normally free and can be exploited to press an area
of the keyboard or slide upon it;
the pinkie can be used to press a key, the index to press an area of the keyboard, and the thumb to press another area of the instrument placed at an even larger distance from the keyboard;
when the fingers are not involved in acting on the
keys (e.g., when chanterelles are used to produce
their sound as open strings, or when sympathetic
strings are plucked) the left hand is totally free and
different fingers could press/slide on various areas of
the instrument even very far from the keyboard;
all these newly added gestures, as well as those of the conventional playing technique, can be performed while simultaneously tilting the whole instrument up and down or back and forth.
4.2 Hardware technology identification and placement
The technology involved in the augmentation (in addition to the set of embedded microphones already present) was designed to consist of sensors used to track the set of new gestures and a microcontroller board for the digital conversion of the sensors' analog values. Three types of sensors could be involved:
pressure sensors, to track pressure of the fingers on
an area of the instrument;
ribbon sensors, to track the position of the fingers on
an area of the instrument;
accelerometers to track the tilting of the instrument.
A first design choice was that the pressure and ribbon sensors had to cover relatively wide areas in order to achieve optimal accessibility. The use of strip-shaped sensors of various lengths was considered the optimal choice for this purpose. A second design choice was to place a ribbon sensor on top of a pressure sensor in order to detect simultaneously the pressure force exerted by the finger and its position on a certain part of the instrument. The microcontroller board was required to be as small as possible, in order to be placed easily on the instrument, and to have wireless connectivity, in order to avoid a cable connecting it to the external computation unit responsible for processing both the microphone and sensor signals.
The number and placement of the identified sensors and microcontroller board represented a challenging problem due to the complexity of the shape of the hurdy-gurdy, the hardware limitations of the sensors themselves, and the requirement of keeping the natural interaction of the player with the instrument unaltered. Four pairs of pressure-ribbon sensors and one 3-axis accelerometer were chosen. The four pairs of sensors were placed on top of the keyboard box (see "S1" in Fig. 1); at the side of the keyboard box (see "S2" in Fig. 1); on the top of the headstock (see "S3" in Fig. 1); and on the bottom of the headstock (see "S4" in Fig. 1). These positions were chosen because they are easy to reach with the fingers and because they interfere neither with the normal way of playing nor with the functioning of the various components of the instrument. The best position for the accelerometer was identified to be the interior part of the headstock (see "Acc" in Fig. 1), since it did not interfere with the placement of the other sensors and could easily be attached to the instrument. The best position for the microcontroller board was also identified as the space behind the headstock. This choice was motivated by the fact that the wires coming out of the sensors could reach the board easily, over the shortest distance, and without interfering with the functioning of the various components of the instrument. In addition, in that position the board was hidden from sight and, above all, naturally protected from unwanted collisions.
4.3 Mapping strategies
The design for the interactive control of the developed instrument was based on the extraction of features both from the data captured by the sensors and from the acoustic waveforms captured by the microphones. A set of mapping strategies between the performer's gestures and the sound production was investigated. It was important to define mappings that were intuitive to the performer and that took into account electronic, acoustic, ergonomic, and cognitive limitations. In order to decide on a particular setup, many questions needed to be answered, such as, for instance, how many parameters of a sound effect the performer would be able to control simultaneously, or how long a performer would need to practice to become comfortable with a particular setup.
The hurdy-gurdy is an instrument with an intrinsically high level of affordance as far as features suitable for the control of digital sound production are concerned. It can be used as a percussive, melodic, and accompanying instrument, and from all of these characteristics it is possible to derive a variety of potential controls by extracting acoustic features from the sound captured by the microphones. These controls can be used in conjunction with those resulting from the interaction with the sensors.
The first step in the mapping design process consisted of associating each pair of sensors with a component of the instrument. The sensors placed in positions S1, S2, S3, and S4 indicated in Fig. 1 were mainly used to control the sound captured by the microphones of the chanterelles, trompettes, sympathetic strings, and drones respectively. Nevertheless, such associations could be changed so that the same pair of sensors could control more than one instrument component, or, vice versa, more than one pair of sensors could control a single instrument component. The second step consisted of defining the mappings between the performer's gestures on the sensors and the parameters of the algorithms selected for the various sound effects. These mappings were carefully designed to allow a good integration of both the acoustic and electronic components of the performance, resulting in an electronically augmented acoustic instrument that is respectful of the hurdy-gurdy tradition.
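As an illustration of such reconfigurable associations, the following Python sketch shows one way the sensor-to-component assignments described above could be represented in software. This is not the author's Max/MSP implementation; the data structure and function names are purely illustrative.

```python
# Illustrative sketch (not the author's Max/MSP patch): a reconfigurable
# association between sensor pairs and instrument components.

DEFAULT_ASSOCIATIONS = {
    "S1": ["chanterelles"],
    "S2": ["trompettes"],
    "S3": ["sympathetic_strings"],
    "S4": ["drones"],
}

def reassign(associations, sensor, components):
    """Let one sensor pair control one or more instrument components."""
    updated = dict(associations)
    updated[sensor] = list(components)
    return updated

# Example: the S1 pair now controls both the chanterelles and the drones.
custom = reassign(DEFAULT_ASSOCIATIONS, "S1", ["chanterelles", "drones"])
```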
5. IMPLEMENTATION
5.1 Hardware
The designed augmentation was achieved at the hardware level by using the FSR 408 Strip Force Sensing Resistor 2 pressure sensors manufactured by Interlink Electronics, the SoftPot 3 ribbon sensors manufactured by Spectra Symbol, and the x-OSC 4 microcontroller board manufactured by x-io Technologies Limited.
In each of the four ribbon-pressure sensor pairs, the ribbon sensor was attached, thanks to its adhesive film, on top of the pressure sensor in order to create a single device capable of providing simultaneous information about the position and pressure of the finger interacting with it. The pressure sensor was in turn attached, thanks to its adhesive film, to a rigid plastic support, which was appropriately cut to match the size of the sensors. This support was used for two reasons. The first was that placing the sensors directly on the instrument did not allow optimal tracking of the forces and positions exerted by the fingers on the sensors, because in some cases (e.g., the keyboard box) the wood could slightly move up and down, and a more rigid, homogeneous, and stable base was needed. The second was that, thanks to the support, the created device could easily be attached to or removed from the instrument. In order to avoid ruining the wooden parts of the acoustic instrument, a strip of specific low-impact adhesive tape was placed on the part of the instrument where the plastic support was attached.

2 http://www.interlinkelectronics.com/FSR408.php
3 http://www.spectrasymbol.com/potentiometer/softpot
4 http://www.x-io.co.uk/products/x-osc/

Figure 2. The developed Hyper-Hurdy-Gurdy.

Figure 3. The placement of the wireless microcontroller board on the instrument.
The x-OSC board was selected for its features: small size, on-board sensors (including a 3-axis accelerometer), and wireless transmission of the sensor data over WiFi, with low latency (i.e., 3 ms [20]) and via Open Sound Control messages 5. Figures 2 and 3 illustrate the position of the sensors and the microcontroller board on the developed instrument.
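To give a concrete picture of this data flow, the Python sketch below shows how wireless sensor data arriving over OSC could be received on the computer side. The author's processing ran in Max/MSP, and the OSC address patterns and argument layouts used here are assumptions that should be checked against the x-OSC documentation.

```python
# Minimal sketch of receiving wireless sensor data over OSC, e.g. from an
# x-OSC board. The address patterns below are placeholders.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_analog_inputs(address, *values):
    # Hypothetical handler: values would carry the pressure/ribbon readings.
    print(address, values)

def on_accelerometer(address, x, y, z):
    # Hypothetical handler for the 3-axis accelerometer data.
    print("tilt:", x, y, z)

dispatcher = Dispatcher()
dispatcher.map("/inputs/analog", on_analog_inputs)       # placeholder address
dispatcher.map("/imu/accelerometer", on_accelerometer)   # placeholder address

server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
server.serve_forever()
```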
5.2 Software
As far as the software is concerned, the Max/MSP 6 sound synthesis and multimedia platform was utilized. An application was coded to implement the designed sound effects and mappings by analyzing and processing both the sounds detected by the microphones embedded in the instrument and the data gathered from the sensors.
5 http://www.opensoundcontrol.org/
6 http://www.cycling74.com/

The first issue encountered was that the microphones were not effective in detecting each of the components of the instrument separately. For instance, the sound produced by the drones was partly detected by the microphones of the chanterelles; similarly, the microphone of the trompettes also detected the sound of the chanterelles. A complete isolation of such components is not possible in an acoustic instrument such as the hurdy-gurdy, since the vibrations produced by one component propagate throughout the instrument and are detected by contact microphones or external microphones placed in any part of the instrument. Therefore, signal processing techniques were needed to isolate as much as possible the sound of each component in order to process it separately. For instance, a low-pass filter was applied to the input signal coming from the microphone of the drones in order to limit the amount of signal resulting from playing the chanterelles. Vice versa, a high-pass filter was applied to the signal coming from the contact microphone placed on the chanterelle bridge to limit both the low frequencies produced by the drones and the noise resulting from pressing the keys. Ad hoc signal processing algorithms were also implemented for analyzing the captured acoustic waveforms in order to achieve particular sound effects. For example, to extract only the buzzing noise component from the sound produced by the trompettes, a signal gate was used, activated according to a threshold set on the sound amplitude. The specific research challenge in using all the algorithms for processing the captured acoustic waveforms was that of finding the combination of the algorithms' parameters yielding the best result.
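The Python sketch below illustrates, in an offline and simplified form, the kind of filtering and gating just described. The actual system ran in real time in Max/MSP; the cutoff frequencies, threshold, and window length below are placeholder values, not the parameters used in the instrument.

```python
# Schematic offline illustration of per-component isolation: low-pass and
# high-pass filtering of microphone signals, plus an amplitude gate to keep
# only the buzzing-noise component. All numeric values are placeholders.
import numpy as np
from scipy.signal import butter, lfilter

FS = 44100  # sample rate in Hz

def lowpass(x, cutoff_hz, order=4):
    b, a = butter(order, cutoff_hz, btype="low", fs=FS)
    return lfilter(b, a, x)

def highpass(x, cutoff_hz, order=4):
    b, a = butter(order, cutoff_hz, btype="high", fs=FS)
    return lfilter(b, a, x)

def amplitude_gate(x, threshold, window=512):
    """Keep only the portions whose short-term amplitude exceeds a threshold,
    e.g. to isolate the buzzing-noise component of the trompettes signal."""
    envelope = np.convolve(np.abs(x), np.ones(window) / window, mode="same")
    return np.where(envelope > threshold, x, 0.0)

# Usage sketch (signals and values are hypothetical):
# drones_clean = lowpass(drone_mic, 400.0)        # attenuate chanterelle bleed
# chant_clean = highpass(chanterelle_mic, 300.0)  # attenuate drones and key noise
# buzz_only = amplitude_gate(trompette_mic, 0.2)  # keep buzzing noise only
```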
Furthermore, the impulsive variations in the acceleration tracked by the accelerometer, resulting from the hits on the crank made to produce the buzzing noises, needed to be excluded. To solve such issues, various mean filters, median filters, and low-pass filters were applied. These processing techniques were effective in smoothing the rapid variations occurring in the signal. However, their application had the side effect of introducing latency. Therefore, a large amount of the research consisted in finding the right values for the parameters of such filters in order to achieve the best tradeoff between tracking accuracy and the latency of the response produced by the filters.
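As a rough sketch of this smoothing stage, the Python code below combines a median filter with a moving-average filter on an accelerometer stream. The window lengths are placeholder values; as noted above, longer windows reject spikes better but increase the latency of the response.

```python
# Sketch of smoothing an accelerometer stream to reject the impulsive
# variations caused by the crank hits. Window lengths are placeholders.
import numpy as np
from scipy.signal import medfilt

def smooth_acceleration(samples, median_window=9, mean_window=8):
    samples = np.asarray(samples, dtype=float)
    # Median filter (odd window) removes short spikes such as crank hits.
    despiked = medfilt(samples, kernel_size=median_window)
    # Moving-average (mean) filter smooths the remaining rapid variations.
    smoothed = np.convolve(despiked, np.ones(mean_window) / mean_window,
                           mode="same")
    return smoothed

# The group delay of the mean filter is roughly (mean_window - 1) / 2 samples,
# which is the latency cost mentioned in the text.
```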
Once a good tracking of both the performer's gestures and the sounds of the instrument components was achieved, several mappings were implemented. Examples of these are the following 7 (a schematic sketch of these mappings is given after the list):
The volume of a sound effect was mapped to the amount of pressure exerted by a finger on a pressure sensor, such that when the sensor was not pressed the effect was not activated, and when it was pressed the presence of the effect could be modulated individually for each note.
The sliding of the finger on a ribbon sensor was mapped to the amount of frequency transposition in a pitch-shifting algorithm, such that a glissando effect could be produced.
The combination of the pressure and ribbon sensors used in the previous two mappings resulted in a glissando effect whose activation depended on the presence of the finger on the sensor, whose frequency transposition depended on the finger position, and whose volume depended on the amount of exerted pressure force.
7 A comprehensive list of audio-visual examples of the implemented mappings is available at: https://www.youtube.com/watch?v=9c1QFg2bG9w
The amount of up-down or back-forth tilting tracked by the accelerometer was mapped to the activation of an effect: when the amount of tilting exceeded a certain threshold the effect was activated. Using the tilt as a switch for an effect rather than as a continuous control was due to the fact that large displacements from the normal position of the instrument could be tracked more easily and were subject to fewer variations. Indeed, the rapid and strong movements produced while playing the hurdy-gurdy with the buzzing noise of the trompettes could lead to impulsive variations in the signal acquired by the accelerometer, which would not be well suited to continuous control.
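The schematic Python sketch referenced above summarizes these mappings. The originals were Max/MSP patches; the ranges, threshold, and the small hysteresis added to the tilt switch are illustrative assumptions rather than the values used in the instrument.

```python
# Schematic sketch of the listed mappings; all ranges and thresholds are
# illustrative placeholders, not the instrument's actual parameters.

def pressure_to_effect_volume(pressure):
    """Pressure sensor (0..1) -> wet level of a sound effect.
    No pressure means the effect is bypassed, so it can be applied per note."""
    return 0.0 if pressure <= 0.0 else min(pressure, 1.0)

def ribbon_to_pitch_shift(position, max_semitones=12.0):
    """Ribbon position (0..1) -> transposition in semitones (glissando)."""
    return position * max_semitones

def tilt_switch(tilt, state, threshold=0.5, hysteresis=0.1):
    """Accelerometer tilt -> on/off switch for an effect. The hysteresis is
    an added assumption to avoid repeated toggling around the threshold."""
    if not state and abs(tilt) > threshold + hysteresis:
        return True
    if state and abs(tilt) < threshold - hysteresis:
        return False
    return state

def glissando(pressure, position):
    """Combined mapping: activation from finger presence, transposition from
    finger position, volume from pressure."""
    if pressure <= 0.0:
        return None  # finger lifted: effect inactive
    return {"semitones": ribbon_to_pitch_shift(position),
            "wet": pressure_to_effect_volume(pressure)}
```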
Moreover, a variety of mappings were defined on the basis of algorithms used to spatialize virtual sound sources along two-dimensional and three-dimensional trajectories in the presence of multichannel surround sound systems. For this purpose, the facilities offered by the "Ambisonic Tools for Max/MSP" [21] were used.
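Purely as an illustration of how such trajectories could be driven from gesture data, the sketch below generates a circular two-dimensional trajectory and transmits it over OSC. The actual rendering used the Ambisonic Tools for Max/MSP; the OSC address and host used here are placeholders, not the message format of those tools.

```python
# Sketch of driving a circular 2-D trajectory for a virtual sound source and
# sending it over OSC to a spatialization engine (placeholder address/port).
import math
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9001)  # placeholder host/port

def circular_trajectory(phase, radius=1.0):
    """Map a normalized phase (0..1), e.g. derived from a ribbon position,
    to x/y coordinates on a circle around the listener."""
    angle = 2.0 * math.pi * phase
    return radius * math.cos(angle), radius * math.sin(angle)

def update_source(source_index, phase):
    x, y = circular_trajectory(phase)
    client.send_message(f"/source/{source_index}/xy", [x, y])  # placeholder address
```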
Finally, additional mappings were implemented to control various sound effects, synthesizers, loops, and virtual instruments available in the Logic Pro X 8 and Ableton Live 9 digital audio workstations. For this purpose, Max/MSP applications as well as Max for Live devices were implemented, in which the sensor data were processed and converted into MIDI messages.
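A minimal sketch of this kind of conversion is given below in Python, assuming the mido library with a backend such as python-rtmidi; the author's implementation used Max/MSP and Max for Live devices, and the controller number and scaling here are placeholders.

```python
# Sketch of converting a normalized sensor value into a MIDI control-change
# message for a DAW. Port, channel and controller number are placeholders.
import mido

out_port = mido.open_output()  # opens the default MIDI output port

def sensor_to_cc(value, controller=1, channel=0):
    """Map a normalized sensor value (0..1) to a 7-bit MIDI CC message."""
    cc_value = max(0, min(127, int(round(value * 127))))
    out_port.send(mido.Message("control_change",
                               channel=channel,
                               control=controller,
                               value=cc_value))

# Example: a pressure reading of 0.5 becomes CC#1 with value 64.
sensor_to_cc(0.5, controller=1)
```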
6. EVALUATION
The developed instrument was subjected to extensive tests aimed at validating the implemented augmentation from the technological and expressive standpoints. In addition to the author's own evaluation, the Hyper-Hurdy-Gurdy was tested by Johannes Geworkian Hellman 10, a well-known hurdy-gurdy performer and virtuoso. The testing session was conducted in an acoustically isolated room of the KMH Royal College of Music in Stockholm and lasted about one hour. The setup consisted of the developed Hyper-Hurdy-Gurdy configured to have all sensors mapped to at least one parameter of a sound effect, a soundcard (Fireface UFX), two loudspeakers (Genelec 8050B Studio Monitor), and a laptop (MacBook Pro) running the software applications described in Section 5.2.
The session consisted of three parts, which took about 10, 35, and 15 minutes respectively. In the first part the performer was asked to interact with the instrument without receiving any information about the added technology. This procedure was adopted in order to assess the very first approach to the instrument. During this part, only the four pairs of sensors were explored. The mappings related to the accelerometer were not discovered; this was due to the fact that the instrument was not tilted and that the accelerometer, unlike the other sensors, was not visible. The associations between the sensors and the corresponding controlled components of the instrument were all identified and understood.

8 http://www.apple.com/logic-pro/
9 http://www.ableton.com/
10 http://www.johannesgeworkianhellman.com/
In the second part the various sensors and mappings were explained, and questions were asked regarding the appropriateness of the sensors' positions, the intuitiveness of the mappings involved, and the effectiveness of the types of sound effects utilized. The performer reported that he appreciated the fact that the sensors were placed ergonomically, were easy to reach while playing normally, and did not require too much force to be activated. Moreover, very positive comments were made about the effectiveness of all the implemented mappings, and in particular about the appropriateness and accuracy of all the involved parameter ranges. One of the most relevant comments was "With this instrument I can easily apply and control an effect to each note I produce, so now I can do things that I could not achieve with the controls for the effects I normally use." Interestingly, some comments revealed the need for some discrete controls in addition to the continuous ones present.
In the third part, the performer was asked to play the instrument, taking advantage of the new affordances offered by the instrument and exploring the novel possibilities for improvisation. As one would expect, a final comment was "I think one would need a lot of exploration and experience to learn how to really use these new possibilities". Nevertheless, overall his feedback was very positive and confirmed the soundness of the author's design choices.
7. HYPER-HURDY-GURDY IN LIVE
PERFORMANCE
The Hyper-Hurdy-Gurdy has been used for musical creation and performance purposes. It was premiered at the Audiorama concert venue in Stockholm in April 2015, where a 21-channel composition for solo Hyper-Hurdy-Gurdy, named "Incantesimo", was performed. Subsequently, various pieces were composed and performed by the author both as a soloist and in chamber ensembles. Videos documenting a technical demonstration of the Hyper-Hurdy-Gurdy and its usage in live performances are available on the author's personal website 11. Those live performances constitute the final validation of the developed instrument.
8. DISCUSSION AND CONCLUSIONS
On the one hand, the rationale behind the development
of the instrument was to provide hurdy-gurdy performers
with an interface able to achieve novel types for musical
expression without disrupting the natural interaction with
the traditional instrument. On the other hand, this research
aimed to enable composers with a new instrument capable
11 www.lucaturchet.it
of allowing them to explore novel pathways for musical
creation. The proposed research resulted in an augmented
instrument suitable for the use in both live performance,
improvisation, and composition contexts. Novel timbres
and forms of performer-instrument interactions were achi-
eved, which resulted in an enhancement of the conven-
tional electro-acoustic performances as well as in a variety
of new compositional possibilities.
This augmentation of the traditional hurdy-gurdy originated from two of the author's passions and interests: traditional instruments and music technology. The development of the Hyper-Hurdy-Gurdy and the compositions for it represent the author's challenge of combining these two distant worlds. This research was motivated by the author's need to investigate new paths for individual musical expression, as well as to investigate how to advance the possibilities for music creation with the hurdy-gurdy and the electronics normally associated with it. At the conclusion of the project, it is the author's opinion that the developed instrument is effectively capable of responding to such needs. Undoubtedly, these needs are also shared by many musicians and composers who constantly search for novel tools and ideas for their artistic work. However, in the author's vision, completely novel paths are not practically possible with the current conventional acoustic and electro-acoustic hurdy-gurdies, since basically all the expressive possibilities available with them have already been investigated. With the introduction of a novel generation of Hyper-Hurdy-Gurdies, the possibilities for entirely novel musical research paths are countless, and revolutionary approaches to composition and improvisation can be explored. The pieces that the author composed and performed might be considered as a proof of these statements.
As far as future work is concerned, the author envisions various possibilities for extending the results of this project. First of all, collaboration with an instrument maker would be beneficial in order to craft from scratch a hurdy-gurdy with the sensors embedded in it. Secondly, different types as well as a larger number of sensors could be added. In particular, a set of small and fully configurable buttons and knobs placed onto the instrument would be useful to change presets of sound effects and/or mappings: this would avoid the use of external tools dedicated to this purpose, such as foot pedals. Furthermore, an actuated system could be added in a way similar to that proposed for the actuated violin presented in [22] or the smart guitar developed by Mind Music Labs [23].
Finally, it is the author’s hope that the results presented in
this paper could inspire other digital luthiers, performers,
and composers to continue this research on augmenting the
hurdy-gurdy as well as on composing for it.
Acknowledgments
This work is part of the "Augmentation of traditional Italian instruments" project, which is supported by Fondazione C.M. Lerici. The author acknowledges the hurdy-gurdy performer Johannes Geworkian Hellman for participating in the evaluation of the developed instrument.
9. REFERENCES
[1] T. Machover and J. Chung, “Hyperinstruments: Musi-
cally intelligent and interactive performance and cre-
ativity systems,” in Proceedings of the International
Computer Music Conference, 1989.
[2] E. R. Miranda and M. M. Wanderley, New digital mu-
sical instruments: control and interaction beyond the
keyboard. AR Editions, Inc., 2006, vol. 21.
[3] F. Bevilacqua, N. Rasamimanana, E. Fléty, S. Lemouton, and F. Baschet, “The augmented violin project: research, composition and performance report,” in Proceedings of the International Conference on New Interfaces for Musical Expression, 2006, pp. 402–406.
[4] D. Overholt, “Violin-related HCI: A taxonomy elicited
by the musical interface technology design space,” in
Arts and Technology. Springer, 2012, pp. 80–89.
[5] L. S. Pardue, C. Harte, and A. P. McPherson, “A low-
cost real-time tracking system for violin,” Journal of
New Music Research, vol. 44, no. 4, 2015.
[6] A. Freed, D. Wessel, M. Zbyszynski, and F. Uitti, “Augmenting the cello,” in Proceedings of the International Conference on New Interfaces for Musical Expression, 2006.
[7] S. Schiesser and C. Traube, “On making and playing an
electronically-augmented saxophone,” in Proceedings
of the International Conference on New Interfaces for
Musical Expression, 2006.
[8] C. Palacio-Quintin, “Eight Years of Practice on the Hy-
perFlute: Technological and Musical Perspectives,” in
Proceedings of the International Conference on New
Interfaces for Musical Expression, 2008.
[9] J. Thibodeau and M. M. Wanderley, “Trumpet augmen-
tation and technological symbiosis,” Computer Music
Journal, vol. 37, no. 3, 2013.
[10] N. Bouillot, M. Wozniewski, Z. Settel, and J. R. Coop-
erstock, “A mobile wireless augmented guitar.” in Pro-
ceedings of the International Conference on New Inter-
faces for Musical Expression, 2008, pp. 189–192.
[11] O. Lähdeoja, M. M. Wanderley, and J. Malloch, “Instrument augmentation using ancillary gestures for subtle sonic effects,” in Proceedings of the Sound and Music Computing Conference, 2009, pp. 327–330.
[12] O. Lähdeoja, “An augmented guitar with active acoustics,” in Proceedings of the Sound and Music Computing Conference, 2015.
[13] A. McPherson, “Buttons, handles, and keys: Advances in continuous-control keyboard instruments,” Computer Music Journal, vol. 39, no. 2, pp. 28–46, 2015.
[14] C. Cannon, S. Hughes, and S. Ó Modhráin, “Epipe: exploration of the uilleann pipes as a potential controller for computer-based music,” in Proceedings of the International Conference on New Interfaces for Musical Expression, 2003, pp. 3–8.
[15] A. Kapur, A. J. Lazier, P. Davidson, R. S. Wilson, and
P. R. Cook, “The electronic sitar controller,” in Pro-
ceedings of the International Conference on New In-
terfaces for Musical Expression, 2004, pp. 7–12.
[16] D. Young and G. Essl, “Hyperpuja: A tibetan singing
bowl controller,” in Proceedings of the International
Conference on New Interfaces for Musical Expression,
2003, pp. 9–14.
[17] L. Turchet, “The Hyper-Zampogna,” in Proceedings of
the Sound and Music Computing Conference, 2016.
[18] D. Menzies and A. McPherson, “An electronic bagpipe chanter for automatic recognition of highland piping ornamentation,” in Proceedings of the International Conference on New Interfaces for Musical Expression, 2012.
[19] S. Palmer and S. Palmer, The Hurdy-Gurdy. Newton Abbot, Devon, UK: David and Charles, 1980.
[20] S. Madgwick and T. Mitchell, “x-osc: A versatile wire-
less i/o device for creative/music applications,” in Pro-
ceedings of Sound and Music Computing Conference,
2013.
[21] J. Schacher and M. Neukom, “Ambisonics spatializa-
tion tools for max/msp,” in Proceedings of the Interna-
tional Computer Music Conference, 2006.
[22] D. Overholt, E. Berdahl, and R. Hamilton, “Advance-
ments in actuated musical instruments,” Organised
Sound, vol. 16, no. 02, pp. 154–165, 2011.
[23] L. Turchet, A. McPherson, and C. Fischione, “Smart
instruments: Towards an ecosystem of interoperable
devices connecting performers and audiences,” in Pro-
ceedings of the Sound and Music Computing Confer-
ence, 2016.
This article discusses the augmentation of acoustic musical instruments, with a focus on trumpet augmentation. Augmented instruments are acoustic instruments onto which sensors have been mounted in order to provide extra sonic control variables. Trumpets make ideal candidates for augmentation because they have spare physical space on which to mount electronics and spare performer “bandwidth” with which to interact with the augmentations. In this article, underlying concepts of augmented instrument design are discussed along with a review and discussion of twelve existing augmented trumpets and five projects related to mouthpiece augmentation. Common aspects to many of these examples are identified, such as the prevalence of idiosyncratic designs, the use of buttons placed at or near the left-hand playing position, and the focus on measuring or mimicking trumpet valves. Three existing approaches to valve sensing are compared, and a novel method for sensing valve position, based on linear variable differential transformers, is introduced. Based on the review and comparison, we created an example augmented trumpet that tests the feasibility of a modular design paradigm. The results of this review of the state-of-the-art and our own research suggests future directions towards a better understanding of augmented trumpet design.