Proceedings of the 3rd Workshop on Intelligent Music Production, Salford, UK, 15 September 2017
MYOSPAT: A SYSTEM FOR MANIPULATING SOUND AND LIGHT THROUGH HAND
GESTURES
Balandino Di Donato and James Dooley
Integra Lab
Birmingham Conservatoire
{balandino,james}@integra.io
ABSTRACT
MyoSpat is an interactive audio-visual system that aims to
augment musical performances by empowering musicians
and allowing them to directly manipulate sound and light
through hand gestures. We present the second iteration of
the system that draws from the research findings to emerge
from an evaluation of the first system [1].
MyoSpat 2 is designed and developed using the Myo ges-
ture control armband as the input device and Pure Data as the
gesture recognition and audio-visual engine. The system is
informed by human-computer interaction (HCI) principles:
tangible computing and embodied, sonic and music interac-
tion design (MiXD). This paper reports a description of the
system and its audio-visual feedback design. We present an
evaluation of the system, its potential use in different multi-
media contexts and in exploring embodied, sonic and music
interaction principles.
1. INTRODUCTION
Performing with technology is often synonymous with
learning new skills that are at odds with musical skills,
potentially having negative and ‘disruptive’ effects [2].
MyoSpat is a gesture-controlled electronic interaction sys-
tem that aims to overcome the ‘disruptive’, ‘highly complex’
nature of live electronic processing experienced by many
performers, providing them with an opportunity for new
expressive ideas. Bullock et al. [3] identify divergences
between the performer and technology caused by the lack
of familiarity with complex systems. This can create a
dislocation between the performer’s gestures and the mu-
sical result. Lippe [4] emphasises the importance of al-
lowing musicians to interact confidently with technology
in order to present a musical and expressive performance.
With MyoSpat, we underline the importance of embodying
music [5], empowering performers to express their musi-
cal ideas through gestural control over any electronic part
in performance. Visual feedback can enhance the gesture-
sound relationship, playing a significant role in guiding the
user’s actions during performance [6] and strengthening the
perception of auditory feedback [7]. MyoSpat gives mu-
sicians control over sound manipulation through a direct
connection between hand gestures and audio-visual feed-
back, whilst making the newly learnt gestures as intuitive
and complementary to instrumental technique as possible.
Using motion tracking, we are able to efficiently map hand
gestures to audio-visual responses. By also tracking biodata
– such as EMG, EEG, blood flow and heartbeat – it is possi-
ble to establish stronger action-sound relationships [8] that
produce deeper understandings of the dynamics and mech-
anisms embedded in these actions [9]. Systems using myo-
graph data have emerged over the past two decades [10, 11],
with a number of recent works utilising the Myo armband,
demonstrating its reliability and appropriateness as an ex-
pressive gestural controller for musical applications [12,13].
2. THE SYSTEM
Developed through an iterative design cycle, MyoSpat’s de-
sign utilises context-based, activity-centred and empathic
design approaches: interactions between users and medi-
ating tools are positioned within the motives, community,
rules, history and culture of those users [14].
MyoSpat 2 (outlined in Fig. 1) uses: (i) the Myo armband as
an input device to track hand gestures; (ii) Myo Mapper1
to extract and convert data from the Myo into Open Sound
Control (OSC) messages; (iii) Pd with the ml-lib2 externals
for gesture recognition, audiovisual signal processing and
spatialisation; and (iv) Arduino for converting se-
rial data into DMX signals that control lighting effects.
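To make this data flow concrete, the sketch below (not part of the original implementation, which lives in Pd) receives OSC messages of the kind Myo Mapper emits and derives a simple EMG mean absolute value; the OSC address patterns and port number are assumptions for illustration.
```python
# Minimal OSC receiver standing in for the Pd side of MyoSpat.
# Address patterns and port number are assumptions for illustration.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer


def on_yaw(address, yaw):
    # Yaw scaled to 0..1 by Myo Mapper; in MyoSpat this would drive
    # the gesture classifier and the spatialiser.
    print(f"{address}: yaw={yaw:.3f}")


def on_emg(address, *channels):
    # Eight EMG channels; the mean absolute value (MAV) is a simple
    # proxy for overall muscle contraction.
    mav = sum(abs(c) for c in channels) / len(channels)
    print(f"{address}: MAV={mav:.3f}")


dispatcher = Dispatcher()
dispatcher.map("/myo/yaw", on_yaw)  # assumed address pattern
dispatcher.map("/myo/emg", on_emg)  # assumed address pattern

server = BlockingOSCUDPServer(("127.0.0.1", 8000), dispatcher)
server.serve_forever()
```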
2.1. Interaction Design
MyoSpat 2’s interaction design (IxD) draws on mimetic the-
ories, embodied simulations [15] and metaphorical actions
[16]. This approach facilitates a direct connection between
the gestural interaction and the audio-visual feedback, as
sound is treated like a tangible object that can be grasped
and modified through continuous interactions. MyoSpat’s
IxD aims to (i) create a gesture vocabulary that enables in-
teraction with the system through meaningful gestures for
both performer and audience; (ii) produce a clear and strong
connection between gestures and audiovisual feedback; and
(iii) allow musicians to use the system through natural in-
teractions. The term natural here refers to the contextualised
1http://www.balandinodidonato.com/myomapper/
2https://github.com/cmuartfab/ml-lib
interaction with physical and virtual objects, conditioned by
previous knowledge [17].
Figure 1: MyoSpat implementation. (Block diagram: Myo armband and audio input; Myo Mapper providing EMG MAV, accelerometer and yaw data; ml.svm and mapping; signal routers (a) and (b); reverb, pitch shifter, AM and delay effects; SPAT spatialiser with trajectory generator; Arduino + DMX shield driving the lighting; loudspeakers.)
Figure 2: MyoSpat interactive areas when the Myo armband is worn on the left arm. (Legend: Area 1, clean sound; Area 2, reverb (extend); Area 3, pitch shift (lower); SPAT labels around the front, left and right of the listening space.)
The MyoSpat gesture library includes six gestures. The
clean gesture is performed by orienting the arm towards the
front of the body and/or inwards towards the chest (Fig. 2,
area 1). It allows users to obtain a clean sound and to set
the lighting system colour to white. The extend gesture is
performed by orienting the arm outwards (Fig. 2, area 2),
allowing users to apply a long reverb to the sound. This
gesture sets the colour of the lights to blue. The lower ges-
ture is performed by lowering the arm towards the ground
(Fig. 2, area 3). It enables the user to pitch shift the sound
one octave lower, setting the colour of the lights to green.
The crumpling gesture is performed by hand movements
that repeatedly contract the forearm’s muscles, thus gener-
ating fluctuations in EMG data, an approach drawn from
previous experiments with sound design in mixed realities. This ges-
ture applies amplitude modulation (AM) followed by delay
effects to the audio signal. As the forearm’s muscles be-
come more contracted, AM depth and delay feedback are
increased, whilst delay time is shortened. Here the lights
have a strobing effect, where strobe time and intensity are
directly related to delay time and intensity. The pointing
gesture allows the user to spatialise the sound around the
audience. This gesture is performed by pointing at the
location the sound should come from. This user
interaction is an implementation of the same type of ges-
ture described in [18]. The brightness of each light is ad-
justed relative to the spatial position of the sound. The
throwing gesture involves a rapid movement of the arm, as
if throwing an object, enabling the user to spatialise the
sound through a circular trajectory. Once the gesture is
detected by the system, the duration of the audio effect is
determined by mapping arm acceleration and Myo’s EMG
mean absolute value, and trajectory direction is determined
by the yaw value. The brightness of each light is dynami-
cally adjusted in relation to the spatial position of the sound.
This gesture is inspired by previous work on approaches to
controlling and visualising the spatial position of ‘sound-
objects’ [19]. The relationship between sound and gesture
refers to metaphor and mimetic theories [20] embedded in
the movements performed when the hand moves towards
each of the three areas, and not the pose assumed by the
hand once it has reached one of these areas. When perform-
ing the extend gesture, users move their arm outwards, thus
extending the area that the body covers within the space.
We try to represent the expansion of the body within the 3D
space by extending the sound with a long reverb. We associate
the lower gesture with a pitch shift one octave lower, as a
result of lowering the arm.
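As an illustration of the continuous mapping described for the crumpling gesture, the sketch below maps a normalised contraction level to AM depth, delay feedback, delay time and strobe rate; the parameter ranges and linear curves are illustrative assumptions rather than MyoSpat's actual Pd mapping.
```python
def crumpling_mapping(contraction: float) -> dict:
    """Map a normalised contraction level (0..1, e.g. an EMG mean
    absolute value) to audio and lighting parameters: more contraction
    gives deeper AM, more delay feedback, shorter delay time and a
    faster strobe. Ranges are illustrative assumptions."""
    c = max(0.0, min(1.0, contraction))       # clamp to 0..1
    return {
        "am_depth": c,                         # 0 (dry) .. 1 (full depth)
        "delay_feedback": 0.9 * c,             # kept below 1.0 for stability
        "delay_time_ms": 500.0 - 450.0 * c,    # 500 ms down to 50 ms
        "strobe_rate_hz": 1.0 + 9.0 * c,       # lights strobe faster too
    }


# Example: a strong contraction.
print(crumpling_mapping(0.8))
```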
2.2. Gesture recognition
Machine learning is used to recognise the clean, extend
and lower gestures. Specifically, we use the Support Vec-
tor Machine (SVM) algorithm from the ml-lib library for
Pd. Gesture identification probabilities are used for rout-
ing signals towards the pitch shifter, the reverb or to obtain
a clean sound. Crumpling, pointing and throwing gestures
are recognised and controlled through direct mapping.
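The sketch below illustrates the same routing idea with scikit-learn's SVM standing in for Pd's ml-lib: a three-class classifier trained on pose features returns class probabilities that can act as routing weights. The feature choice (yaw plus a 3-axis accelerometer) and the toy training data are assumptions.
```python
# Pose classification sketch with scikit-learn's SVM standing in for
# Pd's ml-lib. Feature choice and training examples are assumptions.
import numpy as np
from sklearn.svm import SVC

# Toy training data: [yaw, acc_x, acc_y, acc_z] per example.
X = np.array([
    [0.50, 0.0, 0.0, 1.0],    # arm frontwards -> clean
    [0.52, 0.1, 0.0, 1.0],
    [0.48, 0.0, 0.1, 0.9],
    [0.80, 0.9, 0.0, 0.3],    # arm outwards -> extend
    [0.78, 0.8, 0.1, 0.4],
    [0.82, 0.9, 0.1, 0.2],
    [0.50, 0.0, -0.9, 0.2],   # arm lowered -> lower
    [0.49, 0.1, -0.8, 0.3],
    [0.51, 0.0, -1.0, 0.1],
])
y = np.array(["clean", "clean", "clean",
              "extend", "extend", "extend",
              "lower", "lower", "lower"])

clf = SVC(probability=True).fit(X, y)

# Class probabilities can act as crossfade weights between the clean,
# reverb and pitch-shift signal paths.
sample = np.array([[0.79, 0.85, 0.05, 0.35]])
weights = dict(zip(clf.classes_, clf.predict_proba(sample)[0]))
print(weights)
```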
2.3. Audio-lighting engine
The audio engine uses Pd patches from Integra Live’s3 Pitch
Shifter and Reverb modules to control pitch shift and reverb
respectively. MyoSpat’s spatialiser uses the HOA library4
for distributing the direct and reverberated sound through
3http://integra.io/integralive/
4http://www.mshparisnord.fr/hoalibrary/en/
the four speakers. A low-pass filter is added to simulate air
absorption, and a delay line for establishing the time of ar-
rival of the sound from each loudspeaker. Spatialisation pa-
rameters are controlled by mapping the yaw value through
transfer functions.
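A simplified, hypothetical sketch of yaw-driven spatialisation parameters is given below; MyoSpat itself uses the HOA library, so this amplitude-panning approximation with an assumed square loudspeaker layout only illustrates how per-speaker gains and arrival-time delays can be derived from yaw.
```python
# Hypothetical yaw-to-spatialisation sketch. MyoSpat uses the HOA
# library in Pd; this simple amplitude-panning approximation with an
# assumed square loudspeaker layout only illustrates how per-speaker
# gains and arrival-time delays can be derived from the yaw angle.
import math

SPEAKER_ANGLES = [45.0, 135.0, 225.0, 315.0]  # assumed layout (degrees)
SPEED_OF_SOUND = 343.0                        # m/s
RADIUS = 2.0                                  # assumed speaker distance (m)


def spat_params(yaw_deg: float):
    params = []
    for angle in SPEAKER_ANGLES:
        # Shortest angular distance between source direction and speaker.
        diff = math.radians((yaw_deg - angle + 180.0) % 360.0 - 180.0)
        # Gain falls off smoothly with angular distance from the speaker.
        gain = math.cos(diff / 2.0) ** 2
        # Longer virtual path -> later arrival, approximating the delay
        # line used for time of arrival from each loudspeaker.
        extra_path = RADIUS * (1.0 - math.cos(diff)) / 2.0
        delay_ms = 1000.0 * extra_path / SPEED_OF_SOUND
        params.append({"gain": round(gain, 3), "delay_ms": round(delay_ms, 2)})
    return params


for speaker in spat_params(yaw_deg=90.0):
    print(speaker)
```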
The Pd patch controlling the lights maps and converts
values into serial data. Data is sent to an Arduino with a Tin-
kerkit DMX Master Shield, which then connects directly to
the DMX lights.
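A minimal host-side sketch of this serial link follows; the three-byte frame format, serial port and baud rate are assumptions, since the paper does not specify the protocol between the Pd patch and the Arduino.
```python
# Host-side serial sketch for the lighting path. The three-byte frame
# ([0xFF, channel, value]), port name and baud rate are assumptions;
# the paper does not specify the Pd-to-Arduino protocol.
import serial


def send_dmx(port: serial.Serial, channel: int, value: int) -> None:
    """Send one DMX channel update as a start byte, channel and value."""
    frame = bytes([0xFF, channel & 0xFF, max(0, min(255, value))])
    port.write(frame)


if __name__ == "__main__":
    with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
        # Example: set the first light to white at full brightness,
        # assuming RGB on DMX channels 1-3.
        for channel in (1, 2, 3):
            send_dmx(port, channel, 255)
```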
3. EVALUATION
A direct evaluation approach [21] was adopted for assessing
the gesture recognition process. Usability and user experience
were evaluated through a user study conducted at
Berklee College of Music, Valencia Campus and at Integra
Lab, Birmingham Conservatoire.
3.1. Methodology
Demographic and background information was collected
through a semi-structured interview. The system was then
demonstrated to the participant, after which they could prac-
tice with and learn about the system for a period of 10 min-
utes. Next, the usability test asked participants to perform
a series of sound manipulation tasks with the system. An
audio file containing the sound of water flowing was used
during this part of the study, with participants manipulat-
ing it using the system’s gesture set. During the usabil-
ity test, the gesture recognition accuracy was evaluated by
monitoring when the gesture recognition algorithm success-
fully detected each user interaction. Participants then took
part in an informal, semi-structured interview to
gather qualitative data about their perception of the audiovi-
sual feedback. Lastly, participants were asked to complete a UEQ5
to quantitatively assess their experience using the system.
3.2. Results
The user study was attended by nine participants (seven
musicians and two non-musicians). Most participants had
prior experience with interactive music systems, potentially
introducing a bias into our results. Results reported in Fig. 3
show high accuracy for the gesture recognition system. However,
due to bugs in the implementation of the mapping strategy for
the crumpling and throwing gestures, the associated sound
manipulations were harder to perceive. Results from the UEQ
(Fig. 4) report that MyoSpat is a highly attractive and
stimulating system, and was found to be effective during the
execution of specific tasks (pragmatic quality) as well as
free exploration (hedonic quality).
5http://www.ueq-online.org/
During the user study, participants perceived a strong link
between sound and gesture. They described light projec-
tions as enhancing the level of immersiveness and percep-
tion of the auditory feedback and its relation with hand ges-
tures. Musically skilled participants described MyoSpat as
being easy to use and incorporate into their performance
practice. One of the participants (a professional dancer) high-
lighted MyoSpat’s potential application in dance perfor-
mance. All participants considered the interaction with the
audio file containing the sound of flowing water to be natural
and embodied. Interestingly, the lower gesture made par-
ticipants interact with the sound as if they were submerging
their hand in a tub filled with water. Other gestures included
splashing and swirling water (see video6), demonstrating
MyoSpat’s potential to explore embodied interaction with
sonic objects, in line with similar research [22].
Figure 3: Usability study outcome.
Figure 4: User Experience Questionnaire outcome.
4. CONCLUSIONS
We have presented MyoSpat, an interactive hand-gesture
controlled system for creative audio manipulation in mu-
sical performance. Machine learning and mapping func-
tions were successfully implemented to recognise a number
of physical gestures, enabling audiovisual manipulations to
be mapped to each one of them. The current and previous
user studies demonstrate that the system can support musi-
cal improvisation and composition, such as The Wood and
6https://vimeo.com/221800824
The Water by Eleanor Turner7, and can empower users to
explore a novel range of embodied interactions during the
music-making process. Results also demonstrate that the
Myo armband does not restrict user movements, and that
MyoSpat has the potential to be employed in different fields
such as dance and Virtual Reality.
5. REFERENCES
[1] B. Di Donato, J. Dooley, J. Hockman, J. Bullock, and
S. Hall, “MyoSpat: A hand-gesture controlled system
for sound and light projections manipulation,” in In-
ternational Computer Music Conference, Oct. 2017,
forthcoming.
[2] E. McNutt, “Performing electroacoustic music: a
wider view of interactivity,” Organised Sound, vol. 8,
pp. 297–304, Apr. 2004.
[3] J. Bullock, L. Coccioli, J. Dooley, and T. Michailidis,
“Live Electronics in Practice: Approaches to training
professional performers,” Organised Sound, vol. 18,
pp. 170–177, July 2013.
[4] C. Lippe, “Real-Time Interactive Digital Signal Pro-
cessing: A View of Computer Music,” Computer Mu-
sic Journal, vol. 20, p. 21, Dec. 1996.
[5] A. Cox, “Embodying Music: Principles of the
Mimetic Hypothesis,” Society for Music Theory,
vol. 12, pp. 1–24, July 2011.
[6] L. Vainio, R. Ellis, and M. Tucker, “The role of visual
attention in action priming,” The Quarterly Journal of
Experimental Psychology, vol. 60, pp. 241–261, Feb.
2007.
[7] A. I. Goller, L. J. Otten, and J. Ward, “Seeing Sounds
and Hearing Colors: An Event-related Potential Study
of Auditory–Visual Synesthesia,” Journal of Cognitive
Neuroscience, vol. 21, pp. 1869–1881, Oct. 2009.
[8] K. Nymoen, M. R. Haugen, and A. R. Jensenius,
“MuMYO — Evaluating and Exploring the MYO
Armband for Musical Interaction,” in International
Conference on New Interfaces for Musical Expression,
June 2015.
[9] B. Caramiaux, M. Donnarumma, and A. Tanaka,
“Understanding Gesture Expressivity through Muscle
Sensing,” ACM Transactions on Computer-Human In-
teraction, vol. 21, pp. 1–26, Jan. 2015.
[10] G. Dubost and A. Tanaka, “A Wireless, Network-
based Biosensor Interface for Music,” in International
Computer Music Conference (ICMC), Mar. 2002.
7https://vimeo.com/204371221
[11] A. Tanaka and R. B. Knapp, “Multimodal Interac-
tion in Music Using the Electromyogram and Relative
Position Sensing,” in International Computer Music
Conference, pp. 1–6, Apr. 2002.
[12] C. Benson, B. Manaris, S. Stoudenmier, and T. Ward,
“SoundMorpheus: A Myoelectric-Sensor Based Inter-
face for Sound Spatialization and Shaping,” in Inter-
national Conference on New Interfaces for Musical Ex-
pression, pp. 332–337, July 2016.
[13] M. Weber and M. Kuhn, “Kontraktion,” in Audio
Mostly, pp. 132–138, ACM Press, Oct. 2016.
[14] P. Dourish, Where the Action is: The Foundations of
Embodied Interaction. Bradford books, MIT Press,
2004.
[15] R. W. Gibbs, “Artistic understanding as embodied
simulation,” Behavioral and Brain Sciences, vol. 36,
pp. 143–144, Mar. 2013.
[16] N. Schnell and F. Bevilacqua, “Engaging with
Recorded Sound Materials Through Metaphorical Ac-
tions,” Contemporary Music Review, vol. 35, pp. 379–
401, Jan. 2017.
[17] B. Leibe, T. Starner, W. Ribarsky, Z. Wartell, D. Krum,
B. Singletary, and L. Hodges, “The Perceptive Work-
bench: toward spontaneous and natural interaction in
semi-immersive virtual environments,” in IEEE Vir-
tual Reality 2000, pp. 13–20, IEEE Comput. Soc, Mar.
2000.
[18] J. Streeck, C. Goodwin, and C. LeBaron, “Embodied
Interaction in the Material World: An Introduction,” in
Embodied Interaction, Language and Body in the Ma-
terial World (J. Streeck, C. Goodwin, and C. LeBaron,
eds.), pp. 1–10, Cambridge, United Kingdom: Cam-
bridge University Press, Nov. 2011.
[19] J. Bullock and B. Di Donato, “Approaches to Visu-
alizing the Spatial Position of ’Sound-objects’,” Elec-
tronic Visualisation and the Arts, pp. 15–22, Jan. 2016.
[20] P. Wigham and C. Boehm, “Exploiting Mimetic The-
ory for Instrument Design,” in International Computer
Music Conference, pp. 1–4, May 2016.
[21] R. Fiebrink, P. R. Cook, and D. Trueman, “Human
model evaluation in interactive supervised learning,”
in Conference on Human Factors in Computing Sys-
tems, pp. 147–156, ACM Press, May 2011.
[22] E. O. Boyer, L. Vandervoorde, F. Bevilacqua, and
S. Hanneton, “Touching Sounds: Audio Virtual Sur-
faces,” in IEEE 2nd VR Workshop on Sonic Interactions
for Virtual Environments, pp. 1–5, Mar. 2015.
Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).