Article

Seeking Out the Spaces Between: Using Improvisation in Collaborative Composition with Interactive Technology

Author: Sarah Nicolls

Abstract

This article presents findings from experiments into piano and live electronics undertaken by the author since early 2007. The use of improvisation has infused every step of the process, both as a methodology to obtain meaningful results using interactive technology and as a way to generate and characterize a collaborative musical space with composers. The technology used has included pre-built MIDI interfaces such as the PianoBar, actuators such as miniature DC motors, and sensor interfaces including iCube and the Wii controller. Collaborators have included researchers at the Centre for Digital Music (QMUL), Richard Barrett, Pierre Alexandre Tremblay and Atau Tanaka.


... In 1990, Knapp and Lusted [26] proposed Biomuse, a bioelectric controller which used electroencephalography (EEG), electrooculography (EOG) and electromyography (EMG) as input modalities. Besides EMG, other bio-physical devices have been researched in the context of HCI, most notably in the domain of self-expression, e.g. as musical performance [32]. Donnarumma extensively explored mechanomyography (MMG) as one modality [14], while his recent work together with his co-authors Caramiaux and Tanaka also included EMG as an input channel [15, 7], contrasting the two technologies: EMG provided better locality. ...
... Working with Balandino has given me a direct means of communicating with the audience through a more physical performance style and the use of multiple effects and lights, intrinsically linked with the musical creation. (Eleanor Turner, personal communication, Birmingham 2017) Similar comments were also reported by pianist Sarah Nicolls when playing Suspensions by Atau Tanaka (Nicolls 2010, 2011). In particular, she was able to process the sound of the piano by engaging and disengaging the muscles of her arms during performance. ...
Article
The goal of our research is to provide harpists with the tools to control and transform the sounds of their instrument in a natural and musical way. We consider the development of music with live electronics, with particular reference to the harp repertoire, and include interviews with six harpists that use technology in their professional performance practice. We then present HarpCI, a case study that explores how gestures can be used to control and transform sound and light projection in live performance with the electric harp. HarpCI draws on research from the areas Human Computer Interaction (HCI) and Music Interaction Design (MiXD) to extend the creative possibilities available to the performer, and demonstrates our approach to bridging the gap between the performer/composer and the harp on one side, and the technology on the other. We discuss the use of guitar pedals with the electric harp, and the limitations they impose, and then introduce the MyoSpat system as a potential solution to this issue. MyoSpat aims to give musicians control over auditory and visual aspects of the performance through easy to learn, intuitive and natural hand gestures. It also aims to enhance the compositional process for instrument and live electronics, through a new way of music notation for gesturally controlled interactive systems. The system uses the Myo® armband gestural controller, a device to control live sound processing that is non-invasive to instrumental technique and performer. The combination of these elements allows the performer to experience a tangible connection between gesture and sound production. Finally, we describe the experience of Eleanor Turner, who composed and performed The Wood and the Water (Turner 2016) using MyoSpat, and we conclude by presenting the outcomes from HarpCI workshops delivered at Cardiff Metropolitan University for Camac Harp Weekend, Royal Birmingham Conservatoire’s Integra Lab and Southampton University.
... After the students started to get excited, they wanted to improve their playing based on their own interpretation and the melody they developed. After a bit of investigation, students stated that [11]. Based on the students' statements, the positive impact of the exploration activity was that students were able to gain new learning experiences, explore their abilities, practise their musicality, and become used to participating actively in developing the learning material given by the teacher. Yet the exploration might be inhibited if there were fewer stimuli and less motivation from the teacher. ...
... Electronics have been used for sound processing, sensors have been added, and pianists have reached inside the instrument to interact directly with the strings. But in my title, I refer not to Sarah Nicolls' Inside out piano [Nicolls, 2010], but to what Andrew McPherson did conceptually to the piano keyboard with the introduction of his TouchKeys sensor system. ...
Book
What is a musical instrument? What are the musical instruments of the future? This anthology presents thirty papers selected from the fifteen year long history of the International Conference on New Interfaces for Musical Expression (NIME). NIME is a leading music technology conference, and an important venue for researchers and artists to present and discuss their explorations of musical instruments and technologies. Each of the papers is followed by commentaries written by the original authors and by leading experts. The volume covers important developments in the field, including the earliest reports of instruments like the reacTable, Overtone Violin, Pebblebox, and Plank. There are also numerous papers presenting new development platforms and technologies, as well as critical reflections, theoretical analyses and artistic experiences. The anthology is intended for newcomers who want to get an overview of recent advances in music technology. The historical traces, meta-discussions and reflections will also be of interest for longtime NIME participants. The book thus serves both as a survey of influential past work and as a starting point for new and exciting future developments.
... The concert can use acoustic and electronic instruments along with additional sounds and processing by Soundcool, controlled by devices like tablets or smartphones. The designs of the different scenarios and the use of the Soundcool system allowed us to develop different projects, see [7], which are presented in this paper. ...
Article
Full-text available
This paper presents the possibilities of creating a collaborative networking website for our technological and educational music project, Soundcool: a new model for music education based on this application, a modular system using smartphones, tablets and Kinect, developed by Universitat Politècnica de València (UPV) through several grants from UPV, Generalitat Valenciana and the Carasso Foundation (Spain). Soundcool has been programmed in Max, a modular graphical programming environment for music and interactive multimedia creation, and uses Open Sound Control, designed to share information in real time over a network with several media devices. Our application is a creative development environment in its own right, but running its Max patches requires only the free application Max Runtime/Max Player. The pedagogical architecture of Soundcool is based on three music education scenarios that allow interaction between the various agents involved in the classroom. Soundcool is being used as a music education tool in several European countries through an Erasmus+ European project. For this reason, a virtual networking website helps all users to interconnect and to share experiences and media about the app. In this way, the interaction of Soundcool users sharing on a virtual site constitutes a new research methodology, which enables us to discover further possibilities for the application in all its creative and pedagogical uses.
Conference Paper
Full-text available
This paper proposes a new model for music education based on the use of the application Soundcool, a modular system for music education with smartphones, tablets and Kinect developed by Universitat Politècnica de València (UPV) through UPV (2013, Spain) and Generalitat Valenciana (2015-2016, Spain) projects. Soundcool has been programmed in Max, a modular graphical programming environment for music and interactive multimedia creation, and uses Open Sound Control, designed to share information in real time over a network with several media devices. Our application is a creative development environment in its own right, but for running Max patches it requires only the free application Max Runtime/Max player. The pedagogical architecture of Soundcool is based on three music education scenarios that allow interaction between the various agents involved in the classroom. Soundcool is going to be used as a music education tool in several European countries through an Erasmus+ European project.
Thesis
Full-text available
This project investigates the process of creating new works for two jazz trio ensembles, with a particular emphasis on improvisation with acoustic instruments and technology. Utilising a practice-based research model, the project documents and outlines the conceptual basis for the work, reflects on a series of public performances and examines studio recording sessions. By analysing the musical content, the use of technology, and the musicians' reflections on their decision making, the overall goal is to articulate the musical potential of improvising with technology in a jazz context. Exploring technology and developing extended techniques towards a hybrid acoustic-electronic "group sound" that is distinct but still recognisable as jazz is a core focus of this research. Specific software, hardware controllers, and audio effects are identified, and an analysis of the ways in which technologies are engaged by each musician is presented. Artistic reference points identify current and historical practice within this area, and a range of case studies gives context for how the music created here is relevant to contemporary jazz in Australia. The resulting musical output is documented in audio and video formats and includes multiple performer analyses, enabling detailed examination by the reader of how each musician merges improvisation using acoustic instruments and improvisation with technology. Ultimately this research has allowed two professional jazz ensembles to forge new musical pathways, creating expanded practical skills for the author and the musicians involved. This research will be of interest to jazz musicians seeking to broaden their practice through improvisation with technology. Additionally, the project is relevant to any reader or musician engaged with improvisation in contemporary music more broadly.
Thesis
Full-text available
Interfaces for musical expression are widely used for controlling and transforming sound in live performance. They aim to facilitate interaction with a computer and to empower the performer with more expressive control over the sound. However, the actions made to control them have the potential to interfere with the musical performance, in relation to the instrumental technique, choreographic aspects or the physical characteristics of the musical instrument being played. To avoid this issue, modes of interaction and various devices have been designed and used in conjunction with interactive audio and visual software to control and transform audiovisual media. In particular, gesture-sensing technologies have been successfully used in different musical applications. They, in turn, raise questions such as: how can musicians most effectively control and transform auditory, visual and lighting effects during a live performance through gesture? What interaction design considerations allow performers to interact simultaneously with an instrument and with audio-visual-lighting processing? How can disruption during a live performance with embodied human-computer interactions be reduced? The work presented in this thesis investigates modes of interaction with sound, visual projection and lighting effects during a musical performance that are natural and embodied, and not dependent on a particular musical instrument, its sound or its instrumental technique. For this purpose, using a User-Centred Design method, I realised MyoSpat upon Music and Human-Computer Interaction principles. MyoSpat is an interactive system, embedding Inertial Measurement Unit (IMU) and Electromyography (EMG) technology, for gesturally controlling audio and lighting processes during a musical performance. As part of this research, I also created Myo Mapper, which maps data from the Thalmic Labs Myo armband to Open Sound Control (OSC) messages.
Outcomes of this research are presented in this thesis and through a portfolio of performances realised in collaboration with musicians.
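The thesis abstract above describes mapping EMG data to control messages for audio and lighting processing. A common first step in such a pipeline, not detailed in the abstract, is turning the raw, bipolar EMG signal into a smooth amplitude envelope usable as a continuous control value. The sketch below illustrates that step only; the class name and smoothing constant are illustrative, not part of MyoSpat's actual implementation.

```python
# Minimal sketch of an EMG amplitude follower: full-wave rectification
# followed by a one-pole (exponential moving average) smoother.
# Names and parameter values are hypothetical, not MyoSpat's API.

class EmgEnvelope:
    """Tracks muscle-activation level from raw EMG samples."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha   # smoothing factor: higher = faster response
        self.value = 0.0     # current envelope level

    def process(self, sample):
        """Feed one raw EMG sample; return the updated envelope."""
        rectified = abs(sample)                              # rectify
        self.value += self.alpha * (rectified - self.value)  # smooth
        return self.value


env = EmgEnvelope(alpha=0.5)
env.process(1.0)           # envelope rises toward 1.0
level = env.process(-1.0)  # negative samples are rectified first
```

The resulting envelope could then be scaled and sent as an OSC message, in the spirit of the Myo Mapper described above.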
Chapter
Full-text available
The concern of this chapter is the pedagogical value of technology-based improvisation in university music programmes. In essence, the chapter is a call to remember the pedagogical imperatives and values of improvisation in music education, in light of the impact of advanced technocratic trends in education and of changes in student competencies and kinds of technology-related literacy.
Chapter
Full-text available
This paper addresses four criteria that the Soundcool project meets: to "be sustainable", "be future-oriented", "be transformative" and "be innovative". Soundcool is a pedagogical and technological project, and a brief description of the technology behind it will be useful to the reader before addressing the four criteria. Soundcool is like a "Lego" for sound: it is composed of a series of software modules that run on a central, or host, computer. Each module is a sort of musical instrument; it could be a synthesizer, a sampler, a sound-effect processor, etc. These modules can be interconnected in different ways, allowing the users, i.e. the students, to create their own arrangements, as we call the module creations and interconnections. Each module can then be controlled either with the mouse or, more interestingly, with a mobile device over WiFi. This way, every student can control one or several modules of the whole arrangement from their mobile device, contributing to a collaborative and participative experience.
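The chapter above states that each Soundcool module can be controlled from a mobile device over WiFi, and the related Soundcool abstracts note that the system uses Open Sound Control (OSC) for real-time networking. As a rough illustration of what such a control message looks like on the wire, here is a minimal single-float OSC encoder in pure Python; the address pattern `/soundcool/sampler/1/volume` and the port number are invented for the example, since Soundcool's actual address space is not documented here.

```python
import socket
import struct

def osc_message(address, value):
    """Encode a single-float OSC message: padded address string,
    type-tag string ",f", then a big-endian 32-bit float."""
    def pad(b):
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode("ascii")) + pad(b",f") + struct.pack(">f", value)

# Hypothetical module address; real Soundcool addresses may differ.
msg = osc_message("/soundcool/sampler/1/volume", 0.5)

# OSC is typically carried over UDP as a fire-and-forget datagram.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 9000))
```

In a classroom setup like the one described, each student's device would send messages of this shape to the host computer running the Max patches.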
Chapter
Capacitive touch sensing is increasingly used in musical controllers, particularly those based on multi-touch screen interfaces. However, in contrast to the venerable piano-style keyboard, touch screen controllers lack the tactile feedback many performers find crucial. This paper presents an augmentation system for acoustic and electronic keyboards in which multi-touch capacitive sensors are added to the surface of each key. Each key records the position of fingers on the surface, and by combining this data with MIDI note onsets and aftertouch from the host keyboard, the system functions as a multidimensional polyphonic controller for a wide variety of synthesis software. The paper will discuss general capacitive touch sensor design, keyboard-specific implementation strategies, and the development of a flexible mapping engine using OSC and MIDI.
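The augmentation described above combines per-key touch position with MIDI note onsets and aftertouch from the host keyboard to produce multidimensional control events. A toy version of that data fusion might look like the following; the class and field names are hypothetical and do not reflect the paper's actual implementation.

```python
# Sketch of fusing capacitive touch position with MIDI note-on data.
# On each note onset, the most recent touch position for that key is
# attached to the event, yielding a multidimensional control message.

class KeyTouchFusion:
    def __init__(self):
        self.touch = {}  # MIDI note number -> last touch position (0.0-1.0)

    def on_touch(self, note, position):
        """Capacitive-sensor callback: finger position along the key surface."""
        self.touch[note] = position

    def on_note_on(self, note, velocity):
        """MIDI note-on from the host keyboard; merge with touch data."""
        position = self.touch.get(note, 0.5)  # default: centre of the key
        return {"note": note, "velocity": velocity, "position": position}


fusion = KeyTouchFusion()
fusion.on_touch(60, 0.8)            # finger near the back of middle C
event = fusion.on_note_on(60, 100)  # multidimensional event for the synth
```

Such merged events could then be forwarded over OSC or MIDI by a mapping engine, as the abstract describes.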
Article
The design of a digital musical instrument is often informed by the needs of the first performance or composition. Following the initial performances, the designer frequently confronts the question of how to build a larger community of performers and composers around the instrument. Later musicians are likely to approach the instrument on different terms than those involved in the design process, so design decisions that promote a successful first performance will not necessarily translate to broader uptake. This article addresses the process of bringing an existing instrument to a wider musical community, including how musician feedback can be used to refine the instrument's design without compromising its identity. As a case study, the article presents the magnetic resonator piano, an electronically augmented acoustic grand piano that uses electromagnets to induce vibrations in the strings. After initial compositions and performances using the instrument, feedback from composers and performers guided refinements to the design, laying the groundwork for a collaborative project in which six composers wrote pieces for the instrument. The pieces exhibited a striking diversity of style and technique, including instrumental techniques never considered by the designer. The project, which culminated in two concert performances, demonstrates how a new instrument can acquire a community of musicians beyond those initially involved.
Article
Full-text available
Evaluation plays a significant role in the process of designing digital musical instruments (DMIs). Models are representations of systems or artifacts that provide a means of reflecting upon the design or behavior of a system. Taxonomies are often used in human-computer interaction (HCI) as a means of categorizing methods of design or evaluation according to characteristics they have in common. The development of a performance practice and a dedicated instrumental repertoire, along with the evolution of a DMI, allows performers and composers to become participants in shaping the function, form, and sound of the instrument. Performers are the only people who can provide feedback on an instrument's functioning in the context for which it was ultimately intended: that of live music making. It is often necessary to probe interaction designs at the task level, particularly in order to evaluate two possible options for a given design, or to probe the mental model that a user is constructing of a given interaction task.
Article
The advent of computing stimulates a desire to re-examine the subject of creativity. Though the computer can replace man in the production of graphic images, its function in the arts is seen as assisting in the specification of art systems and in their subsequent real-time management. An art of system or process is placed in the context of primarily the visual or plastic arts, but the authors disavow concern with any 'new' or 'modern technological' art. Various types of art systems are mentioned and the advantages of the fully interactive one are considered. It is pointed out that the inclusion of complex real-time responses in an interactive art system can frequently make use of a computer. In such work, the artist and the viewer play an integral part. The traditional role of the artist, composer or writer is thus called into question; it may no longer be necessary to assume that he is a specialist in art -- rather, he is a catalyst of creative activity. Three cases are discussed to illustrate the applications of this approach.
Article
The author considers the absence of the artist's body in electronic music, a missing element that he finds crucial to the success of any work of art. In reviewing the historical development of electronic music from musique concrète to analog and then digital synthesizers, the author finds that the attainment of increased control and flexibility has coincided with the reduction of identifiable bodily involvement by the performer. He contrasts this trend with the highly physical intervention and manipulation, first practiced with atypical electronic instruments such as the theremin, subsequently introduced to the electric guitar by Jimi Hendrix and his followers, and then to vinyl by turntable artists. He concludes that the tension between body and machine in music, as in modern life itself, can only exist as an experience to examine and criticize and not as a problem to resolve.