Real-Time Interaction Among Composers, Performers, and Computer Systems
Cort Lippe
The Hiller Computer Music Studios
Department of Music, University at Buffalo
Buffalo, New York, USA 14260
lippe@buffalo.edu
Abstract
As the computer becomes more and more ubiquitous in society, the term “interactive” has become widely used but often misunderstood. In this paper, I discuss definitions of interactive music and
interactive music systems, performance issues in interactive music, and performer/machine
relationships that engender interaction in an attempt to explain how and why I pursue this
discipline. Furthermore, I describe the function of computers in my compositions, and the manner
in which I explore performer/machine interaction.
Personal Background
Prior to 1980, I was equally active as both an instrumental and electronic music composer. I was
originally drawn to the computer because of a strong interest in designing new sounds, something
at which computers are quite good, and exploring algorithmic compositional structures, simulation
being another particular strength of computers. While new sounds and compositional algorithms
can be readily explored in a non-real-time environment, and the tape and instrument paradigm has
existed since the beginnings of electronic music, my first experiences with electronic music were
with analog synthesizers, and my earliest experiences with computers in the 1970s were predominantly with real-time systems. In addition, during the second half of the 1970s, I worked
with an improvisational music/dance ensemble exploring live-electronics combined with acoustic
instruments. For the past 20 years, I have pursued creative and research interests in interactive
computer music involving live instrumentalists and computers in performance situations. The
opportunity to combine instruments and computers became increasingly practical as real-time digital signal processors became more widely available in the 1980s. As real-time control grew correspondingly more refined, it became evident that composed music for instruments and
interactive computer systems was a viable medium of expression. And while it is true that interactive
computer music is a relatively new area in the electronic music field, developments in the power of
desktop computers and the sophistication of real-time software have been responsible for enormous
growth of the genre in the last ten years.
The challenge of interactive computer music has been to articulate sonic design and compositional
structures in some sort of interactive relationship with live performers. While some composers use
computers to model or imitate musical instruments, and others are interested in modeling human
performance, I am not interested in replacing either instruments or performers. Musicians, with
their years of experience playing on instruments which have often developed over centuries, offer
rich musical and cultural potential, and are perhaps the ultimate driving force for me as a composer
working with computer technology.
Interactive Music
Robert Rowe, in the seminal book Interactive Music Systems [Rowe, 1993], states: “Interactive
music systems are those whose behavior changes in response to musical input”. A dictionary
definition of the word interactive states: “capable of acting on or influencing each other”. This
would imply that a feedback loop of some sort exists between performer and machine. Indeed, the
Australian composer Barry Moon suggests: “levels of interaction can be gauged by the potential
for change in the behaviors of computer and performer in their response to each other”. [Moon,
1997]. George Lewis, a pioneer in the field of interactive computer music, has stated that much of
what passes for interactive music today is in reality just simple event “triggering”, which does not
involve interaction except on the most primitive level. He also states that since (Euro-centric)
composers often strive for control over musical structure and sound, this leads many composers to
confuse triggering with real interaction. He describes an interactive system as “one in which the
structures present as inputs are processed in quite a complex, multi-directional fashion. Often the
output behavior is not immediately traceable to any particular input event.” [Rowe et al, 1992-93].
David Rokeby, the Toronto-based interactive artist, states that interaction transcends control, and in
a successful interactive environment, direct correspondences between actions and results are not
perceivable. In other words, if performers feel they are in control of (or are capable of controlling)
an environment, then they cannot be truly interacting, since control is not interaction. [Rokeby,
1997]. Clearly, on a continuum from triggering to Rokeby’s non-control there is a great deal of
latitude to loosely label a wide range of artistic approaches and human/machine relationships as
“interactive”.
Fortunately, a composer can assign a variety of roles to a computer in an interactive music
environment. The computer can be given the role of instrument, performer, conductor, and/or
composer. These roles can exist simultaneously and/or change continually, and it is not necessary to conceive of this continuum horizontally: on a vertical axis, everything from simple triggering to Rokeby-like interaction has the potential to exist simultaneously. If performer and machine are
to be equal in an interactive environment, one could argue that the performer should also be offered
a similar variety of roles. A performer is already a performer, and already plays an instrument;
therefore, a composer can assign the role of conductor and composer to a performer. The conductor
role comes quite naturally to a performer in an interactive environment, and is commonly exploited.
The composer role is more problematic: composers exploring algorithmic processes are often
willing to allow aspects of a composition to be decided via a computer program, as this is inherent
in the very nature of certain approaches to algorithmic composition. But some composers question
the idea of allowing a performer to take on the role of composer, and usually this involves giving
the performer some degree of freedom to improvise. While I am not attempting to equate performer
improvisation with algorithmic procedures, there are certainly interesting connections between the
two which are outside the scope of this paper. But if we look closely at the various explanations of
interactive music, most of them imply or clearly involve some shifting of the composer’s
“responsibilities” towards the computer system, towards the performer, or both. Interaction is a
very complex subject, and we are probably only at the beginnings of a discussion of
human/machine relationships, which I suspect will continue to develop over many years.
Musical Interactivity
While this discussion about interaction is fascinating on aesthetic, philosophical, and humanistic
levels, at the practical level of music-making the quantity or quality of human/machine interactivity
that takes place is much less important to me than the quality of musical interactivity. Musical
interactivity is something that happens when people play music together. Rich musical interactivity
exists in music that has no computer or electronic part, and can even be found in instrument/tape
pieces (albeit only the musician is actively interacting), so the level of interactivity of a computer
system is really a secondary consideration from a musical point of view. While I stated that I am
not interested in modeling performers or instruments, I am interested in using the computer to
model the relationship that exists between musicians during a performance. This relationship
involves a subtle kind of listening while playing in a constantly changing dialogue among players,
often centering on expressive timing and/or expressive use of dynamics and articulation. We often
refer to this aspect of music as “phrasing”, an elusive, yet highly important part of performance.
While concepts of musical interpretation exist in solo performance, and one can discuss a solo
performer’s “interaction” with a score, the musical interactivity that exists when two or more
musicians play together is a more appropriate model for describing interaction between musicians
and computers in real-time.
Musical Expression in Performance
At its core, European music notation has developed as a Cartesian coordinate system in which precise and measurable information about just two parameters, frequency and time, is specified. In
a typical performance we assume that frequency will be respected, usually rather precisely, while we
assume that time will be respected in a less precise manner. But, within certain boundaries, variants
in both frequency and time are expected and considered part of an expressive performance. Vibrato,
portamento, rubato, accelerando, ritardando, etc., are all common markings used by composers and
are part of the interpretive toolbox which a performer is expected to exploit in transforming
frequency and time. Some composers prefer not to take responsibility for notating subtle variations in
frequency and time beyond the notated pitches and rhythms, so that expressive decisions are left to
the performers’ discretion. Other composers, who perhaps prefer to rely less on cultural
conventions, specify variations of pitch and time in greater detail. Beyond frequency and time
notation there exists an enormous variety of imprecise notation that can be found in any glossary of
musical terms: staccato, legato, mezzo-forte, crescendo, sul ponticello, etc. Since these notations are
not as easily measurable as pitch and time, they are considered more in the domain of performers.
More importantly, a performance is judged, not by whether the pitches and rhythms are correct (this
is a given), but by how well the performer expressively alters pitches to a small degree, and rhythm
to a larger extent, while interpreting expressive markings pertaining to parameters of loudness,
timbre, articulation, etc., with a rather large degree of freedom. The interpretation of these
parameters is subjective and relative. The loudness scale (the difference between mezzo-forte and mezzo-piano); the interpolation between various loudness specifications (linear or exponential crescendo or decrescendo); the way in which pitches connect from one to another (portamento); the articulation of notes (staccato, legato, etc.); and timbral nuances (sul tasto, sul ponticello, etc.) are all
very subjective parameters. More importantly, the way time can be contracted and expanded
(rallentando, accelerando, rubato, etc.), and the way groups of notes can be organized into phrases,
making use of all of the abovementioned subjective parameters, are highly valued aspects of a
performer’s interpretation of a piece. If we consider a musical score, the performer, and instrument
as components of a “complex system”, the musical interpretation of a set of instructions executed
by a performer in real-time is both arbitrary and self-governing. The written score is ordered in
time, but the way the score is performed in its details is beyond the composer’s control. A given
interpretation of a notated score is the piece of music. In the hands of a skilled performer, each
performance of a piece is subtly different. The score is simply a road map, a set of instructions
used to produce the music. And, although I began this paragraph with the qualification
“European”, if the word “score” is replaced with “set of rules, conventions, and/or customs”,
almost any musical performance can be discussed qualitatively by listeners familiar with the
traditions of a particular musical culture.
Detection
Using audio input captured via a microphone and an analog-to-digital converter, computers can track parameters of instruments, such as frequency, amplitude, and spectrum. This information can be
combined to derive further details including silence, articulation, tempi, and timbre. Further
programming allows a computer to make distinctions about more “musical” parameters. Pitch
detection is principal in determining vibrato, portamento, and glissando. Amplitude detection predominates in determinations of staccato, legato, mezzo-forte, crescendo, rubato, accelerando, and ritardando. Spectral measurements determine sul ponticello, sul tasto, pizzicato, sordino, multiphonics, and changes in instrumentation. Sensors can also be employed to track physical motion,
which can aid in the determination of phrasing information. The ability to recognize what a
musician is doing during a performance can be very useful. A composer is not limited to simple
determinations about pitch and time during a performance, but also can collect data about musical
interpretation of a score. This information can be used for a variety of purposes: to trigger specific
electronic events, to continuously control or affect computer sound output by directly affecting
digital synthesis algorithms, to affect the behavior of compositional algorithms, etc. Performers find
themselves in situations where their performance has a variety of influences on the electronic
output. More importantly, subtle aspects of their musical interpretation affect the computer part.
There is a greater chance that a musician’s performance can be influenced by the electronic output
if the musician senses the effect of his/her playing on the computer part, thereby creating a
feedback loop between performer and machine. The ability to recognize what a musician is doing,
on as many levels as possible, gives a composer more ways to enrich an interactive environment.
Thoughtful high-level event detection and subtle performance nuances can be used to directly affect
the electronic output of a computer, much in the same way that performers affect each other in the
chamber music paradigm of concert music.
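As a rough illustration of the kind of feature detection described above, the following Python sketch computes per-frame amplitude, a crude pitch estimate, and a spectral centroid; the frame size, sample rate, and all names are my own illustrative assumptions, and a real-time system such as Max/MSP would implement these as signal-processing objects rather than frame-level code.

# A minimal sketch (not from the paper) of per-frame feature detection:
# amplitude, a crude pitch estimate, and spectral centroid, usable as
# continuous control data. All constants and names are illustrative assumptions.
import numpy as np

SR = 44100      # sample rate (Hz), assumed
FRAME = 1024    # analysis frame length, assumed

def detect_features(frame: np.ndarray) -> dict:
    """Extract simple control features from one audio frame."""
    # RMS amplitude: a basis for dynamics determinations (crescendo, staccato, ...)
    rms = float(np.sqrt(np.mean(frame ** 2)))

    # Magnitude spectrum of the windowed frame
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / SR)

    # Spectral centroid: a crude brightness cue (sul ponticello vs. sul tasto)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))

    # Crude pitch estimate: strongest spectral peak (real systems use
    # autocorrelation or dedicated pitch trackers instead)
    pitch = float(freqs[np.argmax(spectrum)])

    return {"rms": rms, "centroid": centroid, "pitch": pitch}

Derived determinations follow from these raw streams: silence as RMS below a threshold over several frames, staccato/legato from the shape of the amplitude envelope between note onsets, and so on.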
Compositional Approach
My compositions for instruments and computer have an underlying structure that cannot be altered.
Most pieces do not include improvisation. Electronic and instrumental events are ordered and their
unfolding is teleological. The computer “score” has events that are analogous to the pitch/rhythm
parameters of traditional musical notation, in that these events are essentially the same for each
performance. But, overlaid on this skeletal computer score, a great diversity of potential for variety
exists, so that while the computer part is pre-composed conceptually, the performer shapes the final
version during a concert. The computer score functions in much the same way as the musician’s
score. (I apologize to those who might hope for something more original conceptually. I would
propose that 99.99% of what is done with computers is purely a simulation of a tool or activity that
already exists, and that the other 0.01% probably represents real breakthroughs in human thinking.)
Since the performer’s musical interpretation of the written score directly affects the electronic score, and since the performer’s interpretation is different at each performance, the electronic part is different at each performance. The performer is “empowered” by being given the possibility to control and interact with the computer to produce the final computer output, and while I maintain my European concept of “ownership” as composer, the performer creates the final musical artifact.
Increasing the degree of interactivity (or autonomy) might be seen as a way of reducing my control
as composer. A completely closed interactive feedback loop negates the concept of control of a
system since control is not interaction. In this sense, my music has certain limitations. At times, the
performer controls (triggers events) more than interacts. As the composer, if I were to give both the
performer and the machine more autonomy, a self-governing interactive environment could be
created. But, my intention is not to create an interactive environment, nor to create a meta-musical
environment, but to compose a piece of music in which the performer is offered the possibility of
interacting with a computer system much as he/she interacts with other musicians. The feedback
loop between performer and machine is based on the software environment that I build and on the
abilities of the performer to react to what he/she hears coming from the electronic part. This
interaction gives the performer both greater responsibility and more control over the environment. I
would hope that, via the music, this more “intimate” relationship between performer and machine
can be heard and sensed by listeners.
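To make the score analogy concrete, here is a minimal Python sketch, under my own assumptions and not the author’s actual software, of such a skeletal computer score: the order of events is fixed (teleological), but each event’s realized parameters are shaped by live performance features, so the final version differs with every performance.

# A minimal sketch, not the author's actual software, of a fixed-order
# computer "score" whose events are triggered in sequence while continuous
# performance features shape each event's parameters. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ScoreEvent:
    name: str
    # Maps live performance features to synthesis parameters for this event.
    shape: Callable[[dict], dict]

@dataclass
class ComputerScore:
    events: list[ScoreEvent]
    index: int = 0

    def trigger_next(self, features: dict) -> dict:
        """Advance to the next event; its final form depends on the performer."""
        event = self.events[self.index]
        self.index += 1
        return {"event": event.name, "params": event.shape(features)}

# Example: the event order never changes, but the realized parameters track
# the live interpretation, so each performance yields a different electronic part.
score = ComputerScore(events=[
    ScoreEvent("granular_cloud", lambda f: {"density": f["rms"] * 100}),
    ScoreEvent("harmonizer", lambda f: {"transposition": f["centroid"] / 1000}),
])
print(score.trigger_next({"rms": 0.3, "centroid": 2500.0}))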
As mentioned above, the computer can be given various roles, horizontally or vertically, of
instrument, musician, conductor, and composer; and the performer can be given the roles of
conductor and composer. In addition, the performer/computer relationship can move on a
continuum (also horizontally or vertically) between the poles of an extended solo instrument and a multi-voiced or multi-part chamber ensemble. That is to say, musically, the computer part can be, at
times, inseparable from the instrumental part, serving rather to “amplify” the instrument in many
dimensions and directions; while at the other extreme of the continuum, the computer part can have
its own independent musical material. In addition, within this relationship, another continuum
exists: the sounds in the electronic part can be derived directly from the composed instrumental part, so that certain aspects of the musical and sound material of the instrumental and electronic
parts are one and the same, while at the other end of this continuum, the electronic part can be
derived from other sources, which could include synthesis or sound material other than the
instrument’s sound material.
Human/Machine Relationships
Humans have a rather complicated and intertwined conception of what is human-like and what is
machine-like. We spend a great deal of time trying to discipline ourselves to perform like machines:
our ideal of technical perfection and efficiency is something akin to our idea of a perfectly working
machine, and yet, we also have another entirely negative viewpoint towards anything human that is
too machine-like. Our relationship with machines becomes more and more complex as our contact
with machines increases in daily life. While I feel it is important to explain technical issues to
performers in order to increase their understanding of the various kinds of interaction possible in a
performance, I nevertheless attempt to offer them a certain degree of transparency in their
relationship with the computer, so that they may be free to perform on stage as they would with
other performers, concentrating on musical issues. My aim is not to put performers in a
technological context, but to place technology in an artistic context.
Conclusion and Future Work
A dynamic relationship among performers, musical material, and computers can enrich the musical
level of a performance for composers, performers, and listeners alike. Compositions can be fine-
tuned to the individual performing characteristics of different musicians; performers and computers
can interact expressively, and musicians can readily sense the consequences of their performance
and musical interpretation.
As a composer, it is difficult to predict what direction my future work will take. In addition to my
work as a composer, I am active in creating interactive pieces with performers that are collaborative
improvisations in which I “perform” the computer part. But the software environment for these
improvisations is similar to the environment I use for my composed pieces. Of course, while
numerous research directions, such as interactive improvisation environments, interactive computer
performer agents, electromechanical control, robotics and AI, gesture tracking via sensors and
cameras, real-time graphics and video, etc., all inform my work, there is still much to be done in the
research areas in which I am presently active. Specifically, the topics of analysis, processing,
resynthesis, and spatialization in the spectral domain [Lippe, 1999] continue to be areas of great
potential.
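As a heavily simplified illustration of this kind of spectral-domain processing, the following sketch performs a single analysis/processing/resynthesis pass in which one low-dimensional control curve shapes all frequency bins at once, echoing the low-dimensional control idea of [Lippe, 1999]; the code is my own assumption for illustration, not the cited implementation, and a real-time system would use windowed overlap-add rather than a single frame.

# A minimal sketch of an FFT-based analysis/processing/resynthesis pass.
# Single-frame only; the specific modification is an illustrative assumption.
import numpy as np

def spectral_process(frame: np.ndarray, gain_curve: np.ndarray) -> np.ndarray:
    """Analyze one frame, scale each bin's magnitude, and resynthesize."""
    window = np.hanning(len(frame))
    spectrum = np.fft.rfft(frame * window)          # analysis
    spectrum *= gain_curve                          # per-bin processing
    return np.fft.irfft(spectrum, n=len(frame))     # resynthesis

# Example: a single low-dimensional control curve (here, a spectral tilt)
# shapes all bins at once.
frame = np.random.randn(1024)
tilt = np.linspace(1.0, 0.1, 513)   # a 1024-point rFFT yields 513 bins
out = spectral_process(frame, tilt)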
References
Rowe, R., 1993. Interactive Music Systems. The MIT Press, Cambridge, Massachusetts, p. 1.
Moon, B., 1997. “Score Following and Real-time Signal Processing Strategies in Open-Form Compositions”, Information Processing Society of Japan SIG Notes, Vol. 97, No. 122, pp. 12-19.
Rowe, R., et al., 1992-93. “Putting Max in Perspective”, Computer Music Journal, The MIT Press, Cambridge, Massachusetts, Vol. 17, No. 2, pp. 2-6.
Rokeby, D., 1997. Public lecture and private discussions.
Lippe, C., and Settel, Z., 1999. “Low Dimensional Audio Rate Control of FFT-Based Processing”, IEEE ASSP Workshop Proceedings, Mohonk, New York, pp. 95-98.