Brown, Andrew R. 2018. “Creative Improvisation with a Reflexive Musical Bot.” Digital Creativity 29 (1): 5–18. (pre-press)
Creative Improvisation with a Reflexive Musical Bot
Andrew R. Brown
Interactive Media Lab, Griffith University, Brisbane, Australia.
andrew.r.brown@griffith.edu.au
Andrew R. Brown is Professor of Digital Arts at Griffith University in Brisbane,
Australia. He is an active computer musician and computational artist. His research
interests include digital creativity, computational aesthetics, musical intelligence, and
the philosophy of technology. He pursues creative practices in computer-assisted music
performance and audio-visual installations, with a focus on generative processes and
interactions with live algorithms. For more information visit
http://andrewrbrown.net.au.
Creative Improvisation with a Reflexive Musical Bot
This paper discusses improvisatory musical interactions between a musician and
a machine. The focus is on duet performances, in which a human pianist and the
Controlling Interactive Music (CIM) software system both perform on
mechanised pianos. It also discusses improvisatory behaviours, using reflexive
strategies in machines, and describes interfaces for musical communication and
control between human and machine performers. Results are derived from trials
with six expert improvising musicians using CIM. Analysis reveals that creative
partnerships are fostered by several factors. The reflexive generative system
provides aesthetic cohesion by ensuring that generated material has a direct
relationship to that played by the musician. The interaction design relies on
musical communication through performance as the primary mechanism for
feedback and control. It can be shown that this approach to musical human-machine improvisation allows technical concerns to fall away from the musician’s awareness and attention to shift to the musical dialogue within the duet.
Keywords: music, performance, improvisation, interaction, generative, computer
Introduction
Improvising music machines are a type of musical interface, at the ‘player’ end of
Robert Rowe’s ‘player-paradigm—instrument-paradigm’ classification of interactive
music systems (Rowe 1993). Augmented acoustic instruments lie at the opposite end of
this spectrum. A key point of difference is in the mechanisms of control: the instrument-
paradigm typically favours direct physical control, while the player-paradigm seeks to
model aspects of human communication (albeit simplistically), and aims to influence
through negotiation rather than to control directly. ‘Player paradigm systems’, Rowe
contends, ‘present the machine as an interlocutor—another musical presence in the
texture that has weight and independence distinguishing it from its human counterpart’
(Rowe 1993, 302). In practice, most improvising music machines lie somewhere in the
middle of Rowe’s spectrum, combining aspects of direct control with models of more
abstract communication.
This article describes an improvising music machine called CIM (an acronym for
Controlling Interactive Music), a system developed over the last few years and designed
to lie towards the ‘player’ end of Rowe’s player-instrument spectrum. It is a semi-autonomous interactive music system with some elements of direct control. Its design prioritises duet improvisation using a minimal number and type of musical parameters for direct control. For a better idea of the system, an example of a performance with CIM from NIME 2016 is available for viewing online¹.
CIM draws on a rich tradition of systems for human-computer music improvisation. The
history includes improvising software that draws on the Jazz tradition, such as GenJam
(Biles 1994), which generated jazz solos using genetic algorithms, and Voyager (Lewis
2000), which used rule-based generative processes. From the outset, systems like these
focused on ‘listening’ to a human performer and ‘composing’ music responses in real
time. Later, Blackwell and Young (2005) used the term ‘live algorithms’ to describe
similar improvisatory computer software that utilises ‘reflex systems’ to accompany
human performance. Going beyond computational generation to real-time interaction,
they suggest ‘the challenge is to achieve equivalence between human and computer
collaborators’ (Young and Blackwell 2013: 507) during performance. François Pachet
(2006) directly explored reflexive techniques around the same time, and Assayag and
Dubnov (2004) employed processes of micro segmentation and recombination using
factor oracles. In more recent times, Fiebrink (2011) and others have applied machine
learning techniques to real-time interactive music systems. More detailed surveys of
improvising music systems can be found in Rowe (1993), Collins (2006) and Dean and
McLean (2018).
Performance as the interface
In his article ‘Sound is the Interface’, Agostino Di Scipio (2003) explores the design of
sonic art works, where a generative electronic music system ‘listens’ to the sound of the
environment and adjusts its performance as a result. A human being might also interact
with the system, either by making sounds (influencing the environment) or by adjusting
system parameters. Di Scipio describes such systems using ecological metaphors—for
example, they constitute ‘a dynamical system exhibiting an adaptive behaviour to the
surrounding external conditions, and capable to interfere with the external conditions
themselves’ (Di Scipio 2003: 271). Interactions with CIM are similarly structured to
operate within the dynamic system that is duet music improvisation. However, instead
of listening to and performing sound, as Di Scipio describes, CIM listens to and
generates MIDI data. In doing so, CIM is responding to the performance of the
musician while simultaneously performing its own output. Typically, CIM
performances involve two digitally enabled acoustic pianos (Yamaha Disklaviers),
which, in this case, turn the performance gestures (live and MIDI based) into sound.
Extending this use of performance data as a method of interaction, the musician’s piano
pedals are used to control aspects of CIM’s behaviour; this will be explained in more
detail below. Extending Di Scipio’s turn of phrase, this method of musician-machine
interaction will be termed performance as interface. Therefore, while the computer is doing symbolic analysis and generating MIDI data, the human performer is processing sound and playing via physical gestures.
Improvisation
In their introduction to the volume Creativity and Cultural Improvisation, Tim Ingold
and Elizabeth Hallam highlight three aspects of improvisatory activity or, as they put it
¹ https://youtu.be/Et7nGLvGt-A
more simply, ‘the way we work’. Improvisation is generative, it is relational, and it is
temporal (Ingold and Hallam 2007, 1). Accordingly, the CIM software creates a
relational connection with a musician by using the material played by the human
performer as the basis for its generated performance. This use of imitation and
interpretation forms the basis for describing the system as reflexive. Reflexive
approaches will be discussed in more detail later. For now, it is sufficient to agree with
Ingold and Hallam when they argue that imitation is not simple repetition ‘but entails a
complex and ongoing alignment of observation of the model with action in the world’
(Ingold and Hallam 2007, 5). And although computational recording and playback might be more simple-minded than human imitation, experience in designing CIM reveals that while playback itself is straightforward, the decisions about what and when to play are certainly not. They require adherence to cultural expectations, such as
the maintenance of stylistic conventions and the tracking of the dynamic development
of the human musician’s performance.
Improvisational relationality is inherent in human-human musical duets and, in the
broader social context, it involves a process by which people ‘continually participate in
each other’s coming-into-being’ (Ingold and Hallam 2007, 6). Human-technology
relations have a different ontological status but, nevertheless, can also be engaging
(Feenberg 2010). One of the design goals for CIM was to provide a ‘fluid’ interaction
between performers. Discussions with performers and audience members have revealed
that the quality of human-technology interactions is significant in any judgement of the
success of the performance (Brown, Gifford and Voltz 2013). This finding echoes
Sherry Turkle’s (1984) insight that interactions with computing systems (especially
games in her studies) give rise to a sense of the machine as a ‘second self’—a metaphor
particularly apt given CIM’s deliberate use of reflexive techniques. Commentary by
musicians on their performances with CIM, discussed later in this article, will focus
further attention on the character of the relational aspects of improvising with a
reflexive system.
Almost by definition, improvisation occurs in real-time, although experience and
preparation (and pre-coded processes) are also influential (Larson 2005). One of the
main challenges of improvisational music software is that the system must be
‘committed’ to its decisions. Although machine improvisation and computer-assisted
composition share many of the same techniques of analysis and generation, real-time
systems lack the capacity for revision, back-tracking or human selection. Computation
must also be timely, and so real-time software architectures are designed for efficiency.
These challenges are no less than those faced by human performers, and part of the interest in developing CIM was to explore how what has been learned about human musical cognition might inspire computational approaches to music making (Brown and Gifford
2010; Brown, Gifford and Davidson 2015).
Time marches on for culture as well as for individuals. The CIM system includes some
cultural ‘knowledge’, such as regularity of tempo and constraints around the density of
material to perform, but it does not contain a database of learned phrases or structures.
So as not to become outdated, CIM’s rules are informed by studies of music perception
(Narmour 1992; Temperley 2001; Brown, Gifford and Davidson 2015), and include
music theory concepts that are well established across Western musical genres; at this
stage, however, CIM does not learn or adapt its behaviour. In one sense, therefore, CIM
is stuck in time, due to its inability to learn; in a more positive sense, it approaches each
performance with a fresh slate and each performer with an invitation to experiment.
Using heuristics, it is guided by the past but not determined by past material. It is hoped
that in a practical sense this allows for a variety of aesthetic outcomes as individual
performers and performances can take their own direction. And, in a more historical
context, CIM provides a step forward in the relatively new cultural practice of
performance with computationally creative machines.
Reflexive Systems
A reflexive system is defined here, after Pachet (2006), as a process that feeds information from users back to them. Examples might be as simple as using a mirror, or as complex as compiling a DNA profile. In music, we are using the term to
mean a process by which performance data is captured and replayed, not unlike a loop
pedal, but also transformed in subtle ways that reinterpret or distort the data.
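As a rough illustration of this capture-and-replay idea, a reflexive loop might be sketched as follows. This is a hypothetical sketch, not CIM’s actual code (CIM is written for Extempore); the class, event format, and the particular transformations are illustrative assumptions.

```python
import random

class ReflexiveLooper:
    """A minimal sketch of a reflexive system: capture performance
    events, then feed them back with subtle transformations rather
    than verbatim, like a loop pedal that reinterprets its input."""

    def __init__(self):
        self.buffer = []  # captured (pitch, velocity, duration) events

    def listen(self, pitch, velocity, duration):
        # capture the human performer's material
        self.buffer.append((pitch, velocity, duration))

    def respond(self):
        # replay the captured material, slightly transformed: an
        # occasional octave/fifth transposition and a bounded
        # variation in dynamics (both illustrative choices)
        out = []
        for pitch, velocity, duration in self.buffer:
            if random.random() < 0.2:
                pitch += random.choice([-12, -7, 7, 12])
            velocity = max(1, min(127, velocity + random.randint(-10, 10)))
            out.append((pitch, velocity, duration))
        return out
```

The key property is that the output is recognisably the performer’s own material, distorted only within narrow limits.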
The power of mimetic processes in human psychology has a long history, documented
in the ancient writings of Plato and Aristotle. We see it in everyday encounters, like the
fun of visiting the hall of (distorted) mirrors in an amusement park (see Figure 1).
[insert figure 1 here]
Figure 1. Distorted mirrors in an amusement park.
In music, mimetic processes are widespread. They occur in the use of repetitions,
variations, canons, motifs and other compositional structures. Because simplistic
repetition of material, or mimesis more broadly, is open to criticism as musically trivial
or culturally stultifying, reflexive systems typically provide transformations of various
sorts to add interest and stimulation. Gary Peters, when discussing mimesis and the
philosophy of improvisation, comments that mimetic acts indicate ‘a process of decay
that demands of the artist the necessary transformation to renew, revivify, and re-
empower the mimetic faculty through the discovery of new or forgotten strategies’
(Peters 2009, 102). CIM designers have put forward the related argument that a
reflexive approach to interactive music systems involves elements of both
transformative and generative methods that have been traditionally seen as disparate
(Gifford and Brown 2011).
In computer systems, the use of the mirror as metaphor is also well established. Turkle
leaned heavily on the metaphor of computers as reflections of self, and noted that such
reflective technologies ‘allow us to see ourselves from the outside, and to objectify
aspects of ourselves we had perceived only from within’ (Turkle 1984, 155). Reflexive systems are generative by being transformative, in that they produce ‘output’ that differs from ‘input’ and appears as a novel interpretation; however, they maintain the benefits of combinatorial systems (Assayag and Dubnov 2004; Cope 1996) while preserving the inherent musicality and stylistic signatures of human performance.
The term Interactive Reflexive Music Systems (IRMS) was formally established by
Pachet (2006). These systems use the accumulated input from the user and typically do
not include external material stored in a database. They are based on feeding back that
input to influence the user's subsequent actions but, unlike a typical feedback circuit,
reflexive systems provide selected feedback in a form that may be altered. These
alterations can provide new or not previously apparent insights into the original,
allowing for evolving interaction and the development of ideas and materials. Pachet
employed IRMSs such as The Continuator across a range of performance
contexts and, with collaborators, explored their potential in educational settings with
young children (Addessi and Pachet 2004). He provided three criteria for an IRMS
(Addessi and Pachet 2004, 361):
It must produce an impression of similarity
It must conform incrementally to the personality of the user
It must be intimately controllable.
CIM attempts to adhere to all three. Further, CIM is designed not merely to follow but
to exert its own identity in the musical duet. In the spirit of trying to achieve some
‘equivalence’ between musician and machine, and despite primarily following the lead
of the human musician, CIM is also designed to appear to act somewhat independently,
to provide a sense of musical agency, and become an improvising partner for the human
musician.
Improvisational Agency in Machines
A somewhat naïve understanding of improvisation, as the term is used generally, is to
describe it as ‘making things up’ or ‘making do’, a kind of bricolage or non-systematic
tinkering. If this ‘anything goes’ approach were true, then a music improvising
computer system might be relatively easy to construct. However, the literature on
musical improvisation suggests that the task is much more sophisticated and nuanced,
and relies more on listening than playing. For example, David Borgo in his book
subtitled Improvising music in a complex age writes: ‘Improvised music hinges on
one’s ability to synchronize intention and action and to maintain a keen awareness of,
sensitivity to, and connection with the evolving group dynamics and experiences’
(Borgo 2005, 9).
Borgo’s description seems to set a high benchmark for embodied awareness and
understanding as a basis for improvisation. Andy Clark’s views on embodied cognition
would further suggest that the simplistic manipulation of recorded human performances,
such as those in reflexive systems, is unlikely to meet any reasonable threshold of
intelligence or creativity. ‘Intelligence and understanding’, he suggests, ‘are rooted not
in the presence and manipulation of explicit, language-like data structures, but in
something more earthy: the tuning of basic responses to a real world that enables an
embodied organism to sense, act, and survive’ (Clark 1997, 4).
It is true that most improvising software systems, including CIM, struggle to reach this
mark. This is not surprising, given Rowe’s comment that ‘Interactive improvisation
poses perhaps the greatest challenge for machine musicianship’ (Rowe 2001, 277).
However, interactive music systems such as CIM do more than the data manipulation
Clark refers to; they are interactive. They can sense the actions of the human performer,
if only in a limited way, and act in response to these conditions through their playback.
This interactive behaviour provides an important cue to performers and audiences alike:
that the system possesses some musical agency, or what Florent Berthaut and David
Coyle refer to as ‘liveness’. Nevertheless, they still seem to lack the
perspective and intention required to make aesthetic decisions, let alone have concern
for their own survival, as Clark would demand.
There is some support for CIM’s reflexive use of interpretation and generation in Marc
Leman’s discussion of mediation technology in musical performance. He emphasises
the important cognitive role of imitation in musical interaction, and provides a rich
discussion about the ways mimesis activates musical appreciation. Leman also points to
the mirroring of performance gestures that provide a sense of intentionality in the
reproducing agent. He also states that, for ‘artificial musicians’, the ‘artistic result
emerges from a trajectory of constrained interactions’ even when ‘interaction may
include randomness, but also imitation, and adaptation’ (Leman 2008, 174).
The ability of interactive music systems to impart a sense of intentionality underpins
their musical agency. Even if the intentionality is only apparent, it can become real in
the minds of the audience and the duet musician. When it works well, the performance
dialogue between human being and machine provides a sense of integration in which
the agency of the duo emerges from that of each performer. This interactive perspective
is consistent with Lambros Malafouris’ definition: ‘agency is a temporal and an
interactively emergent property of activity not an innate and fixed attribute of the
human condition’ (Malafouris 2008, 35).
The capacity for reflexive music systems to demonstrate creative agency is clear––even
if limited and requiring human interaction––but full autonomy has rarely been the point.
Rather, systems like CIM are intended as collaborative partners; they act in a network of
relationships (Brown 2016) that, importantly, includes a human musician, and from
which agency and creativity emerge, through interaction. Because of this, in the design
of CIM, and in this article, attention is focused most sharply on the contribution that
improvisational agents can make to collaborative creative outcomes, evaluated here
through the lens of performer experiences.
A Description of CIM
CIM can be described as an instance of what George Lewis calls a creative machine,
and defines as ‘devices that operate in dialogue with and contribute to real-time, real-
world musical utterance’ (Lewis 2009, 457). CIM is a reflexive interactive music
system, a music improvisor that listens to and controls a virtual or physical instrument
via MIDI. When possible in performances, both human being and machine play a
computer-controlled piano, such as the Yamaha Disklavier. In combination with this
instrument, CIM is perhaps a ‘robot’ rather than a software ‘bot’, with all the inherent
complexity of its material constraints and affordances that such an embodiment implies.
Refining the CIM software to work with the Disklavier requires adjustments to account for the material constraints of the instrument.
CIM is designed to operate in a Western musical context, with an orientation towards
diatonic harmony and metrical rhythm. The software records MIDI data from the
performer’s instrument, and uses this material as the basis for its own playback, which
is then performed simultaneously with the human performance, to create a duet. CIM
segments the captured performance data at boundary points of musical stability
(Narmour 1992; Brown and Gifford 2013). Segments are selected probabilistically for
playback in real-time and transformed before being played. Segment selection involves
a random walk through the database of recorded material, using a Gaussian probability
distribution centred on the current segment, thus creating a tendency for segment
repetition. The rate of playback is varied to produce an inverse match for the human
density level––that is, it plays more when the human performer plays less, and vice
versa.
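The segment-selection and density-matching behaviours described above might be sketched as follows. Function names, the Gaussian width, and the clamping strategy are illustrative assumptions rather than CIM’s actual implementation.

```python
import random

def next_segment_index(current, n_segments, sigma=1.0):
    """Random walk over recorded segments: sample a Gaussian centred
    on the current segment index, so repetition of the current segment
    and movement to nearby segments are most likely."""
    idx = round(random.gauss(current, sigma))
    return max(0, min(n_segments - 1, idx))  # clamp to valid range

def machine_density(human_density, max_density=1.0):
    """Inverse density matching: the system plays more when the human
    performer plays less, and vice versa."""
    return max_density - min(human_density, max_density)
```

The Gaussian walk gives the tendency toward segment repetition noted above, while still allowing occasional jumps to more distant material.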
Transformation Processes
Given the reflexive nature of CIM, a significant element of its operation is how it
generates material based on data captured from the human musician. One of the reasons
for using the reflexive approach is to preserve the expressivity of the human performer
by minimising the treatment of the data. There is an assumption that the musical
expressivity in the data can be exploited if it is treated with care. Coincidentally, Peters
reinforces this approach as fundamental to improvisation practices that exploit the
possibilities and constraints of musical material. ‘It is not a question of how much
material the improviser has available but in what ways all material contains, sedimented
within it, historical patterns of human engagement and creativity that impose limits on
what can and cannot be done on the occasion of the material’s subsequent reworking’
(Peters 2009, 11). CIM employs transformational processes that are designed to
leverage the ‘sedimented’ expressivity in the captured human performance, through
subtle processes of segmentation, transposition, inversion, pitch contour adjustment,
quantisation, and dynamic variation. Segment transformation is probabilistically applied
using a normal distribution across multiple parameters. The range of variation in any
parameter is limited, so as to maintain stylistic coherence with the original material. In
this approach there is a tension between the limiting of transformations and the
introduction of novelty––a tension frequently recognised as important for creativity
(Boden 2010). Feedback from experimentation with CIM over several years has led to
the decision to minimise the risk of ‘inappropriate’ material at the expense of increased
novelty. Perhaps, in future, more sophisticated transformational algorithms might allow
for better machine assessment of ‘appropriate’ transformations. For the interested
reader, source code for CIM is available online
(https://github.com/algomusic/CIM_in_extempore).
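A minimal sketch of this kind of bounded, normally distributed variation follows. The parameter names, the two-semitone transposition clamp, and the velocity spread are illustrative assumptions; CIM itself applies several further transformation types (inversion, contour adjustment, quantisation) in Extempore.

```python
import random

def transform_segment(notes, transpose_sigma=1.0, vel_sigma=6.0):
    """Apply subtle, probabilistically chosen variations to a segment
    while keeping the result close to the original material.
    `notes` is a list of (pitch, velocity) pairs; bounded normal
    draws keep transformations within a stylistically safe range."""
    # one transposition drawn per segment, clamped to +/- 2 semitones
    shift = round(random.gauss(0, transpose_sigma))
    shift = max(-2, min(2, shift))
    out = []
    for pitch, velocity in notes:
        # per-note dynamic variation, clamped to the MIDI range
        v = velocity + random.gauss(0, vel_sigma)
        out.append((pitch + shift, max(1, min(127, round(v)))))
    return out
```

Narrow sigmas and hard clamps embody the design decision reported above: minimising the risk of ‘inappropriate’ material at the expense of novelty.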
Performance as Interface
As CIM analyses the captured material, it looks for features in the human performance, including recent harmonic context, textural density, pitch range, and dynamic level. It uses these to condition its performance, always in keeping with that of the human
musician. The development of CIM has included an exploration of various performer
controls and methods of feedback. More controls and feedback were included in earlier
versions of CIM but the cognitive load was found to be distracting for the musician. As
far as possible, the current design seeks to utilise changes in music and performance as
the method of ‘communication’ between human being and machine.
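The listening features named above might be computed along these lines. The sliding-window scheme, the event format, and the use of a pitch-class set as a proxy for harmonic context are assumptions for illustration, not CIM’s actual analysis code.

```python
def performance_features(events, window=4.0, now=None):
    """Summarise recent performance: harmonic context (as a
    pitch-class set), textural density, pitch range, and dynamic
    level. `events` is a list of (time, pitch, velocity) tuples."""
    if now is None:
        now = max(t for t, _, _ in events)
    # keep only events inside the recent analysis window
    recent = [(t, p, v) for t, p, v in events if now - t <= window]
    pitches = [p for _, p, _ in recent]
    velocities = [v for _, _, v in recent]
    return {
        "pitch_classes": sorted({p % 12 for p in pitches}),
        "density": len(recent) / window,          # notes per second
        "pitch_range": max(pitches) - min(pitches),
        "dynamic_level": sum(velocities) / len(velocities),
    }
```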
Pedals as Controllers
In the current version of CIM the standard piano pedals on the musician’s instrument
are used as controllers to provide the human performer with some direct influence on
CIM. This reinforces the balance of influence in favour of the human, which reflects the
difference in musical competence. The right (sustain) pedal is used to control sustain on
both pianos in parallel. The left (soft) pedal sends a ‘flush’ command to CIM to empty
its database of recorded material and re-set other parameters. Holding down this pedal
keeps the recording buffer empty. In the future, other, more subtle, forms of ‘forgetting’
could be explored as an alternative to ‘flushing’ in order to rebalance musical agency
towards the machine. Holding down the middle (sostenuto) pedal prevents CIM from
recording any new data into the buffer but leaves current material intact.
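The pedal mapping described above can be sketched as a simple MIDI control-change handler. The controller numbers are the standard MIDI pedal assignments (CC64 sustain, CC66 sostenuto, CC67 soft); the `system` interface it calls into is hypothetical.

```python
SUSTAIN, SOSTENUTO, SOFT = 64, 66, 67  # standard MIDI pedal controllers

class PedalInterface:
    """Route the musician's pedal messages to system behaviour:
    sustain is mirrored to both pianos, the soft pedal flushes the
    buffer (and holds it empty), the sostenuto pedal pauses
    recording while leaving current material intact."""

    def __init__(self, system):
        self.system = system

    def on_control_change(self, controller, value):
        pressed = value >= 64  # MIDI convention: >= 64 means 'on'
        if controller == SUSTAIN:
            self.system.set_sustain_both_pianos(pressed)
        elif controller == SOFT:
            if pressed:
                self.system.flush()          # empty buffer, reset params
            self.system.recording = not pressed  # held down: stay empty
        elif controller == SOSTENUTO:
            self.system.recording = not pressed  # pause capture only
```

Reusing the existing pedals keeps the control surface within the performer’s established technique, consistent with the cognitive-load concerns discussed below.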
This approach is designed to minimise the musician’s cognitive load in musical
performance, a topic well covered by Jeff Pressing (1988) in the context of improvised
performance, and by Tim Sayer (2016) in his discussion of Live Coding as a real-time
improvisational practice. Although the demands of control in CIM are not as extensive
as those in Live Coding, experiments during the development of CIM have revealed that
the addition of even a few controls to the already demanding task of pianistic
improvisation provides challenges for the performer. The modification of the three
existing piano pedals to control aspects of CIM seems to be at the comfortable limit of
manageability for the pianists in this study.
Performer Feedback
There have been several previous experiments and performances with different versions
of CIM (Brown, Gifford and Voltz 2013a). The results reported here are based on recent
trials in which six expert improvising musicians used the latest version of CIM. Four of
the six had some previous experience with CIM––in two cases quite substantial––and
the remaining two had none. The musicians explored the system by rehearsing with it
for several hours, with guidance from researchers. Then each performed a series of
improvisations in a studio recording setting. During these activities, semi-structured
interviews were held to ascertain the performers’ impressions, experiences, comparisons
with other improvising situations, and so on. The sessions were video recorded,
researchers took observational notes, and a transcript of discussions was produced. The
performance setup involved two Disklavier pianos both connected to a computer via
MIDI over USB; one piano was played by the human performer, the other by CIM, as
shown in Figure 2. Videos from these sessions are available online².
[insert figure 2 here]
Figure 2. A performer at one piano; the second piano is played by CIM. Both pianos are
connected to the computer running CIM.
A thematic analysis (Braun and Clarke 2006) of the interview data was undertaken to
ascertain the musicians’ experiences of improvising with CIM. The analysis focused on
their perceptions of CIM’s projected musicality, or musical agency, or lack thereof.
Summaries of the performers’ comments are below, categorised by the themes that
arose: experience, interaction, musicality, and controllers. The themes derive from the
research concerns of the project but are limited to those issues most strongly present in
² https://youtu.be/HMQ1nw0owUo; https://youtu.be/PRrv0SS7pcI; https://youtu.be/FqrB741z_GE; https://youtu.be/lR3T_bYKGSY.
the interview data. In summary, despite the direct controls being quite spartan, the performers, without exception, were able to achieve a wide range of results with sufficient musical coherence and interest, albeit after several hours of engagement with the system. The reflexive approach adopted by the system appeared
to achieve the goal of imparting a sense of musical agency, while affording a broad
aesthetic range for the improvisation as a whole. There were limitations, however, in the
medium and longer-term temporal coherence of CIM’s choices of material and means
of transformation. Following are overviews and extracts from the performer interviews,
organised by the identified themes.
Performers reported that playing with CIM was an authentic and enjoyable
improvisational experience. In the words of one performer, “the challenge is trying to
spontaneously react to something that’s coming back at you, that you’ve set off. I think
that’s really valuable. And that’s what makes it fun, too” (P2). Another performer
echoed the sentiment, “This was really fun” (P1). The reflexive process and interactive
settings seemed to be effective in providing a sense of musical agency from CIM. This
might have been helped by the attitude with which performers approached the system:
“I’m kind of treating it like a person, and I’m listening and trying to go with what it’s
doing. At the same time I know that if I play things it will respond to that. But that’s the
same as a person” (P3). It seems, at least for this performer, the experience of agency
was more than a result of projection: “It feels like there’s another person in the group”
(P3).
They reflected on what the interaction with CIM was like, and what approach they needed to adopt. The novelty of CIM’s generation, despite its reflexive basis, typically required an open mind-set: “Having your mind open enough that you can hear it and
respond to it. Instead of thinking ‘I’m doing this now, and I’m not listening’ ” (P1). In
some sense, they saw the challenge of the unexpected as a positive: “I don’t mind that
it’s subverting my intentions” (P1), and “You can certainly set the tone by what material
you start with. But I can see with the work today that you can’t predict it” (P4). Some
performers found it helpful to be accepting of the directions CIM might take, and
confident that they could be fruitful: “I was a little more patient in those last two
[performances] and gave the algorithm more chance to develop – it was nice to
experience that. It was creating its own energy and ambience. And mood, in a way”
(P4).
Because CIM uses algorithms that manage musical elements as an interface, it was
useful to hear performers’ comments on how effectively those elements were managed.
Performers were mostly satisfied with CIM’s harmonic following, where playback was
pitch quantised to match the performers’ current harmonic context. “Note-wise, it’s
played some really cool stuff sometimes! It’s like: ‘How’d you come up with that?
Nice!’ And then sometimes it’ll play something that’s a bit lame. And then you have to
try and play something against it that will make it sound better. But a lot of the time it
seems to make choices that are complementary” (P2). Opinions were somewhat less
positive with regard to CIM’s rhythmic choices, which leave the captured material
largely unaltered. “Obviously you can really hear that I’m generating the harmonic
material. But there’s unexpected things with what’s coming out, rhythmically” (P4).
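The harmonic following described above pitch-quantises generated material to the performer’s current harmonic context. As a minimal sketch only – the function name, the nearest-pitch-class snapping strategy, and the fixed C-major context are illustrative assumptions, not CIM’s actual implementation – such quantisation might snap each MIDI note to the closest pitch belonging to an estimated scale:

```python
def quantise_pitch(midi_note, scale_pitch_classes):
    """Snap a MIDI note number to the nearest pitch whose pitch
    class (0-11) belongs to the current harmonic context."""
    octave, pc = divmod(midi_note, 12)
    # Search outward from the original pitch class, preferring
    # the nearer candidate (downward first on ties).
    for offset in range(12):
        for candidate in (pc - offset, pc + offset):
            if candidate % 12 in scale_pitch_classes:
                return octave * 12 + candidate
    return midi_note  # unreachable if the scale set is non-empty

# Hypothetical example context: the C major scale as pitch classes.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}
# quantise_pitch(66, C_MAJOR) -> 65 (F#4 snapped down to F4)
```

A fuller system would, of course, estimate the scale set dynamically from the performer’s recent input rather than fix it in advance.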
The temporal horizon for CIM is quite limited and larger scale structural form is
predominantly in the hands of the human performer. Comments often focused on the
need to introduce material carefully, and use the ‘flush’ function as a reset for a new
section. “There’s a bigger range of scenarios or possible structures that could emerge in
a live improvised duo. But we can get some of the feel of that here – that organic
development of an idea – but the format is a little bit predictable” (P4).
The use of the piano pedals as controls for CIM was a significant interaction design
development in the project; performer feedback on that point was therefore interesting
to hear. Getting used to controlling the sustain for both pianos took very little time: “I
think you just have to pedal your own piano, then you recognise if the other isn’t
working. You either take your foot off [the sustain] or flush it” (P4). Many performers
appreciated being able to mute CIM’s listening so they could play contrasting material
they did not want it to mimic. “The nice thing about the pedals we have now is that you
can go either way. You can have the harmony going together or divert” (P4). The flush
pedal was used frequently to take the improvisation in a new direction. “They [sections
of the performance] were in the one key for the most part, and then if it was in a
different key I would just reset [flush]” (P2). Performers also acknowledged that the
pedals were well complemented by other performance as interface features. “And
there’s a lot of things you can control anyway – articulations, dynamics – all those kinds
of things that you can control that you don’t need a pedal for” (P1).
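The pedal remappings described above – sustaining both pianos, muting CIM’s listening, and flushing its memory – can be illustrated with a small dispatcher for MIDI control-change messages. This is a hypothetical sketch: the assignment of the sostenuto (CC 66) and soft (CC 67) pedals to the flush and mute functions, and all class and method names, are assumptions for illustration, not CIM’s documented layout:

```python
class DuetController:
    """Illustrative remapping of the three standard piano pedals
    (MIDI CC 64 sustain, 66 sostenuto, 67 soft) to duet controls."""

    def __init__(self):
        self.sustain_both = False  # sustain applied to both pianos
        self.listening = True      # whether the bot captures input
        self.memory = []           # captured material awaiting reuse

    def capture(self, note):
        # Only record the performer's notes while listening is on.
        if self.listening:
            self.memory.append(note)

    def on_control_change(self, cc, value):
        pressed = value >= 64       # MIDI convention: >= 64 is 'on'
        if cc == 64:                # sustain pedal: pedal both pianos
            self.sustain_both = pressed
        elif cc == 67:              # soft pedal: mute bot's listening
            self.listening = not pressed
        elif cc == 66 and pressed:  # sostenuto pedal: flush memory
            self.memory.clear()     # reset for a new section
```

In performance, a mapping of this kind leaves the musical surface itself as the primary channel of communication, with the pedals carrying only the few discrete controls that playing alone cannot express.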
Other methods of evaluation could be undertaken for these studies. For example, the
MIDI data from performances could be recorded and analysed for patterns of interaction
or adherence to the underlying models of reflexive response. This is an opportunity for
future research; in this project, to date, it was felt that such analysis would yield little in
return for the significant effort it would require. There seems to be limited understanding
of how to interpret performance (MIDI) data as they relate to improvisational
effectiveness, so it is not clear where such an analysis would begin or end. Another
method could be to seek external expert commentary on the performances. This
approach was previously employed by the author (Brown, Gifford & Davidson 2015)
and in the future could be applied to the recorded documentation. This method,
however, is best suited to addressing the question of observed musical partnership,
rather than the situated experience of improvising with a reflexive system, which is the
focus here.
Discussion
When discussing human-machine improvising duets, it is tempting to maintain a focus
on the interaction between them. Although this is an important aspect of the design
discussion, our experiences show that, when it works well during performance, human-
machine interaction falls away from awareness and attention shifts to the music that
results. “If I compare that [playing with CIM] to an improvised situation, a freely
improvised situation with a duo or trio, it’s almost the same thing” (P3). These
experiences are in keeping with existing theories on musical and technical interactions.
In his book, The Philosophy of Improvisation, Peters urges readers to resist reducing the
improvisational collaboration to a dialogue between performers, and focus on
engagement with the music. ‘In short, the fundamental relationship is here understood
to be between the improvisor and improvisation, not between improvisor and
improvisor’ (Peters 2009, 3). He suggests the focus be on the ‘situatedness of the
improvised in a work, the contingency of that work, and the agility necessary to avoid
becoming trapped in the communicative community created by it’ (Peters 2009, 3).
These perspectives resonate with the comments and observations made by the
performers improvising with CIM.
Free improvisation of the sort typically undertaken with the reflexive music system––even
though this was not what Peters had in mind––is, he suggests, no longer the ‘radically
autonomous art’ it perhaps was some decades ago. Rather, it is
‘a predicament within which the artist performer is saddled with the “tragic” task of
preserving the beginning of art without destroying the freedom of this origin through the
creation of an artwork conceived of as an end’ (Peters 2009, 3). The maintenance of an
open-ended approach seemed to be particularly pertinent in performances with CIM. It
was made more so because of the ‘amplified’ contingency that CIM’s reflexive and
probabilistic processes contain, and the perception of agency it imparts, which arises
from its balance of uncanny similarity yet unpredictability. “I’m kind of treating it like a
person, and I’m listening and trying to go with what it’s doing. At the same time I know
that if I play things it will respond to that” (P3). Response, however, was localised in
time. Any plans a performer had about how a work would unfold were not, and could
not be, shared by CIM, and therefore performers needed continually to revise their
response to its sometimes-unexpected behaviours.
The performers’ initial responses to the reflexive nature of CIM were of delight and
intrigue, similar to those reported by Addessi and Pachet (2004) and to the typical
reactions to distorting mirrors or photo booths in theme parks. Beyond these initial
responses, performers also commented on the uncanny sense of playing in duet with
oneself, or that the systems somehow reflected inner feelings of which the performers
were not necessarily conscious. The unpredictability of output was a double-edged
sword: generated output was described as a mixture of pleasant surprise and unhelpful
contribution. This type of response echoes the somewhat chaotic aspects of music
improvisation explored by Borgo, who describes these unpredictable reflections from
improvising partners as a ‘turbulent mirror’ (Borgo 2005, 85). Performers
acknowledged the value of having to react spontaneously, even as CIM subverted their
intentions at times. “There’s the challenge of trying to spontaneously react to something
that’s coming back at you that you’ve set off. I think that’s really valuable. And that’s
what makes it fun, too” (P2). Indeed, this stimulation through unpredictability aligns
with contemporary notions of ‘improvisation in music as the process of creation (as
opposed to performative re-creation) and presentation of sonic events are simultaneous’
(Dean 2009, 134). However, as noted elsewhere, one challenge of the short-term nature
of the reflexive process is to maintain longer-term interest and development. For the
performers using CIM, a common approach to managing larger-scale form was to
conceive of the performance in sections, and to use the ‘flush’ function, activated by
one of the piano pedals, to clear CIM’s memory and set a new direction for the work.
The CIM system implements some quite direct control protocols, mediated through
performance gestures and musical feedback – in particular, the influence of harmonic
selection, range, dynamic level, and polyphony on CIM’s generated output, and the direct
control afforded by remapping the three standard piano pedals. This is an interaction design
approach, discussed here as ‘performance as interface’. The objective is to keep, as
much as possible, the performer’s focus on the music, and to use the music performed
by both human being and machine as the vehicle for communication between them. The
additional functions of the existing piano pedals were designed to provide maximum
impact, with minimal additional cognitive load. The use of existing interface controls
(piano pedals) also provides minimum distraction for audience members and heightens
the sense of duet partnership as an emergence of ensemble agency.
Conclusion
In CIM’s reflexive design, the performer’s playing determines the musical material
available for generation. Decisions about what and when the algorithm should play are
nevertheless far from straightforward: they require sensitivity to cultural
expectations and to the dynamic development of the human musician’s performance. The
use of performative control mechanisms for managing musical parameters such as
dynamics, rhythm and tonality appears to contribute to this sense of authentic duet
improvisation, and to a sense of ‘fluid’ (although not seamless) interaction, in which
awareness of interaction ‘control’ fades into the background, privileging the duet
interplay and moving the music itself into the foreground.
There are many areas for further development of CIM, especially the enhancement of
its rhythmic variability and analytical capabilities, given the emphasis in the
improvisation literature on the importance of listening. This could include
the addition of beat tracking (Collins 2006), enhanced harmonic context estimation
(Krumhansl and Kessler 1982; Lerdahl and Krumhansl 2007), performer sentiment
estimation (Ben-Asher and Leider 2013), learning from experience (Xia and
Dannenberg 2015; Fiebrink 2016), and the development of a sense of large scale
structural organisation (Eigenfeldt et al. 2016).
Of course, there are stories of improvisation with the CIM system necessarily left untold
in this paper. One is about the evolutionary development of the system itself, and how
the design team has improvised its way through the development of interpretive models,
strategies for interaction, and the mechanics of multiple implementations to get to the
point CIM has currently reached. These interesting tales are forthcoming.
The focus here has been on the experiences of musicians who have performed with the
CIM system as it now stands and, especially, on the system’s strong commitment to
reflexivity for music generation and to performance as interface. CIM might not yet
meet the standard of a ‘complex and ongoing alignment’ for improvisational action with
the world, set by Ingold and Hallam (2007, 5), but the experiences of expert performers,
to date, indicate that a reflexive approach can be musically satisfying and creatively
stimulating.
Acknowledgments
The author would like to acknowledge the contributions of Toby Gifford, Andrew
Sorensen, and Bradley Voltz to the design and development of various versions of the
CIM system, and to the many musicians who have performed with CIM and
participated in experimental trials. This research was supported by the Australian
Government, through the Australian Research Council's Discovery Projects funding
scheme (project DP120101829).
References
Addessi, Anna Rita, and François Pachet. 2004. ‘Child/Computer Interaction:
Observation in Classroom Setting’ in Proceedings of the Conference on
Interdisciplinary Musicology, edited by Richard Parncutt and F. Zimmer. Graz,
Austria: The University of Graz.
Assayag, Gérard, and Shlomo Dubnov. 2004. ‘Using Factor Oracles for Machine
Improvisation’ in Soft Computing-A Fusion of Foundations, Methodologies and
Applications 8 (9): 604–610.
Ben-Asher, Matan, and Colby N. Leider. 2013. ‘Toward an Emotionally Intelligent
Piano: Real-Time Emotion Detection and Performer Feedback via Kinesthetic
Sensing in Piano Performance’ in Proceeding of New Interfaces for Musical
Expression, 21–24. Kaist, Korea.
Berthaut, Florent, and David Coyle. 2015. ‘Liveness Through the Lens of Agency and
Causality’ in Proceedings of the International Conference on New Interfaces for
Musical Expression, 382–86. Baton Rouge, Louisiana.
Biles, John A. 1994. ‘GenJam: A Genetic Algorithm for Generating Jazz Solos’,
International Computer Music Conference, 131–37. San Francisco: ICMA.
Boden, Margaret A. 2010. Creativity and Art: Three Roads to Surprise. Oxford, UK:
Oxford University Press.
Borgo, David. 2005. Sync or Swarm: Improvising Music in a Complex Age. New York:
Continuum.
Braun, Virginia, and Victoria Clarke. 2006. ‘Using Thematic Analysis in Psychology’,
Qualitative Research in Psychology 3 (2): 77–101.
Brown, Andrew R. 2016. ‘Understanding Musical Practices as Agency Networks’ in
Proceedings of the International Conference on Computational Creativity. Paris:
Association of Computational Creativity.
Brown, Andrew R., and Toby Gifford. 2010. ‘Interrogating Statistical Models of Music
Perception’, International Conference on Music Perception and Cognition
(ICMPC 11), 715–17. Seattle, Washington.
Brown, Andrew R., and Toby Gifford. 2013. ‘Real-Time Segmentation Cues and the
Extended Now’ in Proceedings of the International Conference on
Computational Creativity. Sydney.
Brown, Andrew R., Toby Gifford, and Robert Davidson. 2015. ‘Techniques for
Generative Melodies Inspired by Music Cognition’, Computer Music Journal 39
(1): 11–26.
Brown, Andrew R., Toby Gifford, and Bradley Voltz. 2013a. ‘Controlling Interactive
Music Performance (CIM)’ in Proceedings of the Fourth International
Conference on Computational Creativity, edited by Mary Lou Maher, Tony
Veale, Rob Saunders, and Oliver Bown, 221. Sydney: The Association for
Computational Creativity.
Brown, Andrew R., Toby Gifford, and Bradley Voltz. 2013. ‘Factors Affecting
Audience Perceptions of Agency in Human-Computer Musical Partnerships’ in
Proceedings of Creativity and Cognition 2013, 296–99. Sydney: UTS.
Clark, Andy. 1997. Being There: Putting Brain, Body, and World Together Again.
Cambridge, MA: The MIT Press.
Collins, Nick. 2006. ‘Towards Autonomous Agents for Live Computer Music: Realtime
Machine Listening and Interactive Music Systems’. PhD Thesis. Cambridge:
Cambridge University.
Cope, David. 1996. Experiments in Musical Intelligence. Vol. 12. Madison, Wisconsin:
A-R Editions.
Di Scipio, Agostino. 2003. ‘“Sound Is the Interface”: From Interactive to Ecosystemic
Signal Processing’, Organised Sound 8 (3): 269–277.
Dean, Roger T. 2009. ‘Envisaging Improvisation in Future Computer Music’ in The
Oxford Handbook of Computer Music, edited by Roger T. Dean, 133–47.
Oxford: Oxford University Press.
Dean, Roger T., and Alex McLean, eds. 2018. The Oxford Handbook of Algorithmic
Music. Oxford: Oxford University Press.
Eigenfeldt, Arne, Oliver Bown, Andrew R. Brown, and Philippe Pasquier. 2016.
‘Flexible Generation of Musical Form: Beyond Mere Generation’ in
Proceedings of the International Conference on Computational Creativity. Paris:
Computational Creativity.
Feenberg, Andrew. 2010. Between Reason and Experience: Essays in Technology and
Experience. Cambridge Mass: The MIT Press.
Fiebrink, Rebecca Anne. 2011. ‘Real-Time Human Interaction with Supervised
Learning Algorithms for Music Composition and Performance’. PhD Thesis,
Princeton: Princeton University.
Fiebrink, Rebecca. 2016. ‘Machine Learning as Meta-Instrument: Human-Machine
Partnerships Shaping Expressive Instrumental Creation’ in Musical Instruments
in the 21st Century: Identities, Configurations, Practices, edited by Till
Bovermann, Alberto de Campo, Hauke Egermann, Sarah-Indriyati
Hardjowirogo, and Stefan Weinzierl, 137–51. Singapore: Springer.
Gifford, Toby, and Andrew R. Brown. 2011. ‘Beyond Reflexivity: Mediating between
Imitative and Intelligent Action in an Interactive Music System’ in Proceedings
of the 25th BCS Conference on Human-Computer Interaction, edited by Katie
Wilkie, Rose Johnson, and Simon Holland. Newcastle Upon Tyne.
Ingold, Tim, and Elizabeth Hallam. 2007. ‘Creativity and Cultural Improvisation: An
Introduction’ in Creativity and Cultural Improvisation, edited by Elizabeth
Hallam and Tim Ingold, 1–24. ASA Monographs 44. Oxford: Berg.
Turkle, Sherry. 1984. The Second Self: Computers and the Human Spirit. New York:
Simon and Schuster.
Krumhansl, Carol L., and Edward J. Kessler. 1982. ‘Tracing the Dynamic Changes in
Perceived Tonal Organization in a Spatial Representation of Musical Keys’,
Psychological Review 89 (4): 334–368.
Larson, Steve. 2005. ‘Composition versus Improvisation?’, Journal of Music Theory 49
(2): 241–275.
Leman, Marc. 2008. Embodied Music Cognition and Mediation Technology.
Cambridge, MA: The MIT Press.
Lerdahl, Fred, and Carol L. Krumhansl. 2007. ‘Modelling Tonal Tension’, Music
Perception 24 (4): 329–66.
Lewis, George E. 2000. ‘Too Many Notes: Complexity and Culture in Voyager’,
Leonardo Music Journal 10:33–39.
Lewis, George E. 2009. ‘Interactivity and Improvisation’ in The Oxford Handbook of
Computer Music, edited by Roger T. Dean, 457–66. Oxford: Oxford University
Press.
Malafouris, Lambros. 2008. ‘At the Potter’s Wheel: An Argument for Material Agency’
in Material Agency: Towards a Non-Anthropocentric Approach, edited by Carl
Knappett and Lambros Malafouris, 19–36. Springer.
Narmour, Eugene. 1992. The Analysis and Cognition of Melodic Complexity: The
Implication-Realization Model. Chicago: University of Chicago Press.
Pachet, François. 2006. ‘Enhancing Individual Creativity with Interactive Musical
Reflexive Systems’ in Musical Creativity: Multidisciplinary Research in Theory
and Practice, edited by Irène Deliège and Geraint A. Wiggins, 359. New York:
Psychology Press.
Peters, Gary. 2009. The Philosophy of Improvisation. University of Chicago Press.
Pressing, Jeff. 1988. ‘Improvisation: Methods and Models’ in Generative Processes in
Music: The Psychology of Performance, Improvisation and Composition, 129–
78. Oxford: Clarendon Press.
Rowe, Robert. 1993. Interactive Music Systems: Machine Listening and Composing.
Cambridge, MA: The MIT Press.
Sayer, Tim. 2016. ‘Cognitive Load and Live Coding: A Comparison with Improvisation
Using Traditional Instruments’, International Journal of Performance Arts and
Digital Media 12 (2): 129–138.
Temperley, David. 2001. The Cognition of Basic Musical Structures. Cambridge, MA:
The MIT Press.
Xia, Guangyu, and Roger B Dannenberg. 2015. ‘Duet Interaction: Learning
Musicianship for Automatic Accompaniment’ in Proceedings of the
International Conference on New Interfaces for Musical Expression, 259–64.
Baton Rouge, Louisiana: University of Louisiana.
Young, Michael, and Tim Blackwell. 2013. ‘Live Algorithms for Music: Can
Computers Be Improvisers?’ in The Oxford Handbook of Critical Improvisation
Studies, Volume 2, edited by Benjamin Piekut and George E. Lewis, 507–28.
New York: Oxford University Press.
... These form the basis for Machine Musicianship, in which music theory and performance technique are used in the modeling of artificial intelligence behavior (Rowe 2001). Machine Musicianship has since given rise to the study of algorithmic improvisation and musical collaboration with improvisational agents (Fremont 2019;Brown 2018). Artificial intelligence can also shift between the roles of "tool" and an "actor" when used in music creation (Caramiaux and Donnarumma 2020). ...
Article
Collaborative AI agents allow for human-computer collaboration in interactive software. In creative spaces such as musical performance, they are able to exhibit creative autonomy through independent actions and decision-making. These systems, called co-creative systems, autonomously control some aspects of the creative process while a human musician manages others. When users perceive a co-creative system to be more autonomous, they may be willing to cede more creative control to it, leading to an experience that users may find more expressive and engaging. This paper describes the design and implementation of a co-creative musical system that captures gestural motion and uses that motion to filter pre-existing audio content. The system hosts two neural network architectures, enabling comparison of their use as a collaborative musical agent. This paper also presents a preliminary study in which subjects recorded short musical performances using this software while alternating between deep and shallow models. The analysis includes a comparison of users' perceptions of the two models' creative roles and the models' impact on the subjects' sense of self-expression.
... To give the computer-generated musical material in this human-machine collaboration scenario a physical presence comparable to that of other traditional musical instruments the machine player here acts in an embodied form of a digital player piano 3 (cf. similar approaches e.g. in Brown, 2018, or the marimba-playing robot improvisor Shimon by Hoffman & Weinberg, 2010) instead of using loudspeakers for the actual sonic realization. Within this framing of a duo setting consisting of the player piano controlled by an AI system and a human musician, we are aiming at the exploration of various computational approaches for the interactive generation of musical material. ...
Conference Paper
Full-text available
This paper presents an ongoing interdisciplinary research project that deals with free improvisation and human-machine interaction , involving a digital player piano and other musical instruments. Various technical concepts are developed by student participants in the project and continuously evaluated in artistic performances. Our goal is to explore methods for co-creative collaborations with artificial intel-ligences embodied in the player piano, enabling it to act as an equal improvisation partner for human musicians.
... However, creativity in performance has received relatively little attention (Pinheiro 2010). More specifically, research into musical interaction activities with intelligent systems such as the Continuator (Pachet, 2003), Controlling Interactive Music (Brown, 2018) and Monterey Mirror (Manaris et al. 2018) present tools for contemporary music creation and co-creativity. However, our present work suggests a musical framework with a reflective agent that aims to elicit creativity by encouraging the human drummer to observe and refine their creative process. ...
Article
Full-text available
Shedding is a term used to describe a musical conversation between drummers with the aim to improve their drumming vocabulary, gain confidence in real-time trading of musical ideas, develop an understanding for their original voice on the drum kit and enjoy the process of exploring creativity with a fellow drummer. However, in practice drummers have limited opportunities to play in real time with other drummers. This research explores shedding activity in the form of mixed-initiative interaction between a human drummer and a conversational agent. This paper focuses on a series of design studies and experiments to explore three novel refinements to the proposed shedding model.
... Whilst a plethora of extra-musical communication channels are involved, such as physical gestures, eye contact, and even verbal cues, these are often seen as secondary across jazz (Hagberg 2017), free (Nunn 1998) and electroacoustic (Nort 2018) improvisation genres. Similarly, interactive music systems such as Cypher (Rowe 1992), OSCAR (Beyls 1988), Voyager (Lewis 1999) and CIM (Brown, Gifford and Voltz 2013) privilege this mode of 'performance-as-interface' (Brown 2018) whether or not some additional parametric controls are exposed. Thus, human evaluation in the humanmachine creative partnership can enter through the creative improvisation itself. ...
Article
Machines incorporating techniques from artificial intelligence and machine learning can work with human users on a moment-to-moment, real-time basis to generate creative outcomes, performances and artefacts. We define such systems collaborative, creative AI systems, and in this article, consider the theoretical and practical considerations needed for their design so as to support improvisation, performance and co-creation through real-time, sustained, moment-to-moment interaction. We begin by providing an overview of creative AI systems, examining strengths, opportunities and criticisms in order to draw out the key considerations when designing AI for human creative collaboration. We argue that the artistic goals and creative process should be first and foremost in any design. We then draw from a range of research that looks at human collaboration and teamwork, to examine features that support trust, cooperation, shared awareness and a shared information space. We highlight the importance of understanding the scope and perception of two-way communication between human and machine agents in order to support reflection on conflict, error, evaluation and flow. We conclude with a summary of the range of design challenges for building such systems in provoking, challenging and enhancing human creative activity through their creative agency.
... Several of the works in Play Nice explored sound-asinterface [18] and performance-as-interface [19] methods of human interaction by representing the live performative actions of humans as musebot messages, thereby placing a human agent within the virtual ensemble. This was done in a variety of ways. ...
Conference Paper
Full-text available
Musebots are autonomous musical agents that interact with other musebots to produce music. Inaugurated in 2015, musebots are now an established practice in the field of musical metacreation, which aims to automate aspects of creative practice. Originally musebot development focused on software-only ensembles of musical agents, coded by a community of developers. More recent experiments have explored humans interfacing with musebot ensembles in various ways: including through electronic interfaces in which parametric control of high-level musebot parameters are used; message-based interfaces which allow human users to communicate with musebots in their own language; and interfaces through which musebots have jammed with human musicians. Here we report on the recent developments of human interaction with musebot ensembles and reflect on some of the implications of these developments for the design of metacreative music systems.
... Flock (Knotts 2016) 2015 Algorithmic design Feedback from evolving agents (Brown 2018) 2016 Musical duet Infers musical roles were parsimonious for classification, eventually converging to the inclusion criteria listed in Section 1.2 and the descriptive axes discussed in Section 2.2. This process canvassed over 40 systems, selected to span a broad range of the design space, around half of which met our final criteria and are included in Table 1. ...
Article
Computational music systems that afford improvised creative interaction in real time are often designed for a specific improviser and performance style. As such the field is diverse, fragmented and lacks a coherent framework. Through analysis of examples in the field, we identify key areas of concern in the design of new systems, which we use as categories in the construction of a taxonomy. From our broad overview of the field, we select significant examples to analyse in greater depth. This analysis serves to derive principles that may aid designers scaffold their work on existing innovation. We explore successful evaluation techniques from other fields and describe how they may be applied to iterative design processes for improvisational systems. We hope that by developing a more coherent design and evaluation process, we can support the next generation of improvisational music systems.
Chapter
Electronic and acoustic sound and noise, including signal processed acoustic instruments, and autonomous IMSs (Interactive Music Systems), are key features of many online jam sessions. However, the types and qualities of sound that this introduces to an improvisation may be alien to many cross-cultural performers. This chapter explores the role of un-pitched, sound and noise, across musical cultures and how performers interpret them in intercultural tele-improvisation. Included in this examination, is how online cross-cultural performers perceive electronic sound, and the ways in which this shapes their interpretation and improvisatory responses. The investigation draws on the findings from the performance case studies and perspectives from practitioners and authors who incorporate these elements into their work. While exploratory in approach, consideration is also given to how networked performers engage with an IMS as a collaborative partner in intercultural tele-improvisatory context.
Chapter
Full-text available
This chapter analyses algorithmic music from the perspective of social and cultural theory. It surveys developments in the sociology of art that have sought to bring aesthetic theory into contact with classic sociological critique, and it surveys theories of mediation - particularly those that have sought to more fully account for the roles technical devices play in creativity. To this end, the chapter considers Actor-Network Theory (ANT) as a means to analyse the contributions of ‘non-human actors’ to the social world of algorithmic music. Two case studies are then discussed: first, the activities of the Bay Area algorithmic music pioneers, The Hub; and second, the group of musicians and artists centred around the practice of Live Coding. The example of The Hub raises the question of the relationship between technological change and artistic innovation. It argues that the external forces that bear on the instrumentarium of highly-technologised forms like algorithmic music should be considered as part of their social ecology. The analysis of Live Coding focuses on the way associated actors make use of the internet and world wide web both as a creative, communicative and social medium. It charts the online development of the TOPLAP manifesto to illustrate how, far from being a technological determination, the ‘true’ computer music that live coding seeks to articulate is an ongoing social negotiation that continues up to the present. The final section uses the Issue Crawler software to analyse networks of association within live coding in order to better understand the genres wider social makeup. I argue that the large-scale social, cultural, economic and political forces that sustain the field bear strongly on the aesthetic and conceptual terrain of the scene, particularly in regard to the genre’s careful negotiation of its relationship to ‘art’ and ‘popular’ histories of electronic music.
Conference Paper
This position paper proposes that creative practices can be usefully understood as agency networks. In particular, it looks at interactive algorithmic musical practices and takes a distributed view of the influences involved in such music making. The elements involved include the humans, tools, culture, and physical environment that constitute a system or network of mutual influences. Such an agency network perspective is intended to be useful for the pragmatic tasks of designing new interactive music systems and developing new musical practices that utilise them. Drawing on previous research into generative music and computational creativity, various views on interactive music systems are canvassed and an approach to describing these as agency networks is developed. It is suggested that new human-machine musical practices may arise as a result of adopting an agency network perspective and that these, in turn, can drive cultural innovations.
Book
Musicians begin formal training by acquiring a body of musical concepts commonly known as musicianship. These concepts underlie the musical skills of listening, performance, and composition. Like humans, computer music programs can benefit from a systematic foundation of musical knowledge. This book explores the technology of implementing musical processes such as segmentation, pattern processing, and interactive improvisation in computer programs. It shows how the resulting applications can be used to accomplish tasks ranging from the solution of simple musical problems to the live performance of interactive compositions and the design of musically responsive installations and Web sites. Machine Musicianship is both a programming tutorial and an exploration of the foundational concepts of musical analysis, performance, and composition. The theoretical foundations are derived from the fields of music theory, computer music, music cognition, and artificial intelligence. The book will be of interest to practitioners of those fields, as well as to performers and composers. The concepts are programmed using C++ and Max. The accompanying CD-ROM includes working versions of the examples, as well as source code and a hypertext document showing how the code leads to the program's musical functionality.
Book
In The Second Self, Sherry Turkle looks at the computer not as a "tool," but as part of our social and psychological lives; she looks beyond how we use computer games and spreadsheets to explore how the computer affects our awareness of ourselves, of one another, and of our relationship with the world. "Technology," she writes, "catalyzes changes not only in what we do but in how we think." First published in 1984, The Second Self is still essential reading as a primer in the psychology of computation. This twentieth anniversary edition allows us to reconsider two decades of computer culture--to (re)experience what was and is most novel in our new media culture and to view our own contemporary relationship with technology with fresh eyes. Turkle frames this classic work with a new introduction, a new epilogue, and extensive notes added to the original text. Turkle talks to children, college students, engineers, AI scientists, hackers, and personal computer owners--people confronting machines that seem to think and at the same time suggest a new way for us to think--about human thought, emotion, memory, and understanding. Her interviews reveal that we experience computers as being on the border between inanimate and animate, as both an extension of the self and part of the external world. Their special place betwixt and between traditional categories is part of what makes them compelling and evocative. (In the introduction to this edition, Turkle quotes a PDA user as saying, "When my Palm crashed, it was like a death. I thought I had lost my mind.") Why we think of the workings of a machine in psychological terms--how this happens, and what it means for all of us--is the ever more timely subject of The Second Self.
Chapter
In this chapter, I describe how supervised learning algorithms can be used to build new digital musical instruments. Rather than merely serving as methods for inferring mathematical relationships from data, I show how these algorithms can be understood as valuable design tools that support embodied, real-time, creative practices. Through this discussion, I argue that the relationship between instrument builders and instrument creation tools warrants closer consideration: the affordances of a creation tool shape the musical potential of the instruments that are built, as well as the experiences and even the creative aims of the human builder. Understanding creation tools as “instruments” themselves invites us to examine them from perspectives informed by past work on performer-instrument interactions.
Article
This paper explores the claim that live coding is a ‘real-time’ improvisatory activity by examining the difference between the temporal frames used by live coding and by traditional instrumental improvisation for the creation of novel musical expression. It posits that because live coding requires less complex motor skills than instrumental improvisation, it may be less susceptible to certain mechanical modes of musical expression that inhibit musical novelty. This hypothesis is developed to include the concepts of goal states, models of memory, and cognitive load, as a means of mapping this territory and of providing an understanding of the various perceptual domains with which a coder engages during a live extemporised performance. This work engages in a comparative discourse relating live coding to instrumental improvisation, as a point of departure for understanding cognitive functioning in this rapidly developing performance paradigm.
Article
This article offers a somewhat speculative consideration of opportunities in sonic improvisation using computers, since the field is moving rapidly. The literature mentioned and the emphases discussed are those of the period since roughly 2001. The article summarizes some core features and perennial issues involved in improvisation. Improvisation can involve computers either as passive mediators or by means of more extensive "active" generation or processing of data in the computer's central processing unit. The topic is considered primarily from the perspectives of sound creators and of listeners. Furthermore, the article considers specific roles of computers in improvisation. Finally, it concludes with a discussion of their future, even quite long term, and their improvising potentials.