Article

Computational systems for music improvisation

Authors: Gifford, Knotts, McCormack, Kalonaris, Yee-King and d'Inverno

Abstract

Computational music systems that afford improvised creative interaction in real time are often designed for a specific improviser and performance style. As such, the field is diverse, fragmented and lacks a coherent framework. Through analysis of examples in the field, we identify key areas of concern in the design of new systems, which we use as categories in the construction of a taxonomy. From our broad overview of the field, we select significant examples to analyse in greater depth. This analysis serves to derive principles that may help designers scaffold their work on existing innovation. We explore successful evaluation techniques from other fields and describe how they may be applied to iterative design processes for improvisational systems. We hope that by developing a more coherent design and evaluation process, we can support the next generation of improvisational music systems.


... According to Robert Rowe, interactive computer music systems are "those whose behavior changes in response to musical input" (Rowe, 1992). Rowe's seminal work provides further classification of such systems, built on the combination of three dimensions: (i) drive, a binary classification into score-driven or performance-driven; (ii) response method, a ternary classification into transformative, generative, or sequenced; and (iii) paradigm, a continuous spectrum from "instrument" to "player" (Gifford et al., 2018). ...
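To make Rowe's scheme concrete, the sketch below (illustrative only, not from Rowe or Gifford et al.) encodes the three dimensions as a small Python data structure; the example classification at the end is an assumption about how a Voyager-like system might be placed.
```python
from dataclasses import dataclass
from enum import Enum

class Drive(Enum):
    SCORE_DRIVEN = "score-driven"
    PERFORMANCE_DRIVEN = "performance-driven"

class ResponseMethod(Enum):
    TRANSFORMATIVE = "transformative"
    GENERATIVE = "generative"
    SEQUENCED = "sequenced"

@dataclass
class RoweClassification:
    drive: Drive
    response: ResponseMethod
    # Paradigm is a continuum: 0.0 = pure "instrument", 1.0 = pure "player".
    paradigm: float

# Example (assumed): a performance-driven, generative system leaning towards "player".
voyager_like = RoweClassification(Drive.PERFORMANCE_DRIVEN,
                                  ResponseMethod.GENERATIVE,
                                  paradigm=0.9)
print(voyager_like)
```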
... Working at Bell Laboratories under the sponsorship of aforementioned Max Mathews, Laurie Spiegel utilized the advantages of MIDI to program the Music Mouse-one of the first interactive music systems for general use-in 1986. Best described as a system to control a musical automaton (Gifford et al., 2018), the program featured embedded knowledge of scales and chords, including "all of Bach's favorite manipulations-retrograde, inversion, augmentation, diminution, transposition" (Cope, 1991). Activated notes were harmonized and stylistically transformed through the selection of preferred modes from a computer window. ...
... Voyager combined stochastic selection methods and musical constraints to create an interactive dialog between musician and machine (Gifford et al., 2018). The program integrated Lewis' framework of African-American cultural practice and embodied the aesthetics of multidominance, which runs counter to the Western music philosophy of avoiding "too many notes" (Lewis, 2000). ...
Thesis
Full-text available
A mixed-initiative user interface is one where both human and computer contribute proactively to a process. A mixed-initiative creative interface is the same principle applied in the domain of computational creativity support, such as in digital production of music or visual arts. The title “Mixed-Initiative Music Making” therefore implies a kind of music making that puts human and computer in a tight interactive loop, and where each contributes to modifying the output of the other. Improvisational collective music making is often referred to as jamming. This thesis focuses on jamming-oriented approaches to music making, which take advantage of the emergent novelty created by group dynamics. The research question is: How can a mixed-initiative interactive music system aid human musicians in the initial ideation stage of music making? Starting from a vantage point of dynamical systems theory, I have addressed this question by adopting a Research through Design approach within a methodological framework of triangulation between theory, observation, and design. I have maintained a focus on the activity of collective music making through four studies over a period of two years, where the gradual development of a mixed-initiative interactive music system has been informed by findings from these studies. The first study was a focus group with musicians experienced in collective music making, where the goal was to establish commonalities in musical interaction and idea development with a focus on viable conceptual frameworks for subsequent studies. The second study was a case study of two improvising musicians engaged in an improvised session. They were separated into two rooms, and could only communicate instrumentally or through preset commands on a computer screen. The session was analyzed in terms of how the musicians dynamically converged and diverged, and thus created musical progression. In the third study, several musicians were invited to jam with a prototype of an interactive music system. Unbeknownst to them, they had been recruited to a Wizard of Oz study—behind the scenes was a human keyboard player pretending to be a computational agent. The purpose of this arrangement was to obtain empirical data about how musicians experience co-creativity with a perceived computational agent before the implementation of the computational agent had begun in earnest. In the final study, two different implementations of a mixed-initiative interactive music system were developed for a comparative user study, where the tradeoff between user control and system autonomy was a central premise. Combined, the studies show that a mixed-initiative interactive music system offers musicians freedom from judgement and freedom to explore their own creativity in relation to an unknown agency. Social factors make these kinds of freedom difficult to attain with other musicians. Hence, playing with interactive music systems can lead to different kinds of musical interaction than can be achieved between people. An acceptance of machine aesthetics may lead to surprising creative results. Repeated exposure to mixed-initiative interactive music systems could help cultivate attitudes that are valuable for collective music making in general, such as maintaining a process-oriented approach and accepting the loss of idea ownership.
... To contextualize further, in this article we focus on a type of cocreative musical agent with what Gifford et al. (2018) describe as having a perceived degree of creative agency, and which displays emergent, complex dynamics appearing as a capacity to improvise together with humans in autonomous ways. ...
... Arguably, systems with creative agency and coimprovising potential began appearing in the late 1980s. Many of these are identified and categorized by Tatar and Pasquier (2018) and by Gifford et al. (2018), including several pioneering systems such as Oscar (Beyls 1988), Cypher (Rowe 1992), GenJam (Biles 1994), BoB (Thom 2000), The Continuator (Pachet 2003), and OMax (Assayag et al. 2006), to name only a few. Of particular relevance for Co-Creative Spaces is trombonist George Lewis's (2000) improvisation system Voyager, which he developed towards the end of the 1980s. ...
Article
With the latest developments in AI it is becoming increasingly common to view machines as cocreators. In this article, we follow four musicians in the project Co-Creative Spaces through a six-month long collaborative process, in which they created new music through improvising with each other and—subsequently—with computer-based imitations of themselves. These musical agents were trained through machine learning to generate output in the style of the musicians and were capable of both following what they “heard” and initiating new directions in the interaction, leading to the question “What happens to musical cocreation when AI is thus included in the creative cycle?” The musicians involved in Co-Creative Spaces are from Norway and Kenya—two countries with fundamentally different musical traditions. This leads to a second question: “How is the collaboration affected by possible cultural biases inherent in the technology and in the musicians themselves?” These questions were examined as part of two five-day workshops—one at the beginning and one at the end of the project period—before two final concerts. The musicians engaged in improvisation sessions and recorded ensuing discussions. For each workshop day, the musicians also had conversations in focus groups moderated by a fifth project member, who, together with one of the musicians, was also responsible for the development of the software powering the musical agents. The analysis of the data from the workshops paints a complex picture of what it is like being at the intersection between different technological, musical, and cultural paradigms. The machine becomes a cocreator only when humans permit themselves to attribute creative agency to it.
... Examples of implementations employing 'feature extraction' include many of the systems surveyed in Gifford et al. (2018) as well as the system documented in Mogensen (2020). I use the idea of 'intertextual network' in the sense of Klein (2005). ...
... Gifford et al. (2018), pp. 19-20. ...
Article
Full-text available
I investigate the intersection of the concepts ‘creativity’ and ‘computation’ in the context of improvised music. While these concepts are commonly thought of as opposites, I argue that they can be intimately interlinked when humans and computational systems contribute to improvised music performance. I take human creativity and computational creativity to be categorically different. However, computational creativity in improvised music may be grounded in a ‘knowing how’ to improvise computationally and may contribute to the distributed creativity of a human-machine performance system. The semantics of humans and computational systems are of different categories and their respective musical ‘purposefulness’ are also categorically different. However, these differences allow interaction; and when engaged in group improvisation both humans and computational systems can be engaged in contributing to a co- creative improvised music performance.
... In contemporary music making the impact of Artificial Intelligence (AI) is felt across creative practice, from composition [17,29], interpretation [10,52], improvisation [21,40,43], to accompaniment [34], and across the Music Industries from creation and production [26,33], protection [12,24], distribution [1], to consumption [39]. The breadth of reach of AI systems in music making raises pressing questions about how AI, especially Generative AI (GenAI) systems, impacts our interaction with sound and music, how we play together, and how GenAI might foster or hinder creativity. ...
Preprint
Full-text available
The impact of Artificial Intelligence is felt on every stage of contemporary musicking and is shaping our interaction with sound. Deep learning Generative AI (GenAI) systems for high-quality music generation rely on extremely large musical datasets for training. As a result, AI models tend to be trained on dominant mainstream musical genres, such as Western classical music, where large datasets are more readily available. In addition, the reliance on extremely powerful computing resources for deep learning creates barriers to use and negatively impacts our environment. This paper reports on contemporary concerns and interests of musicians, researchers, and music industry stakeholders in the responsible use of GenAI models for music and audio. Through analysis of focus group discussions and exemplar case studies of the use of GenAI in music making at a hybrid workshop of 148 participants, we offer insights into current discourses about the use of GenAI beyond dominant musical styles and suggest ways forward to increase creative agency in music making beyond the mainstream. Our findings highlight the value of small datasets of music for GenAI, the suitability of AI models for working with small datasets of music, and pose questions around what constitutes a 'small' dataset of music.
... A question arises as to whether we consider the code part of the system, the coder's reasoning and performative processes, or an autonomous entity. Thus, it is related to how we ascribe agency to the code, and this opens a wider discussion on aesthetic appreciation [43], which goes beyond the scope of the article. For instance, Tidal has an inherent tempo clock which can be seen as a clock-based system action. ...
Conference Paper
Full-text available
Music-making with live coding is a challenging endeavour during a performance. Contrary to traditional music performances, a live coder can be uncertain about how the next code evaluation will sound. Interactive artificial intelligence (AI) offers numerous techniques for generating future outcomes. These can be implemented on both the level of the liveness of the code and also on the generated musical sounds. I first examine the structural characteristics of various live coding systems that use agent-based technologies and present a high-level diagrammatic representation. I sketch simple block diagrams that enable me to construct a conceptual framework for designing agent-based systems. My aim is to provide a practical framework to be used by practitioners. This study has two parts: i) a high-level diagrammatic representation informed by previous studies, where I analyze patterns of interaction in eight live coding systems, and ii) a conceptual framework for designing agent-based performance systems by combining both liveness and machine listening. I identify diverse patterns of interactivities between the written code and the generated music, and I draw attention to future perspectives. One code snippet for SuperCollider is provided and mapped to the conceptual framework. The vision of the study is to raise awareness on interactive AI systems within the community and potentially help newcomers navigating in the vast potential of live coding.
... Regarding the current state of research, Tatar and Pasquier (2019) analyzed 78 musical agents using 13 design criteria, but their work was directed towards autonomous agents and not limited to real-time contexts. Gifford et al. (2018) analyzed 23 real-time systems and suggested a taxonomy focused on computational co-improvisation with human performers (Gifford et al. 2018, 33). This recent work is recommended for readers who are not familiar with the general field of IMPS. ...
Article
Full-text available
This article attempts to define and typologize the main principles for the design of Intelligent Music Performance Systems (IMPS). It presents a three-dimensional framework based on studies of proxemics combined with findings from AI design research. Each of these dimensions – embodiment, participation, and autonomy – is presented together with existing taxonomies, then integrated into an analytical framework. This framework informs the discussion of nine historical cases of IMPS from the 1950s to the present, to gain a refined understanding of their interactive design. The discussion leads back to three main tendencies in IMPS design – instruments, systems, and agents – and the article concludes by combining the proposed framework with ideas from Speculative Design Research.
... For a comprehensive overview of the deep learning models for music generation, we refer to [11], [12], [60], [73]. References [18], [46], [49], [61], [95], [107], [139] provide an overview of music generation systems and the algorithmic composition of music. Additionally, the authors in [10] survey the application of robotics in music generation tasks. ...
Article
Full-text available
Music generation using deep learning has received considerable attention in recent years. Researchers have developed various generative models capable of cloning musical conventions, comprehending the musical corpora, and generating new samples based on the learning outcome. Although the samples generated by these models are persuasive, they often lack musical structure and creativity. Moreover, a vanilla end-to-end approach, which deals with all levels of music representation at once, does not offer human-level control and interaction during the learning process, leading to constrained results. Indeed, music creation is a recurrent process that follows some principles by a musician, where various musical features are reused or adapted. On the other hand, a musical piece adheres to a musical style, breaking down into precise concepts of timbre style, performance style, composition style, and the coherency between these aspects. Here, we study and analyze the current advances in music generation using deep learning models through different criteria. We discuss the shortcomings and limitations of these models regarding interactivity and adaptability. Finally, we draw the potential future research direction addressing multi-agent systems and reinforcement learning algorithms to alleviate these shortcomings and limitations.
... As an alternative to fully automated music generation, which transfers the whole creative task to the machine, co-creativity implies that an algorithm is rather used as a tool by a composer (Esling and Devis, 2020) -and this requires some steerability of the AI tools (Louie et al., 2020). As an example, co-improvisation systems (Assayag et al., 2010;Gifford et al., 2018) such as ImproteK (Nika et al., 2017) are usually based on a real-time interaction between the human musicians and the machine. Each performer (human or machine) listens to the music produced by the other and responds appropriately, bringing on the musical discourse in a novel way each time. ...
Article
Full-text available
Musical co-creativity aims at making humans and computers collaborate to compose music. As an MIR team in computational musicology, we experimented with co-creativity when writing our entry to the “AI Song Contest 2020”. Artificial intelligence was used to generate the song’s structure, harmony, lyrics, and hook melody independently and as a basis for human composition. It was a challenge from both the creative and the technical point of view: in a very short time-frame, the team had to adapt its own simple models, or experiment with existing ones, to a related yet still unfamiliar task, music generation through AI. The song we propose is called “I Keep Counting”. We openly detail the process of songwriting, arrangement, and production. This experience raised many questions on the relationship between creativity and machine, both in music analysis and generation, and on the role AI could play to assist a composer in their work. We experimented with 'AI as automation', mechanizing some parts of the composition, and especially 'AI as suggestion' to foster the composer’s creativity, thanks to surprising lyrics, uncommon successions of sections and unexpected chord progressions. Working with this material was thus a stimulus for human creativity.
... e.g. Miranda, Wanderley, & Kirk, 2006) to virtual autonomous players with a higher degree of creative agency (Gifford et al., 2018). ...
Conference Paper
Full-text available
This paper presents an ongoing interdisciplinary research project that deals with free improvisation and human-machine interaction, involving a digital player piano and other musical instruments. Various technical concepts are developed by student participants in the project and continuously evaluated in artistic performances. Our goal is to explore methods for co-creative collaborations with artificial intelligences embodied in the player piano, enabling it to act as an equal improvisation partner for human musicians.
... In particular, further research seems worthwhile to add more pieces to the puzzle of the various interface aspects. In particular, the comprehensive comparisons of existing systems in Tatar and Pasquier (2019) or Gifford et al. (2018) provide a substantial basis for evaluating and extending the concept described here. Furthermore, if one follows the idea of an inseparable unity of human and machine in the creative production of the Musical Cyborgs, it reveals that even without speculation about future universal artificial intelligences, equal cooperation is already possible, if we leave aside the deficit view on our technological partners. ...
Conference Paper
Full-text available
The concept of Musical Cyborgs follows Donna Haraway's "Cyborg Manifesto" to describe a non-binary approach for human-machine collaboration with blurred borders between biological and cybernetic worlds. Interface dimensions of embodiment, instrumentality, authenticity, creativity, learning, and aesthetics therein unfold between intentional and self-organizing autonomy and are discussed with their specific requirements, conditions and consequences.
... The notion that "if jazz tells no story it is simply not good" is widespread among jazz masters (pp. 17-18): "the problem today is that good improvisers are so rare. There are many people who can make sense out of their improvisation, but very few are really saying anything." ...
Research
Full-text available
A highly controversial entrance of Artificial Intelligence (AI) music generators into the world of music composition and performance is currently advancing. Fruitful research from Music Information Retrieval, Neural Networks and Deep Learning, among other areas, is shaping this future. Embodied and non-embodied AI systems have stepped into the world of jazz in order to co-create idiomatic music improvisations. But how musical are these improvisations? This research looks at the resulting melodic improvisations produced by Artificial Intelligence systems such as the OMax, ImproteK and Djazz (OID) AI generators through the lens of the elements of music, and it does so from a performer’s point of view. The analysis is based mainly on the evaluation of already published results as well as on a case study I carry out, which includes performance, listening and evaluation of generated improvisations of OMax. The research also reflects upon philosophical issues and the cognitive foundations of emotion and meaning, and provides a comprehensive analysis of the functionality of OID.
... Among the crucial goals in pursuing automation of creativity and intelligent behavior is music composition, the idea of which dates back centuries [1], long before the development of modern computers composed of electronic components, such as vacuum tubes and transistors. The historical trajectory of the application of computer methodologies to music composition or generation can be traced via reviews and surveys in various related domains, such as evolutionary computation [2][3][4], computational intelligence and creativity [5][6][7], deep learning [8], and artificial intelligence [9][10][11][12]. Most of the studies in existence focus on the generation of musical sequences that consist of notes with little or no involvement of composers or music artists. ...
Article
Full-text available
Creative behavior is one of the most fascinating areas in intelligence. The development of specific styles is the most characteristic feature of creative behavior. All important creators, such as Picasso and Beethoven, have their own distinctive styles that even non-professional art lovers can easily recognize. Hence, in the present work, attempting to achieve cantus firmus composition and style development as well as inspired by the behavior of natural ants and the mechanism of ant colony optimization (ACO), this paper firstly proposes a meta-framework, called ants on multiple graphs (AntsOMG), mainly for roughly modeling creation activities and then presents an implementation derived from AntsOMG for composing cantus firmi, one of the essential genres in music. Although the mechanism in ACO is adopted for simulating ant behavior, AntsOMG is not designed as an optimization framework. Implementations can be built upon AntsOMG in order to automate creation behavior and realize autonomous development on different subjects in various disciplines. In particular, an implementation for composing cantus firmi is shown in this paper as a demonstration. Ants walk on multiple graphs to form certain trails that are composed of the interaction among the graph topology, the cost on edges, and the concentration of pheromone. The resultant graphs with the distribution of pheromone can be interpreted as a representation of cantus firmus style developed autonomously. Our obtained results indicate that the proposal has an intriguing effect, because significantly different styles may be autonomously developed from an identical initial configuration in separate runs, and cantus firmi of a certain style can be created in batch simply by using the corresponding outcome. The contribution of this paper is twofold. First, the presented implementation is immediately applicable to the creation of cantus firmi and possibly other music genres with slight modifications. Second, AntsOMG, as a meta-framework, may be employed for other kinds of autonomous development with appropriate implementations.
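The core mechanism described above can be illustrated with a minimal, single-graph sketch in Python. This is not the AntsOMG implementation: the note set, cost function, and parameter values are invented for illustration, but the loop shows how pheromone-biased walks can gradually settle into a reusable "style".
```python
import random

# Minimal sketch (not AntsOMG itself): ants walk a single graph of pitch nodes;
# edge choice is biased by pheromone and inversely by edge cost, and trails are
# reinforced so a "style" (a pheromone distribution) emerges over many walks.
NOTES = ["C", "D", "E", "F", "G", "A"]
cost = {(a, b): 1.0 + abs(i - j)              # larger melodic leaps cost more (assumed)
        for i, a in enumerate(NOTES) for j, b in enumerate(NOTES) if a != b}
pheromone = {edge: 1.0 for edge in cost}

def walk(start, length, alpha=1.0, beta=2.0):
    """Generate a note path by sampling edges weighted by pheromone and cost."""
    path, current = [start], start
    for _ in range(length - 1):
        options = [(b, (pheromone[(current, b)] ** alpha) /
                        (cost[(current, b)] ** beta))
                   for (a, b) in cost if a == current]
        nodes, weights = zip(*options)
        current = random.choices(nodes, weights=weights)[0]
        path.append(current)
    return path

def reinforce(path, deposit=0.5, evaporation=0.1):
    for edge in pheromone:
        pheromone[edge] *= (1.0 - evaporation)       # global evaporation
    for a, b in zip(path, path[1:]):
        pheromone[(a, b)] += deposit                  # deposit along the trail

for _ in range(200):                                  # let a "style" emerge
    reinforce(walk("C", 8))
print(walk("C", 8))   # a cantus-firmus-like line biased by the learned trails
```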
... However, real-time music performance systems have existed for nearly half a century (Eigenfeldt 2007). Gifford, Knotts, McCormack, Kalonaris, Yee-King and d'Inverno (2018) undertook a survey of computational systems for music improvisation, and developed a taxonomy through a detailed examination of 23 indicative systems, covering all major approaches. Their key findings included the idea that system complexity had little influence on the perceived creative agency of the artificial improviser, and that the conceptualisation of the system as a creative partner dates back more than 30 years. ...
Article
Machines incorporating techniques from artificial intelligence and machine learning can work with human users on a moment-to-moment, real-time basis to generate creative outcomes, performances and artefacts. We define such systems collaborative, creative AI systems, and in this article, consider the theoretical and practical considerations needed for their design so as to support improvisation, performance and co-creation through real-time, sustained, moment-to-moment interaction. We begin by providing an overview of creative AI systems, examining strengths, opportunities and criticisms in order to draw out the key considerations when designing AI for human creative collaboration. We argue that the artistic goals and creative process should be first and foremost in any design. We then draw from a range of research that looks at human collaboration and teamwork, to examine features that support trust, cooperation, shared awareness and a shared information space. We highlight the importance of understanding the scope and perception of two-way communication between human and machine agents in order to support reflection on conflict, error, evaluation and flow. We conclude with a summary of the range of design challenges for building such systems in provoking, challenging and enhancing human creative activity through their creative agency.
... In large part, a system is a by-product of its designer, implicitly embedding her aesthetic/artistic objectives and preferences. To a large extent, the system's architecture (Gifford et al. 2018) will determine its creative potential. For example, a corpus-based system operating primarily in combinational and exploratory methods will exhibit mostly p-creativity (Boden 1991). ...
Preprint
Full-text available
Contemporary musical aesthetics, as a field in the humanities, does not typically argue for the existence of aesthetic universals. However, in the field of computational creativity, universals are actively sought, with a view to codification and implementation. This article critiques some statistical and information-based methods that have been used in computational creativity, in particular their application in assessing aesthetic value of musical works, rather than the more modest claim of stylistic characterization. Standard applications of Zipf's Law and Information Rate are argued to be inadequate as computational measures of aesthetic value in musical styles where noise, repetition or stasis are valued features. We describe three of these musical expressions, each with its own aesthetic criteria, and examine several exemplary works for each. Lacking, to date, is a computational framework able to account for socio-political and historical implications of creative processes. Beyond quantitative evaluations of artistic phenomena, we argue for deeper intersections between computer science, philosophy, history and psychology of art.
... Where DMI design tends to focus on properties such as controllability, expressiveness, diversity and the capacity to demonstrate virtuosity [20], and IMS design tends towards the implementation of autonomous computational music agents for collaborative human-computer creativity, this algoskin adopted a hybrid metaphor which some of the authors [21] have previously described as an 'improvisational interface', in which generative music processes are used to elaborate on human input in a stylistically appropriate manner, potentially scaffolding human creativity in circumstances where complete human control would be difficult (see [22] for some examples). ...
Conference Paper
Full-text available
Musebots are autonomous musical agents that interact with other musebots to produce music. Inaugurated in 2015, musebots are now an established practice in the field of musical metacreation, which aims to automate aspects of creative practice. Originally musebot development focused on software-only ensembles of musical agents, coded by a community of developers. More recent experiments have explored humans interfacing with musebot ensembles in various ways: including through electronic interfaces in which parametric control of high-level musebot parameters are used; message-based interfaces which allow human users to communicate with musebots in their own language; and interfaces through which musebots have jammed with human musicians. Here we report on the recent developments of human interaction with musebot ensembles and reflect on some of the implications of these developments for the design of metacreative music systems.
Article
Machine learning (ML) deals with algorithms able to learn from data, with the primary aim of finding optimum solutions to perform tasks autonomously. In recent years there has been development in integrating ML algorithms with live coding practices, raising questions about what to optimize or automate, the agency of the algorithms, and in which parts of the ML processes one might intervene midperformance. Live coding performance practices typically involve conversational interaction with algorithmic processes in real time. In analyzing systems integrating live coding and ML, we consider the musical and performative implications of the “moment of intervention” in the ML model and workflow, and the channels for real-time intervention. We propose a framework for analysis, through which we reflect on the domain-specific algorithms and practices being developed that combine these two practices.
Article
In this article we introduce the coadaptive audiovisual instrument CAVI. This instrument uses deep learning to generate control signals based on muscle and motion data of a performer's actions. The generated signals control time-based live sound-processing modules. How does a performer perceive such an instrument? Does it feel like a machine learning-based musical tool? Or is it an actor with the potential to become a musical partner? We report on an evaluation of CAVI after it had been used in two public performances. The evaluation is based on interviews with the performers, audience questionnaires, and the creator's self-analysis. Our findings suggest that the perception of CAVI as a tool or actor correlates with the performer's sense of agency. The perceived agency changes throughout a performance based on several factors, including perceived musical coordination, the balance between surprise and familiarity, a “common sense,” and the physical characteristics of the performance setting.
Article
The present work aims to improve students’ interest in music teaching and promote modern teaching. A distributed application system of artificial intelligence gesture interactive robot is designed through deep learning technology and applied to music perception education. First, the user’s gesture instruction data is collected through the double channel convolution neural network (DCCNN). It uses the double-size convolution kernel to extract feature information in the image and collect the video frame’s gesture instruction. Secondly, a two-stream convolutional neural network (two-stream CNN) recognizes the collected gesture instruction data. The spatial and temporal information is extracted from RGB color mode (RGB) images and optical flow images and input into the two-stream CNN to fuse the prediction results of each network as the final detection result. Then, the distributed system used by the interactive robot is introduced. This structure can improve the stability of the interactive systems and reduce the requirements for local hardware performance. Finally, experiments are conducted to test the gesture command acquisition and recognition network, and the performance of the gesture interactive robot in practice. The results indicate that combining convolution kernels of the two different sizes can increase the recognition accuracy of DCCNN to 98% and effectively collect gesture instruction data. The gesture recognition accuracy of two-stream CNN after training reaches 90%, higher than the mainstream dynamic gesture recognition algorithm trained with the same data set. Finally, the recognition test of gesture instructions is carried out on the gesture interactive robot reported here. The results show that the recognition accuracy of the gesture interactive robots is more than 90%, meeting the routine interaction needs. Therefore, the interactive gesture robot has good reliability and stability and is applicable to music perception teaching. The research reported here has guiding significance for establishing music teaching with multiple perception modes.
Article
Full-text available
This article explores the notion of human and computational creativity as well as core challenges for computational musical creativity. It also examines the philosophical dilemma of computational creativity as being suspended between algorithmic determinism and random sampling, and suggests a resolution from a perspective that conceives of “creativity” as an essentially functional concept dependent on a problem space, a frame of reference (e.g. a standard strategy, a gatekeeper, another mind, or a community), and relevance. Second, this article proposes four challenges for artificial musical creativity and musical AI: (1) the 'cognitive challenge' that musical creativity requires a model of music cognition, (2) the 'challenge of the external world', that many cases of musical creativity require references to the external world, (3) the 'embodiment challenge', that many cases of musical creativity require a model of the human body, the instrument(s) and the performative setting in various ways, (4) the 'challenge of creativity at the meta-level', that musical creativity across the board requires creativity at the meta-level. Based on these challenges it is argued that the general capacity of music and its creation fundamentally involves general (artificial) intelligence and that therefore musical creativity at large is fundamentally an AI-complete problem.
Article
This article has been retracted: please see Elsevier Policy on Article Withdrawal (https://www.elsevier.com/about/our-business/policies/article-withdrawal). This article has been retracted at the request of the Editors-in-Chief. After a thorough investigation, the Editors have concluded that the acceptance of this article was partly based upon the positive advice of one illegitimate reviewer report. The report was submitted from an email account which was provided to the journal as a suggested reviewer during the submission of the article. Although purportedly a real reviewer account, the Editors have concluded that this was not of an appropriate, independent reviewer. This manipulation of the peer-review process represents a clear violation of the fundamentals of peer review, our publishing policies, and publishing ethics standards. Apologies are offered to the reviewer whose identity was assumed and to the readers of the journal that this deception was not detected during the submission process.
Article
Full-text available
This paper describes the background and motivations behind the author’s electroacoustic game-pieces Pathfinder (2016) and ICARUS (2019), designed specifically for his performance practice with an augmented drum kit. The use of game structures in music is outlined, while musical expression in the context of commercial musical games using conventional game controllers is discussed. Notions such as agility, agency and authorship in music composition and improvisation are in parallel with game design and play, where players are asked to develop skills through affordances within a digital game-space. It is argued that the recent democratisation of game engines opens a wide range of expressive opportunities for real-time game-based improvisation and performance. Some of the design decisions and performance strategies for the two instrument-controlled games are presented to illustrate the discussion; this is done in terms of game design, physical control through the augmented instrument, live electronics and overall artistic goals of the pieces. Finally, future directions for instrument-controlled electroacoustic game-pieces are suggested.
Article
Full-text available
Electronic systems designed to improvise with a live instrumental performer are a constant mediation of musical language and artificial decision-making. Often these systems are designed to elicit a reaction in a very broad way, relying on segmenting and playing back audio material according to a fixed or mobile set of rules or analysis. As a result, such systems can produce an outcome that sounds generic across different improvisers, or restrict meaningful electroacoustic improvisation to those performers with a matching capacity for designing improvisatory electroacoustic processing. This article documents the development of an improvisatory electroacoustic instrument for pianist Maria Donohue as a collaborative process for music-making. The Donohue+ program is a bespoke electroacoustic improvisatory system designed to augment the performance capabilities of Maria, enabling her to achieve new possibilities in live performance. Through the process of development, Maria’s performative style, within the broader context of free improvisation, was analysed and used to design an interactive electronic system. The end result of this process is a meaningful augmentation of the piano in accordance with Maria’s creative practice, differing significantly from other improvising electroacoustic instruments she has previously experimented with. Through the process of development, Donohue+ identifies a practice for instrument design that engages not only with a performer’s musical materials but also with a broader free improvisation aesthetic.
Article
A rule-based method and a note-by-note generation technique are proposed to develop a model of an interactive melody generator. The task of the model is to assist the user in creating a melody in the form of a note sequence. A set of candidate notes is generated by the system as a recommendation from which the user arranges a note sequence. The rules behind the recommendations are constructed by identifying melodic features using a sequential mining algorithm, Apriori based on Functions in a Sequence (AFiS). The experiment is conducted by developing Gamelan Composer, an interactive melody generator for gamelan music, a traditional music from Java, Indonesia. Two sets of rules are defined based on note-pruning techniques, a 2-itemset prune and a tier prune, and implemented in two different systems. The evaluation uses an expert test to judge the note sequences generated by users in the experimental groups. The results show that the proposed model can assist users in creating note sequences that have the characteristics of a gamelan melody, and that the tier-prune rules generate note sequences with gamelan melody characteristics better than the 2-itemset-prune rules.
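The note-by-note recommendation loop can be sketched as follows. The rules below are invented placeholders, not the AFiS-mined rules from the gamelan corpus; the sketch only illustrates how a context of recent notes maps to a candidate set from which a user would choose.
```python
# Minimal sketch of note-by-note recommendation from mined sequential rules.
# The rules below are invented for illustration; the described system mines
# them from a gamelan corpus with its Apriori-based AFiS algorithm.
RULES = {               # context (last two notes) -> recommended candidates
    ("1", "2"): ["3", "1"],
    ("2", "3"): ["5", "2"],
    ("3", "5"): ["6", "3"],
}
DEFAULT = ["1", "2", "3", "5", "6"]    # scale degrees as strings (assumed)

def recommend(sequence):
    """Return candidate next notes for the melody built so far."""
    context = tuple(sequence[-2:])
    return RULES.get(context, DEFAULT)

melody = ["1", "2"]
for _ in range(6):
    candidates = recommend(melody)
    melody.append(candidates[0])       # a user would choose interactively
print(melody)
```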
Article
Full-text available
This paper discusses improvisatory musical interactions between a musician and a machine. The focus is on duet performances, in which a human pianist and the Controlling Interactive Music (CIM) software system both perform on mechanized pianos. It also discusses improvisatory behaviours, using reflexive strategies in machines, and describes interfaces for musical communication and control between human and machine performers. Results are derived from trials with six expert improvising musicians using CIM. Analysis reveals that creative partnerships are fostered by several factors. The reflexive generative system provides aesthetic cohesion by ensuring that generated material has a direct relationship to that played by the musician. The interaction design relies on musical communication through performance as the primary mechanism for feedback and control. It can be shown that this approach to musical human-machine improvisation allows technical concerns to fall away from the musician's awareness and attention to shift to the musical dialogue within the duet.
Article
Full-text available
Digital advances have transformed the face of automatic music generation since its beginnings at the dawn of computing. Despite the many breakthroughs, issues such as the musical tasks targeted by different machines and the degree to which they succeed remain open questions. We present a functional taxonomy for music generation systems with reference to existing systems. The taxonomy organizes systems according to the purposes for which they were designed. It also reveals the inter-relatedness amongst the systems. This design-centered approach contrasts with predominant methods-based surveys and facilitates the identification of grand challenges to set the stage for new breakthroughs.
Conference Paper
Full-text available
A musebot is defined as a piece of software that autonomously creates music collaboratively with other musebots. The musebot project is concerned with putting together musebot ensembles, consisting of community-created musebots, and setting them up as ongoing autonomous musical installations. The specification was released early in 2015, and several developers have contributed musebots to ensembles that have been presented in the USA, Canada, and Italy. To date, there are over sixty publicly available musebots. Furthermore, the author has used the musebot protocol in several personal MuMe projects, as it has provided a flexible method for generative systems in performance and installation. This paper will review the past year, and how musebots have been used both in their original community-oriented installations and in the author's own works.
Conference Paper
Full-text available
The maturation process of the NIME field has brought a growing interest in teaching the design and implementation of Digital Music Instruments (DMIs) as well as in finding objective evaluation methods to assess the suitability of these outcomes. In this paper we propose a methodology for teaching NIME design and a set of tools meant to inform the design process. This approach has been applied in a master course focused on the exploration of expressiveness and on the role of the mapping component in the NIME creation chain, through a hands-on and self-reflective approach based on a restrictive setup consisting of smartphones and the Pd programming language. Working Groups were formed, and a 2-step DMI design process was applied, including 2 performance stages. The evaluation tools assessed both System and Performance aspects of each project, according to Listeners' impressions after each performance. Listeners' previous music knowledge was also considered. Through this methodology, students with different backgrounds were able to engage effectively in the NIME design process, developing working DMI prototypes according to the stated requirements; the assessment tools proved to be consistent for evaluating NIME systems and performances, and informing the design process with the outcome of the evaluation showed traceable progress in the students' outcomes.
Article
Full-text available
We propose a system, the Continuator, that bridges the gap between two classes of traditionally incompatible musical systems: (1) interactive musical systems, limited in their ability to generate stylistically consistent material, and (2) music imitation systems, which are fundamentally not interactive. Our purpose is to allow musicians to extend their technical ability with stylistically consistent, automatically learnt material. This goal requires the ability for the system to build operational representations of musical styles in a real-time context. Our approach is based on a Markov model of musical styles augmented to account for musical issues such as management of rhythm, beat, harmony, and imprecision. The resulting system is able to learn and generate music in any style, either in standalone mode, as continuations of a musician’s input, or as an interactive improvisation backup. Lastly, the very design of the system makes possible new modes of musical collaborative playing. We describe the architecture, implementation issues and experiments conducted with the system in several real-world contexts.
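A greatly simplified sketch of the learn-then-continue loop is given below. The Continuator itself uses a variable-order Markov model with explicit handling of rhythm, beat, harmony, and imprecision; this first-order Python version is only meant to illustrate the idea of learning transitions from the musician's input and generating a stylistic continuation.
```python
import random
from collections import defaultdict

# Simplified first-order sketch of the learn-then-continue idea: learn
# note-to-note transitions from what the musician plays, then continue from
# the last note. (The real system is variable-order and handles rhythm,
# beat, harmony and imprecision.)
transitions = defaultdict(list)

def learn(phrase):
    """Record each observed successor of each note."""
    for a, b in zip(phrase, phrase[1:]):
        transitions[a].append(b)

def continue_from(note, length=8):
    """Sample a continuation from the learnt transition lists."""
    out = []
    for _ in range(length):
        if not transitions[note]:
            break
        note = random.choice(transitions[note])   # pick a learnt successor
        out.append(note)
    return out

learn([60, 62, 64, 65, 67, 65, 64, 62, 60])        # MIDI pitches played live
print(continue_from(60))                            # stylistic continuation
```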
Article
Full-text available
Computational creativity is a flourishing research area, with a variety of creative systems being produced and developed. Creativity evaluation has not kept pace with system development with an evident lack of systematic evaluation of the creativity of these systems in the literature. This is partially due to difficulties in defining what it means for a computer to be creative; indeed, there is no consensus on this for human creativity, let alone its computational equivalent. This paper proposes a Standardised Procedure for Evaluating Creative Systems (SPECS). SPECS is a three-step process: stating what it means for a particular computational system to be creative, deriving and performing tests based on these statements. To assist this process, the paper offers a collection of key components of creativity, identified empirically from discussions of human and computational creativity. Using this approach, the SPECS methodology is demonstrated through a comparative case study evaluating computational creativity systems that improvise music. An author's postprint (same content, but before it has been put into journal-specific formatting) is available via my institutional repository at https://kar.kent.ac.uk/cgi/users/home?screen=EPrint::View&eprintid=42379
Conference Paper
Full-text available
Loop pedals are real-time samplers that play back audio played previously by a musician. Such pedals are routinely used for music practice or outdoor “busking”. However, loop pedals always play back the same material, which can make performances monotonous and boring both to the musician and the audience, preventing their widespread uptake in professional concerts. In response, we propose a new approach to loop pedals that addresses this issue, which is based on an analytical multi-modal representation of the audio input. Instead of simply playing back pre-recorded audio, our system enables real-time generation of an audio accompaniment reacting to what is currently being performed by the musician. By automatically combining different modes of performance (e.g., bass line, chords, solo) from the musician and the system, solo musicians can perform duets or trios with themselves, without engendering the so-called canned (boringly repetitive and unresponsive) music effect of loop pedals. We describe the technology, based on supervised classification and concatenative synthesis, and then illustrate our approach on solo performances of jazz standards by guitar. We claim this approach opens up new avenues for concert performance.
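The mode-classification idea behind such a reactive looper can be sketched roughly as below. The features, thresholds, and mode pairings are invented for illustration; the actual system uses supervised classification and concatenative synthesis rather than these hand-written rules.
```python
# Rough sketch of the mode-classification idea (features and thresholds are
# invented): classify what the musician is currently playing, then choose a
# complementary mode for the generated accompaniment.
def classify_mode(mean_pitch, notes_per_onset):
    """Very rough heuristic stand-in for the supervised classifier."""
    if notes_per_onset >= 3:
        return "chords"
    if mean_pitch < 52:            # below ~E3: treat as a bass line (assumed)
        return "bass"
    return "solo"

COMPLEMENT = {"solo": "chords", "chords": "bass", "bass": "chords"}

# Example frames: (mean MIDI pitch, average simultaneous notes per onset)
for frame in [(45, 1.0), (64, 1.1), (60, 3.4)]:
    mode = classify_mode(*frame)
    print(f"musician plays {mode:6s} -> system plays back {COMPLEMENT[mode]}")
```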
Article
Full-text available
This paper takes a systemic perspective on interactive signal processing and introduces the author's Audible Eco-Systemic Interface (AESI) project. It starts with a discussion of the paradigm of ‘interaction’ in existing computer music and live electronics approaches, and develops following bio-cybernetic principles such as ‘system/ambience coupling’, ‘noise’, and ‘self-organisation’. Central to the paper is an understanding of ‘interaction’ as a network of interdependencies among system components, and as a means for dynamical behaviour to emerge upon the contact of an autonomous system (e.g. a DSP unit) with the external environment (room or else hosting the performance). The author describes the design philosophy in his current work with the AESI (whose DSP component was implemented as a signal patch in KYMA5.2), touching on compositional implications (not only live electronics situations, but also sound installations).
Article
Full-text available
There is a small but useful body of research concerning the evaluation of musical interfaces with HCI techniques. In this paper, we present a case study in implementing these techniques; we describe a usability experiment which evaluated the Nintendo Wiimote as a musical controller, and reflect on the effectiveness of our choice of HCI methodologies in this context. The study offered some valuable results, but our picture of the Wiimote was incomplete as we lacked data concerning the participants' instantaneous musical experience. Recent trends in HCI are leading researchers to tackle this problem of evaluating user experience; we review some of their work and suggest that with some adaptation it could provide useful new tools and methodologies for computer musicians.
Article
Full-text available
Seeking new forms of expression in computer music, a small number of laptop composers are braving the challenges of coding music on the fly. Not content to submit meekly to the rigid interfaces of performance software like Ableton Live or Reason, they work with programming languages, building their own custom software, tweaking or writing the programs themselves as they perform. Often this activity takes place within some established language for computer music like SuperCollider, but there is no reason to stop errant minds pursuing their innovations in general scripting languages like Perl. This paper presents an introduction to the field of live coding, of real-time scripting during laptop music performance, and the improvisatory power and risks involved. We look at two test cases, the command-line music of slub utilising, amongst a grab-bag of technologies, Perl and REALbasic, and Julian Rohrhuber's Just In Time library for SuperCollider. We try to give a flavour of an exciting but hazardous world at the forefront of live laptop performance.
Article
Full-text available
This paper describes a new generative software system for music composition. A number of state-based, musical agents traverse a user-created graph. The graph consists of nodes (representing events), connected by edges, with the time between events determined by the physical length of the connecting edge. As the agents encounter nodes they generate musical data. Different node types control the selection of output edges, providing sequential, parallel or random output from a given node. The system deftly balances composer control with the facilitation of complex, emergent compositional structures, difficult to achieve using conventional notation software.
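A minimal sketch of the agent-on-graph idea follows. The graph, node types, and traversal rules are illustrative assumptions rather than the described software; in particular, "sequential" is simplified here to always taking the first outgoing edge.
```python
import random

# Minimal sketch of the agent-on-graph idea (names and structure are
# illustrative): nodes are events, edge length gives the time to the next
# event, and a node's type selects which outgoing edges an agent follows.
GRAPH = {
    "A": {"type": "sequential", "edges": [("B", 1.0), ("C", 2.0)]},
    "B": {"type": "random",     "edges": [("A", 0.5), ("C", 1.5)]},
    "C": {"type": "parallel",   "edges": [("A", 1.0), ("B", 1.0)]},
}

def step(agents):
    """Advance every agent one node and emit (time, node) events."""
    events, next_agents = [], []
    for time, node in agents:
        events.append((time, node))                  # node reached -> event
        info = GRAPH[node]
        if info["type"] == "random":
            chosen = [random.choice(info["edges"])]
        elif info["type"] == "parallel":
            chosen = info["edges"]                    # fork: follow all edges
        else:                                         # "sequential", simplified
            chosen = info["edges"][:1]                # to the first edge here
        next_agents += [(time + length, target) for target, length in chosen]
    return events, next_agents

agents = [(0.0, "A")]
for _ in range(4):
    events, agents = step(agents)
    print(events)
```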
Article
Full-text available
We propose a demonstration of The Wekinator, our software system that enables the application of machine-learning-based music information retrieval techniques to real-time musical performance, and which emphasizes a richer human-computer interaction in the design of machine learning systems.
Article
Full-text available
We describe the design and implementation of a tool to help students learn the art of jazz improvisation. The tool integrates elements of database, AI in the form of automatic melody generation, and human interface design. We describe the philosophy of using several coordinated mini-languages to provide user specifications for various aspects of the tool, including melody and chord representation, styles, melody generation, and other musical knowledge. Keywords: music software, improvisation, jazz, mini-language, human-computer interface
Article
Full-text available
Live music-making using interactive systems is not completely amenable to traditional HCI evaluation metrics such as task-completion rates. In this paper we discuss quantitative and qualitative approaches which provide opportunities to evaluate the music-making interaction, accounting for aspects which cannot be directly measured or expressed numerically, yet which may be important for participants. We present case studies in the application of a qualitative method based on Discourse Analysis, and a quantitative method based on the Turing Test. We compare and contrast these methods with each other, and with other evaluation approaches used in the literature, and discuss factors affecting which evaluation methods are appropriate in a given context.
Conference Paper
Full-text available
In recent years we have seen a proliferation of musical tables. Believing that this is not just the result of a tabletop trend, in this paper we first discuss several of the reasons for which live music performance and HCI in general, and musical instruments and tabletop interfaces in particular, can lead to a fertile two-way cross-pollination that can equally benefit both fields. After that, we present the reacTable, a musical instrument based on a tabletop interface that exemplifies several of these potential achievements.
Conference Paper
Full-text available
This paper describes an automated computer improviser which attempts to follow and improvise against the frequencies and timbres found in an incoming audio stream. The improviser is controlled by an ever-changing set of sequences which are generated by analysing the incoming audio stream (which may be a feed from a live musician) for its physical and musical properties such as pitch and amplitude. Control data from these sequences is passed to the synthesis engine where it is used to configure sonic events. These sonic events are generated using sound synthesis algorithms designed by an unsupervised genetic algorithm whose fitness function compares snapshots of the incoming audio to snapshots of the audio output of the evolving synthesizers in the spectral domain, in order to drive the population to match the incoming sounds. The sound-generating performance system and sound-designing evolutionary system operate in real time in parallel to produce an interactive stream of synthesised sound. An overview of related systems is provided, the system itself is described, and some preliminary results are presented.
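The spectral-matching fitness idea can be sketched with a toy evolutionary loop, shown below. The "synthesizers" are reduced to four fixed partials with evolvable amplitudes, and the population and mutation settings are arbitrary; this illustrates the fitness principle, not the described system.
```python
import numpy as np

# Toy sketch of the evolutionary idea (not the described system): each
# "synthesizer" is just a set of partial amplitudes; fitness compares the
# spectrum of its output to a snapshot of the incoming audio.
SR, N = 44100, 4096
t = np.arange(N) / SR
FREQS = [220.0, 440.0, 660.0, 880.0]                 # fixed partials (assumed)

def render(amps):
    return sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, FREQS))

def spectrum(x):
    return np.abs(np.fft.rfft(x * np.hanning(N)))

def fitness(amps, target_spec):
    return -np.linalg.norm(spectrum(render(amps)) - target_spec)

# "Incoming audio": a tone whose partial balance the population should match.
target_spec = spectrum(render([1.0, 0.1, 0.5, 0.05]))

rng = np.random.default_rng(0)
population = rng.uniform(0, 1, size=(16, len(FREQS)))
for _ in range(50):                                   # simple (mu + lambda) loop
    scores = np.array([fitness(ind, target_spec) for ind in population])
    parents = population[np.argsort(scores)[-4:]]     # keep the best four
    children = np.clip(parents.repeat(3, axis=0) +
                       rng.normal(0, 0.05, size=(12, len(FREQS))), 0, 1)
    population = np.vstack([parents, children])
print(population[np.argmax([fitness(i, target_spec) for i in population])])
```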
Conference Paper
Full-text available
This paper describes a jam session system that enables a human player to interplay with virtual players which can imitate the player personality models of various human players. Previous systems have parameters that allow some alteration in the way virtual players react, but these systems cannot imitate human personalities. Our system can obtain three kinds of player personality models from a MIDI recording of a session in which that player participated - a reaction model, a phrase model, and a groove model. The reaction model is the characteristic way that a player reacts to other players, and it can be statistically learned from the relationship between the MIDI data of music the player listens to and the MIDI data of music improvised by that player. The phrase model is a set of player's characteristic phrases; it can be acquired through musical segmentation of a MIDI session recording by using Voronoi diagrams on a piano-roll. The groove model is a model that generates onset time deviation; it can be acquired by using a hidden Markov model. Experimental results show that the personality models of any player participating in a guitar trio session can be derived from a MIDI recording of that session.
Article
Full-text available
Shimon is an improvisational robotic marimba player that listens to human co-players and responds musically and choreographically based on analysis of musical input. The paper discusses the robot's mechanical and motion control and presents a novel interactive improvisation system based on the notion of physical gestures. Our system uses anticipatory action to enable real-time improvised synchronization with the human player. It was implemented in a full-length human-robot Jazz duet, displaying coordinated melodic and rhythmic human-robot joint improvisation. We also describe a study evaluating the effect of visual cues and embodiment on one of our call-and-response improvisation modules. Our findings indicate that synchronization is aided by visual contact when uncertainty is high, that visual coordination is more effective for synchronization in slow sequences than in faster ones, and that occluded physical presence may be less effective than audio-only note generation.
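As a rough illustration of anticipatory action, the sketch below predicts the human's next onset from recent inter-onset intervals and issues the motion command early enough to absorb actuator latency. The latency figure and the prediction rule are assumptions for illustration, not Shimon's actual values.

```python
# Hedged sketch of anticipatory action: estimate tempo from recent onsets and
# start the stroke early enough to compensate for (assumed) actuator latency.
import numpy as np

MOTOR_LATENCY = 0.15  # seconds from command to mallet contact (illustrative)

def predict_next_onset(onset_times):
    """Predict the next beat by extrapolating the median inter-onset interval."""
    iois = np.diff(onset_times)
    return onset_times[-1] + np.median(iois)

def schedule_stroke(onset_times):
    """Return the time at which the motion command should be issued."""
    return predict_next_onset(onset_times) - MOTOR_LATENCY

human_onsets = [0.0, 0.52, 1.01, 1.49, 2.02]   # seconds, from onset detection
print(schedule_stroke(human_onsets))            # command time for the next beat
```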
Article
Full-text available
The improving proficiency of offline and online score matching algorithms has made possible many new applications in digital audio editing, audio database construction, real-time performance, and accompaniment systems. Offline matching uses the complete performance to estimate the correspondence between audio data and a symbolic score. It can be viewed as an index into the performance, allowing random access to a recording, letting listeners begin at any location in a composition, linking visual score representations with audio, or coordinating animation with prerecorded audio. Offline score matching also enables digital editing and post-processing of music, which often requires locating a particular note in an audio file so that it can be tuned, balanced, or tweaked in various ways. Accompaniment systems are used by many musicians to make practice more enjoyable and instructive, and to capture a deeper understanding of musical aesthetics.
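Offline matching of this kind is commonly formulated as dynamic time warping between score-derived and audio-derived feature sequences; the sketch below shows such an alignment on toy chroma vectors. It is a generic DTW illustration under that assumption, not the specific matcher discussed in the article.

```python
# Minimal dynamic-time-warping sketch of offline score-to-audio matching:
# align a sequence of score frames with a sequence of audio feature frames.
import numpy as np

def dtw_path(score_feats, audio_feats):
    """Return the lowest-cost alignment path between two feature sequences."""
    n, m = len(score_feats), len(audio_feats)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(score_feats[i - 1] - audio_feats[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from the end to recover which audio frame matches each score frame.
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return path[::-1]

score = np.eye(12)[[0, 4, 7, 0]]          # toy chroma for C, E, G, C
audio = np.eye(12)[[0, 0, 4, 7, 7, 0]]    # the same notes, stretched in time
print(dtw_path(score, audio))             # (score index, audio index) pairs
```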
Article
Full-text available
Evaluation plays a significant role in the process of designing digital musical instruments (DMIs). Models are representations of systems or artifacts that provide a means of reflecting upon the design or behavior of a system. Taxonomies are often used in human-computer interaction (HCI) as a means of categorizing methods of design or evaluation according to characteristics that they have in common. The development of performance practice and a dedicated instrumental repertoire, along with the evolution of a DMI, allows performers and composers to become participants in shaping the function, form, and sound of the instrument. Performers are the only people who can provide feedback on an instrument's functioning in the context for which it was ultimately intended, that of live music making. It is often necessary to probe interaction designs at the task level, particularly in order to evaluate two possible options for a given design, or to probe the mental model that a user is constructing of a given interaction task.
Conference Paper
Full-text available
We describe a multi-agent architecture for an improvisation-oriented musician-machine interaction system that learns in real time from human performers. The improvisation kernel is based on sequence modeling and statistical learning. The working system involves a hybrid architecture using two popular composition/performance environments, Max and OpenMusic, that are put to work and communicate together, each one handling the process at a different time/memory scale. The system is capable of processing real-time audio/video as well as MIDI. After discussing the general cognitive background of improvisation practices, the statistical modeling tools and the concurrent agent architecture are presented. Finally, a prospective Reinforcement Learning scheme for enhancing the system's realism is described.
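The sketch below illustrates the general idea of learning a sequence model from an incoming stream and generating stylistic continuations, using a deliberately simple first-order pitch Markov model in place of the richer sequence models the system employs. All names and the example pitch sequence are illustrative.

```python
# Hedged sketch of the statistical-learning idea: learn a simple Markov model
# of incoming MIDI pitches and generate continuations in that style.
import random
from collections import defaultdict

class PitchModel:
    def __init__(self):
        self.table = defaultdict(list)   # context pitch -> observed continuations
        self.last = None

    def listen(self, pitch):
        """Update the model with one incoming MIDI pitch."""
        if self.last is not None:
            self.table[self.last].append(pitch)
        self.last = pitch

    def continue_from(self, pitch, length=8):
        """Improvise a continuation in the style of what has been heard."""
        out = []
        for _ in range(length):
            choices = self.table.get(pitch) or [pitch]
            pitch = random.choice(choices)
            out.append(pitch)
        return out

model = PitchModel()
for p in [60, 62, 64, 62, 60, 62, 64, 65, 64, 62]:   # incoming phrase
    model.listen(p)
print(model.continue_from(60))                        # machine continuation
```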
Article
Full-text available
In this vision paper I will discuss a few questions concerning the use of generative processes in composition and automatic music creation. Why do I do it, and does it really work? I discuss the problems involved, focusing on the use of interactivity, and describe the use of interactive evolution as a way of introducing interactivity in composition. The installation MutaSynth is presented as an implementation of this idea.
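A minimal sketch of interactive evolution follows, assuming a genome of normalized synthesis parameters and a human listener who auditions a handful of mutated variants per generation and keeps one; the dummy chooser here merely stands in for that listener and is not part of MutaSynth.

```python
# Hedged sketch of interactive evolution: the listener, not a fitness function,
# decides which mutated parameter set survives. Genome layout is illustrative.
import numpy as np

rng = np.random.default_rng(2)

def mutate(genome, n_children=5, amount=0.1):
    """Produce candidate variations of a synth-parameter genome."""
    return [np.clip(genome + rng.normal(0, amount, genome.shape), 0, 1)
            for _ in range(n_children)]

def interactive_evolution(genome, choose, generations=10):
    """`choose` stands in for the human: it receives candidates, returns an index."""
    for _ in range(generations):
        candidates = mutate(genome)
        genome = candidates[choose(candidates)]
    return genome

start = rng.uniform(0, 1, 8)                                   # 8 synth parameters
picked = interactive_evolution(start, choose=lambda cands: 0)  # dummy chooser
print(picked)
```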
Conference Paper
Full-text available
This paper describes GenJam, a genetic algorithm-based model of a novice jazz musician learning to improvise. GenJam maintains hierarchically related populations of melodic ideas that are mapped to specific notes through scales suggested by the chord progression being played. As GenJam plays its solos over the accompaniment of a standard rhythm section, a human mentor gives real-time feedback, which is used to derive fitness values for the individual measures and phrases. GenJam then applies various genetic operators to the populations to breed improved generations of ideas.
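The sketch below caricatures this loop: individuals are measures of scale-degree indices, mentor feedback accumulates into fitness, and the weakest half of the population is replaced by mutated copies of the strongest. The scale, population size, and mutation rate are illustrative, not GenJam's actual settings.

```python
# Hedged sketch of a GenJam-style loop: measures as individuals, human feedback
# as fitness, survival of the best-rated half plus mutated copies.
import random

SCALE = [0, 2, 3, 5, 7, 9, 10]          # degrees of a dorian-like scale (illustrative)

def random_measure(n_notes=8):
    return [random.choice(range(len(SCALE))) for _ in range(n_notes)]

def mentor_feedback(measure):
    """Stand-in for the human mentor: +1 ('good') or -1 ('bad') per hearing."""
    return random.choice([1, -1])

def breed(population, fitness, mutation_rate=0.1):
    """Replace the weakest half with mutated copies of the strongest half."""
    ranked = sorted(population, key=lambda m: fitness[id(m)], reverse=True)
    survivors = ranked[: len(ranked) // 2]
    children = []
    for parent in survivors:
        child = [g if random.random() > mutation_rate
                 else random.choice(range(len(SCALE))) for g in parent]
        children.append(child)
    return survivors + children

population = [random_measure() for _ in range(8)]
fitness = {id(m): 0 for m in population}
for measure in population:                 # one chorus of playing plus feedback
    fitness[id(measure)] += mentor_feedback(measure)
population = breed(population, fitness)    # next generation of melodic ideas
```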
Article
This article discusses the design and evaluation of an artificial agent for collaborative musical free improvisation. The agent provides a means to investigate the underpinnings of improvisational interaction. In connection with this general goal, the system is also used here to explore the implementation of a collaborative musical agent using a specific robotics architecture called Subsumption. The architecture of the system is explained, and its evaluation in an empirical study with expert improvisors is discussed. A follow-up study using a second iteration of the system is also presented. The system design and connected studies bring together Subsumption robotics, ecological psychology, and musical improvisation, and they contribute to an empirical grounding of an ecological theory of improvisation.
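The Subsumption idea of layered behaviours, each able to override those below it, can be sketched as follows; the particular behaviours and thresholds are invented for illustration and are not those of the agent evaluated in the article.

```python
# Hedged sketch of a Subsumption-style musical agent: higher layers may suppress
# the output of lower layers. Behaviours and thresholds are illustrative.
def rest_layer(env):
    """Lowest layer: by default, do nothing."""
    return None

def mirror_layer(env, lower):
    """Match the human's density when there is something to respond to."""
    if env["human_density"] > 0.2:
        return {"density": env["human_density"]}
    return lower

def contrast_layer(env, lower):
    """Highest layer: if the texture has been static too long, interrupt it."""
    if env["seconds_static"] > 20:
        return {"density": 1.0 - env["human_density"]}   # subsumes lower layers
    return lower

def act(env):
    out = rest_layer(env)
    out = mirror_layer(env, out)
    return contrast_layer(env, out)

print(act({"human_density": 0.6, "seconds_static": 4}))    # mirrors the human
print(act({"human_density": 0.6, "seconds_static": 30}))   # contrast takes over
```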
Chapter
A Live Algorithm is an autonomous machine that interacts with musicians in an improvised setting. This chapter outlines perspectives on Live Algorithm research, offering a high-level view for the general reader, as well as more detailed and specialist analysis. The study of Live Algorithms is multi-disciplinary in nature, requiring insights from (at least) Music Technology, Artificial Intelligence, Cognitive Science, Musicology and Performance Studies. Some of the most important issues from these fields are considered. A modular decomposition and an associated set of wiring diagrams are offered as a practical and conceptual tool. Technical, behavioural, social and cultural contexts are considered, and some signposts for future Live Algorithm research are suggested.
Chapter
Music is a pattern of sounds in time. A swarm is a dynamic pattern of individuals in space. The structure of a musical composition is shaped in advance of the performance, but the organization of a swarm is emergent, without pre-planning. What use, therefore, might swarms have in music?
Conference Paper
Graphical sequencers have limits in their use as live performance tools. It is hypothesized that those limits can be overcome through live coding or text-based interfaces. Using a general purpose programming language has advantages over a domain-specific language. However, a barrier for a musician wanting to use a general purpose language for computer music has been the lack of high-level music-specific abstractions designed for real-time manipulation, such as those for time. A library for Haskell was developed to give computer musicians a high-level interface for a heterogeneous output environment.
Article
Algorithmic composition is the partial or total automation of the process of music composition by using computers. Since the 1950s, different computational techniques related to Artificial Intelligence have been used for algorithmic composition, including grammatical representations, probabilistic methods, neural networks, symbolic rule-based systems, constraint programming and evolutionary algorithms. This survey aims to be a comprehensive account of research on algorithmic composition, presenting a thorough view of the field for researchers in Artificial Intelligence.
Article
The author discusses his computer music composition, Voyager, which employs a computer-driven, interactive "virtual improvising orchestra" that analyzes an improvisor's performance in real time, generating both complex responses to the musician's playing and independent behavior arising from the program's own internal processes. The author contends that notions about the nature and function of music are embedded in the structure of software-based music systems and that interactions with these systems tend to reveal characteristics of the community of thought and culture that produced them. Thus, Voyager is considered as a kind of computer music-making embodying African-American aesthetics and musical practices.
Article
The idea of music that somehow plays itself, or emerges from a nonhuman intelligence, is a common, transculturally present theme in folklore, science, and art. Over the centuries, this notion has been expressed through the development of various technological means. This paper explores aspects of my ongoing encounter with computers in improvised music, as exemplified by my most recent interactive computer music compositions. These works involve extensive interaction between improvising musicians and computer music-creating programs at the performance (“real-time”) level. In both theory and practice, this means that both human musicians and computer programs play central organizing and structuring roles in any performance of these works. This paper seeks to explore aesthetic, philosophical, cultural and social implications of this work. In addition, the nature and practice of improvisation itself will be explored, since an understanding of this ubiquitous musical activity is essential to establishing the cultural and historical context of the work.
Article
Machine listening and machine learning are critical aspects in seeking a heightened musical agency for new interactive music systems. This paper details LL (ListeningLearning), a project which explored a number of novel techniques in this vein. Feature adaptation using histogram equalisation from computer vision provided an alternative normalization scheme. Local performance states were classified by running multiple k-means clusterers in parallel based on statistical summary feature vectors over windows of feature frames. Two simultaneous beat tracking processes detected larger scale periodicity commensurate with bars, and local IOI information, reconciling these. Further, a measure of 'free' playing as against metrically precise playing was explored. These various processes mapped through to control a number of live synthesis and processing elements, in a case study combining a human percussionist and machine improvisation system. A further project has subsequently adapted core parts of the work as a Max/MSP external, first used for Sam Hayden's violectra project, and now released in conjunction with disclosure of code for this paper.
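Histogram equalisation as feature adaptation can be sketched as a running rank transform: each incoming feature value is replaced by its rank among the values seen so far, giving an approximately uniform distribution regardless of the feature's raw range. The class below is a minimal illustration of that idea, not the LL implementation.

```python
# Hedged sketch of histogram-equalisation feature adaptation as a running rank
# transform over the values observed so far. Input data is illustrative.
import bisect

class HistogramEqualiser:
    def __init__(self):
        self.seen = []                       # sorted history of raw values

    def adapt(self, x):
        """Insert the new value and return its equalised position in [0, 1]."""
        bisect.insort(self.seen, x)
        rank = bisect.bisect_left(self.seen, x)
        return rank / max(len(self.seen) - 1, 1)

eq = HistogramEqualiser()
for raw in [0.01, 0.02, 0.8, 0.03, 0.02, 0.9]:   # e.g. loudness frames
    print(round(eq.adapt(raw), 2))
```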
Conference Paper
In this paper, we present soft computing tools and techniques aimed at realizing musical instruments that learn. Specifically, we explore applications of neural network and fuzzy logic techniques to the design of instruments that form highly personalized relationships with their users through self-adaptation. We demonstrate techniques for adapting sensor arrays and techniques for realizing highly expressive real-time sound synthesis algorithms.
Conference Paper
Shimon is an autonomous marimba-playing robot designed to create interactions with human players that lead to novel musical outcomes. The robot combines music perception, interaction, and improvisation with the capacity to produce melodic and harmonic acoustic responses through choreographic gestures. We developed an anticipatory action framework, and a gesture-based behavior system, allowing the robot to play improvised Jazz with humans in synchrony, fluently, and without delay. In addition, we built an expressive non-humanoid head for musical social communication. This paper describes our system, used in a performance and demonstration at the CHI 2010 Media Showcase.
Article
Several modular designs for the creative composition of live algorithms are presented. An important aspect of behavioral objects is that, consisting merely of information, they can easily be shared among a community of musical users, along with audio samples, MIDI data, and other types of musical information. Exchangeable modular software elements such as Max/MSP objects, code libraries, and VST plug-ins are existing examples of behavioral objects, exhibiting modest autonomy but potentially immense complexity. The live algorithms framework advocates the use of any system that is capable of responsive, complex, dynamical behaviors as a generative musical mechanism, exhibiting the rich musical potential of harnessing such a diverse set of systems. The abstract dynamical system is used to drive a generative music system, which has been hand-coded. Discrete control was achieved by allowing one output of the network to control the rate of a clock used to trigger musical events.
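A minimal sketch of the "dynamical system drives a clock" idea follows: one output of a simple nonlinear map sets the inter-onset interval of a generative clock. The logistic map and the mapping to tempo are illustrative assumptions, not those of the cited systems.

```python
# Hedged sketch: a simple dynamical system (logistic map) controls the rate of a
# clock used to trigger musical events. Map choice and tempo range are illustrative.
def logistic_step(x, r=3.9):
    """One step of the logistic map, a convenient source of complex behaviour."""
    return r * x * (1 - x)

def event_times(n_events, x0=0.4, min_ioi=0.1, max_ioi=0.8):
    """Map the system's state to inter-onset intervals, accumulating clock time."""
    x, t, times = x0, 0.0, []
    for _ in range(n_events):
        x = logistic_step(x)
        t += min_ioi + x * (max_ioi - min_ioi)     # state controls the clock rate
        times.append(round(t, 3))
    return times

print(event_times(8))    # seconds at which to trigger musical events
```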
Conference Paper
One of the goals of artificial life in the arts is to develop systems that exhibit creativity. We argue that creativity per se is a confusing goal for artificial life systems because of the complexity of the relationship between the system, its designers and users, and the creative domain. We analyse this confusion in terms of factors affecting individual human motivation in the arts, and the methods used to measure the success of artificial creative systems. We argue that an attempt to understand creative agency as a common thread in nature, human culture, human individuals and computational systems is a necessary step towards a better understanding of computational creativity. We define creative agency with respect to existing theories of creativity and consider human creative agency in terms of human evolution. We then propose how creative agency can be used to analyse the creativity of computational systems in artistic domains.
Article
This paper introduces a new domain for believable agents (BA) and presents novel methods for dealing with the unique challenges that arise therein. The domain is providing improvisational companionship to a specific musician/user, trading real-time solos with them in the jazz/blues setting. The ways in which this domain both conflicts with and benefits from traditional BA and interactive computer music system approaches are discussed. Band-out-of-the-Box (BoB), an agent built for this domain, is also presented, most novel in that unsupervised machine learning techniques are used to automatically configure BoB's aesthetic musical sense to that of its specific user/musician.
Algorithmic Interfaces for Collaborative Improvisation
Knotts, Shelly. 2016. "Algorithmic Interfaces for Collaborative Improvisation." In Proceedings of the International Conference on Live Interfaces, Sussex.
Solo: A Specific Example of Realtime Performance
Chadabe, Joel. 1980. "Solo: A Specific Example of Realtime Performance." Computer Music Report on an International Project. Ottawa: Canadian Commission for UNESCO.
The Synthesizer: A Comprehensive Guide to Understanding, Programming, Playing, and Recording the Ultimate Electronic Music Instrument
Vail, Mark. 2014. The Synthesizer: A Comprehensive Guide to Understanding, Programming, Playing, and Recording the Ultimate Electronic Music Instrument. Oxford: Oxford University Press.
Enhancing Individual Creativity with Interactive Musical Reflexive Systems
Pachet, François. "Enhancing Individual Creativity with Interactive Musical Reflexive Systems." In Musical Creativity: Multidisciplinary Research in Theory and Practice.
Heroic versus Collaborative AI for the Arts
d'Inverno, Mark, Jon McCormack, et al. 2015. "Heroic versus Collaborative AI for the Arts." In Twenty-fourth International Joint Conference on Artificial Intelligence. Buenos Aires: AAAI Press.
Designing Improvisational Interfaces
McCormack, Jon, and Mark d'Inverno. 2016. "Designing Improvisational Interfaces." In Proceedings of the 7th Computational Creativity Conference (ICCC 2016). Université Pierre et Marie Curie, Paris.