Charles Patrick Martin
Australian National University | ANU · Research School of Computer Science

PhD (Computer Science), MMus, BSc (Hons)

About

86 Publications
11,123 Reads
348 Citations
Since 2017: 61 Research Items · 298 Citations
[Chart: citations per year, 2017–2023]
Introduction
I'm a specialist in computer music, human-computer interaction, and musical AI. My research is focused on developing new ways to make music on touchscreen devices such as smartphones and tablets. I'm particularly interested in enhancing the feeling of connection between members of a musical ensemble using computer instruments; to do this, I use data science and machine learning to analyse performances and interact with the musicians.
Additional affiliations
April 2019 - present
Australian National University
Position
  • Lecturer
August 2016 - present
University of Oslo
Position
  • Postdoctoral Fellow
August 2014 - June 2016
Australian National University
Position
  • Tutor
Education
January 2013 - June 2016
Australian National University
Field of study
  • Computer Science

Publications (86)
Article
Full-text available
Robots are traditionally bound by a fixed morphology during their operational lifetime, which is limited to adapting only their control strategies. Here we present the first quadrupedal robot that can morphologically adapt to different environmental conditions in outdoor, unstructured environments. Our solution is rooted in embodied AI and comprise...
Chapter
This paper presents a new interactive application that can generate music according to a user’s preferences inspired by the process of biological evolution. The application composes sets of songs that the user can choose from as a basis for the algorithm to evolve new music. By selecting preferred songs over successive generations, the application...
Chapter
When using generative deep neural networks for creative applications it is common to explore multiple sampling approaches. This sampling stage is a crucial step, as choosing suitable sampling parameters can make or break the realism and perceived creative merit of the output. The process of selecting the correct sampling parameters is often task-sp...
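To make the abstract's point concrete, here is a minimal sketch of one common sampling parameter, temperature; the softmax-over-logits setup is a generic illustration and is not taken from the paper.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample an index from a categorical distribution defined by logits.

    Lower temperatures sharpen the distribution (safer, more repetitive
    output); higher temperatures flatten it (riskier, more surprising
    output). This trade-off is why sampling parameters can make or
    break the perceived quality of generative output.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()                        # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# The same logits sampled at different temperatures.
logits = [2.0, 1.0, 0.2]
for t in (0.5, 1.0, 2.0):
    picks = [sample_with_temperature(logits, t) for _ in range(1000)]
    print(t, np.bincount(picks, minlength=3) / 1000)
```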
Article
Robots operating in the real world will experience a range of different environments and tasks. It is essential for the robot to have the ability to adapt to its surroundings to work efficiently in changing conditions. Evolutionary robotics aims to solve this by optimizing both the control and body (morphology) of a robot, allowing adaptation to in...
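The control-and-morphology optimisation loop that evolutionary robotics relies on can be sketched generically; the vector encoding, population size, mutation scale, and toy fitness function below are placeholder assumptions, not details from the article.

```python
import numpy as np

def evolve(fitness, dim=8, pop_size=16, generations=50, sigma=0.1, seed=0):
    """Minimal (mu + lambda)-style evolutionary loop.

    Each individual is a real-valued vector that could encode both
    control parameters and morphology parameters of a robot; `fitness`
    stands in for a (simulated or real-world) evaluation of that design.
    """
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]  # keep best half
        children = parents + rng.normal(scale=sigma, size=parents.shape)
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind) for ind in pop])]

# Toy fitness standing in for a locomotion evaluation.
best = evolve(lambda x: -np.sum((x - 0.5) ** 2))
print(best.round(2))
```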
Preprint
Full-text available
This paper describes the process of developing a standstill performance work using the Myo gesture control armband and the Bela embedded computing platform. The combination of Myo and Bela allows a portable and extensible version of the standstill performance concept while introducing muscle tension as an additional control parameter. We describe t...
Preprint
Full-text available
This work examines how head-mounted AR can be used to build an interactive sonic landscape to engage with a public sculpture. We describe a sonic artwork, "Listening To Listening", that has been designed to accompany a real-world sculpture with two prototype interaction schemes. Our artwork is created for the HoloLens platform so that users can hav...
Preprint
Full-text available
The popularity of applying machine learning techniques in musical domains has created an inherent availability of freely accessible pre-trained neural network (NN) models ready for use in creative applications. This work outlines the implementation of one such application in the form of an assistance tool designed for live improvisational performan...
Preprint
Full-text available
We present and evaluate a novel interface for tracking ensemble performances on touch-screens. The system uses a Random Forest classifier to extract touch-screen gestures and transition matrix statistics. It analyses the resulting gesture-state sequences across an ensemble of performers. A series of specially designed iPad apps respond to this real...
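A rough illustration of the pipeline this abstract describes, pairing a Random Forest gesture classifier with transition-matrix statistics; the three features and four gesture classes are invented for the sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-window touch features (e.g. mean velocity, touch
# count, mean pressure) and gesture labels, stand-ins for whatever
# feature set the system actually extracts.
X_train = np.random.rand(200, 3)
y_train = np.random.randint(0, 4, size=200)   # 4 gesture classes

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# Classify a stream of feature windows into a gesture-state sequence.
states = clf.predict(np.random.rand(50, 3))

# Transition-matrix statistics over the gesture-state sequence.
n = 4
T = np.zeros((n, n))
for a, b in zip(states[:-1], states[1:]):
    T[a, b] += 1
T = T / np.maximum(T.sum(axis=1, keepdims=True), 1)  # row-normalised
print(T.round(2))
```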
Preprint
Full-text available
In 2009 the cross artform group, Last Man to Die, presented a series of performances using new interfaces and networked performance to integrate the three artforms of its members (actor, Hanna Cormick, visual artist, Benjamin Forster and percussionist, Charles Martin). This paper explains our artistic motivations and design for a computer vision su...
Preprint
Full-text available
This paper describes the development of an Apple iPhone based mobile computer system for vibraphone and its use in a series of the author's performance projects in 2011 and 2012. This artistic research was motivated by a desire to develop an alternative to laptop computers for the author's existing percussion and computer performance practice. The...
Preprint
Full-text available
This paper describes Strike on Stage, an interface and corresponding audio-visual performance work developed and performed in 2010 by percussionists and media artists Chi-Hsia Lai and Charles Martin. The concept of Strike on Stage is to integrate computer visuals and sound into an improvised percussion performance. A large projection surface is pos...
Chapter
The choices of neural network model and data representation, a mapping between musical notation and input signals for a neural network, have emerged as a major challenge in creating convincing models for melody generation. Music generation can inspire creativity in artists and the general public, but choosing a proper data representation is complic...
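One simple example of such a data representation is a one-hot encoding over a small pitch vocabulary; this scheme is generic and is not the representation the chapter proposes.

```python
import numpy as np

# Each time step is a one-hot vector over a small pitch vocabulary
# plus a "rest" symbol, mapping notation to neural-network inputs.
PITCHES = list(range(60, 73)) + ["rest"]          # C4..C5 plus rest
INDEX = {p: i for i, p in enumerate(PITCHES)}

def encode(melody):
    """Map a list of MIDI pitches / 'rest' to a (steps, vocab) one-hot array."""
    out = np.zeros((len(melody), len(PITCHES)))
    for t, p in enumerate(melody):
        out[t, INDEX[p]] = 1.0
    return out

print(encode([60, 62, "rest", 64]).shape)   # (4, 14)
```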
Preprint
Full-text available
Sound and movement are closely coupled, particularly in dance. Certain audio features have been found to affect the way we move to music. Is this relationship between sound and movement something which can be modelled using machine learning? This work presents initial experiments wherein high-level audio features calculated from a set of music piec...
Article
The widespread adoption of mobile devices, such as smartphones and tablets, has made touchscreens a common interface for musical performance. Although new mobile music instruments have been investigated from design and user experience perspectives, there has been little examination of the performers' musical output. In this work, we introduce a con...
Conference Paper
Full-text available
In acoustic instruments, sound production relies on the interaction between physical objects. Digital musical instruments, on the other hand, are based on arbitrarily designed action-sound mappings. This paper describes the ongoing exploration of an empirically-based approach for simulating guitar playing technique when designing the mappings of 'a...
Preprint
Full-text available
Robots operating in the real world will experience a range of different environments and tasks. It is essential for the robot to have the ability to adapt to its surroundings to work efficiently in changing conditions. Evolutionary robotics aims to solve this by optimizing both the control and body (morphology) of a robot, allowing adaptation to in...
Article
Full-text available
Machine-learning models of music often exist outside the worlds of musical performance practice and abstracted from the physical gestures of musicians. In this work, we consider how a recurrent neural network (RNN) model of simple music gestures may be integrated into a physical instrument so that predictions are sonically and physically entwined w...
Chapter
Full-text available
Creating robust robot platforms that function in the real world is a difficult task. Adding the requirement that the platform should be capable of learning, from nothing, ways to generate its own movement makes the task even harder. Evolutionary Robotics is a promising field that combines the creativity of evolutionary optimization with the real-wo...
Conference Paper
In this work, we present a method for generating sound-tracings using a mixture density recurrent neural network (MDRNN). A sound-tracing is a rendering of perceptual qualities of short sound objects through body motion. The model is trained on a dataset of single point sound-tracings with multimodal input data and learns to generate novel tracings...
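The generation step of an MDRNN can be sketched as sampling from the Gaussian mixture its output layer parameterises; the two-component, 2-D, diagonal-covariance setup below is an assumption for illustration, not the paper's configuration.

```python
import numpy as np

def sample_mdn(pi, mu, sigma, rng=None):
    """Draw one 2-D sample from a Gaussian mixture parameterised by an
    MDN output layer: mixture weights `pi` (K,), means `mu` (K, 2), and
    per-dimension std devs `sigma` (K, 2). Diagonal covariance assumed.
    """
    rng = rng or np.random.default_rng()
    k = rng.choice(len(pi), p=pi)          # pick a mixture component
    return rng.normal(mu[k], sigma[k])     # sample from that Gaussian

# Toy parameters standing in for one time step of MDRNN output.
pi = np.array([0.7, 0.3])
mu = np.array([[0.2, 0.4], [0.8, 0.1]])
sigma = np.array([[0.05, 0.05], [0.1, 0.1]])
print(sample_mdn(pi, mu, sigma))
```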
Conference Paper
We introduce a method for identifying short-duration reusable motor behaviors, which we call early-life options, that allow robots to perform well even in the very early stages of their lives. This is important when agents need to operate in environments where the use of poor-performing policies (such as the random policies with which they are typi...
Conference Paper
Full-text available
This paper describes a new intelligent interactive instrument, based on an embedded computing platform, where deep neural networks are applied to interactive music generation. Even though using neural networks for music composition is not uncommon, a lot of these models tend to not support any form of user interaction. We introduce a self-contained...
Conference Paper
Full-text available
This paper is about creating digital musical instruments where a predictive neural network model is integrated into the interactive system. Rather than predicting symbolic music (e.g., MIDI notes), we suggest that predicting future control data from the user and precise temporal information can lead to new and interesting interactive possibilities....
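The control-data idea can be illustrated by encoding a performance as (dt, value) event pairs, so a sequence model predicts timing as well as value; the encoding below is illustrative, not the paper's exact format.

```python
import numpy as np

# Control data as (dt, value) pairs: each event is a continuous control
# value plus the time elapsed since the previous event, so a model
# trained on these pairs predicts *when* as well as *what*.
times = np.array([0.00, 0.12, 0.31, 0.35])
values = np.array([0.50, 0.55, 0.80, 0.79])

dt = np.diff(times, prepend=times[0])
events = np.stack([dt, values], axis=1)     # shape (n_events, 2)
print(events)
```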
Conference Paper
Full-text available
Generating convincing music via deep neural networks is a challenging problem that shows promise for many applications including interactive musical creation. One part of this challenge is the problem of generating convincing accompaniment parts to a given melody, as could be used in an automatic accompaniment system. Despite much progress in this...
Preprint
Full-text available
Robots are used in more and more complex environments, and are expected to be able to adapt to changes and unknown situations. The easiest and quickest way to adapt is to change the control system of the robot, but for increasingly complex environments one should also change the body of the robot -- its morphology -- to better fit the task at hand....
Preprint
Full-text available
This paper is about creating digital musical instruments where a predictive neural network model is integrated into the interactive system. Rather than predicting symbolic music (e.g., MIDI notes), we suggest that predicting future control data from the user and precise temporal information can lead to new and interesting interactive possibilities....
Chapter
Full-text available
The complexity of a legged robot’s environment or task can inform how specialised its gait must be to ensure success. Evolving specialised robotic gaits demands many evaluations—acceptable for computer simulations, but not for physical robots. For some tasks, a more general gait, with lower optimization costs, could be satisfactory. In this paper,...
Chapter
Full-text available
Musicians often use tools such as loop-pedals and multitrack recorders to assist in improvisation and songwriting, but these tools generally don’t proactively contribute aspects of the musical performance. In this work, we introduce an interactive audio looper that predicts a loop’s harmony, and constructs an accompaniment automatically using conca...
Preprint
Full-text available
The complexity of a legged robot's environment or task can inform how specialised its gait must be to ensure success. Evolving specialised robotic gaits demands many evaluations - acceptable for computer simulations, but not for physical robots. For some tasks, a more general gait, with lower optimization costs, could be satisfactory. In this paper...
Preprint
The widespread adoption of mobile devices, such as smartphones and tablets, has made touchscreens a common interface for musical performance. New mobile musical instruments have been designed that embrace collaborative creation and that explore the affordances of mobile devices, as well as their constraints. While these have been investigated from...
Preprint
Full-text available
Gaining a better understanding of how and what machine learning systems learn is important to increase confidence in their decisions and catalyze further research. In this paper, we analyze the predictions made by a specific type of recurrent neural network, mixture density RNNs (MD-RNNs). These networks learn to model predictions as a combination...
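At each step an MD-RNN outputs the parameters of a density p(x) = Σ_k π_k N(x | μ_k, σ_k²); a one-dimensional evaluation of that mixture is sketched below (the 1-D restriction is a simplification for illustration).

```python
import numpy as np

def mdn_density(x, pi, mu, sigma):
    """Evaluate p(x) = sum_k pi_k * N(x | mu_k, sigma_k^2), the
    1-D mixture-of-Gaussians density an MD-RNN outputs at each step.
    """
    gauss = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return np.sum(pi * gauss)

# A bimodal prediction: two components with different means.
pi, mu, sigma = np.array([0.6, 0.4]), np.array([-1.0, 2.0]), np.array([0.5, 0.8])
print(mdn_density(0.0, pi, mu, sigma))
```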
Chapter
The formal evaluation of new interfaces for musical expression (NIMEs) in their use by ensembles of musicians is a challenging problem in human-computer interaction (HCI). NIMEs are designed to support creative expressions that are often improvised and unexpected. In the collaborative setting of a musical ensemble, interactions are complex and it c...
Conference Paper
Full-text available
For robots to handle the numerous factors that can affect them in the real world, they must adapt to changes and unexpected events. Evolutionary robotics tries to solve some of these issues by automatically optimizing a robot for a specific environment. Most of the research in this field, however, uses simplified representations of the robotic syst...
Conference Paper
Full-text available
This paper describes the process of developing a standstill performance work using the Myo gesture control armband and the Bela embedded computing platform. The combination of Myo and Bela allows a portable and extensible version of the standstill performance concept while introducing muscle tension as an additional control parameter. We describe t...
Conference Paper
Full-text available
This article describes the design and construction of a collection of digitally-controlled augmented acoustic guitars, and the use of these guitars in the installation Sverm-Resonans. The installation was built around the idea of exploring 'in-verse' sonic microinteraction, that is, controlling sounds through the micromotion observed when trying no...
Preprint
Full-text available
Musical performance requires prediction to operate instruments, to perform in groups and to improvise. We argue, with reference to a number of digital music instruments (DMIs), including two of our own, that predictive machine learning models can help interactive systems to understand their temporal context and ensemble behaviour. We also discuss h...
Preprint
Full-text available
For robots to handle the numerous factors that can affect them in the real world, they must adapt to changes and unexpected events. Evolutionary robotics tries to solve some of these issues by automatically optimizing a robot for a specific environment. Most of the research in this field, however, uses simplified representations of the robotic syst...
Preprint
Full-text available
Evolutionary robotics has aimed to optimize robot control and morphology to produce better and more robust robots. Most previous research only addresses optimization of control, and does this only in simulation. We have developed a four-legged mammal-inspired robot that features a self-reconfiguring morphology. In this paper, we discuss the possibi...
Preprint
Full-text available
We introduce a new self-contained and self-aware interface for musical expression where a recurrent neural network (RNN) is integrated into a physical instrument design. The system includes levers for physical input and output, a speaker system, and an integrated single-board computer. The RNN serves as an internal model of the user's physical inpu...
Article
Full-text available
Robots need to be able to adapt to complex and dynamic environments for widespread adoption, and adapting the body might yield more flexible and robust robots. Previous work on dynamic robot morphology has focused on simulation, combining simple modules, or switching between locomotion modes. This paper presents an alternative approach: automatic s...
Preprint
Full-text available
Robots need to be able to adapt to complex and dynamic environments for widespread adoption, and adapting the body might yield more flexible and robust robots. Previous work on dynamic robot morphology has focused on simulation, combining simple modules, or switching between locomotion modes. This paper presents an alternative approach: automatic s...
Chapter
Full-text available
RoboJam is a machine-learning system for generating music that assists users of a touchscreen music app by performing responses to their short improvisations. This system uses a recurrent artificial neural network to generate sequences of touchscreen interactions and absolute timings, rather than high-level musical notes. To accomplish this, RoboJa...
Preprint
Full-text available
RoboJam is a machine-learning system for generating music that assists users of a touchscreen music app by performing responses to their short improvisations. This system uses a recurrent artificial neural network to generate sequences of touchscreen interactions and absolute timings, rather than high-level musical notes. To accomplish this, RoboJam...
Presentation
Full-text available
The use of artificial neural networks and deep learning systems to generate visual artistic expressions has become common in recent times. However, musical neural networks have not been applied to the same extent. While image-generation systems often use convolutional networks, musical generation systems rely on less well-developed recurrent neur...
Presentation
Full-text available
A short overview of MicroJam, our social music-making app, showing the motivations, interface design, as well as musical AI functionality.
Article
Full-text available
This article describes how percussive interaction informed the design, development, and deployment of a series of touchscreen digital musical instruments for ensembles. Percussion has previously been defined by techniques for exploring and interacting with instruments, rather than by the instruments themselves. Percussionists routinely co-opt unusu...
Conference Paper
Full-text available
For many, the pursuit and enjoyment of musical performance goes hand-in-hand with collaborative creativity, whether in a choir, jazz combo, orchestra, or rock band. However, few musical interfaces use the affordances of computers to create or enhance ensemble musical experiences. One possibility for such a system would be to use an artificial neura...
Conference Paper
Full-text available
Touch-screen musical performance has become commonplace since the widespread adoption of mobile devices such as smartphones and tablets. However, mobile digital musical instruments are rarely designed to emphasise collaborative musical creation, particularly when it occurs between performers who are separated in space and time. In this article, we...
Conference Paper
Full-text available
MicroJam is a mobile app for sharing tiny touch-screen performances. Mobile applications that streamline creativity and social interaction have enabled a very broad audience to develop their own creative practices. While these apps have been very successful in visual arts (particularly photography), the idea of social music-making has not had such...
Research
Full-text available
A demo of MicroJam; an experimental app for sharing tiny touch-screen musical performances.
Conference Paper
Full-text available
PhaseRings for iPad Ensemble and Ensemble Director Agent is an improvised musical work exploring the use of dynamic touch-screen instruments, tracked by a gesture-classifying agent, to enhance the creativity of an ensemble of performers. The PhaseRings app has been designed specifically for ensembles to create expressive music with simple percussiv...
Conference Paper
Full-text available
In this paper we ask whether machine learning can apply to musical ensembles as well as it does to the individual musical interfaces that are frequently demonstrated at NIME and CHI. While using machine learning to map individual gestures and sensor data to musical output is becoming a major theme of computer music research, these techniques are on...
Conference Paper
Full-text available
We present the results of two controlled studies of free-improvised ensemble music-making on touch-screens. In our system, updates to an interface of harmonically-selected pitches are broadcast to every touch-screen in response to either a performer pressing a GUI button, or to interventions from an intelligent agent. In our first study, analysis o...
Thesis
Full-text available
This thesis concerns the making and performing of music with new digital musical instruments (DMIs) designed for ensemble performance. While computer music has advanced to the point where a huge variety of digital instruments are common in educational, recreational, and professional music-making, these instruments rarely seek to enhance the ensembl...
Conference Paper
Full-text available
The difficulties of evaluating DMIs (digital musical instruments), particularly those used by ensembles of musicians, are well-documented. We propose a methodology of rehearsal-as-research to study free-improvisation by ensembles of DMI performers. Sessions are structured to mirror established practices for training in free-improvisation and to all...
Chapter
Full-text available
Musical performances with touch-screen devices can be recorded by capturing a log of touch interactions. This object can serve as an archive or as a basis for other representations of the musical work. This chapter presents a protocol for recording ensemble touch-screen performances and details the processes for generating visualisations, gestural...
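A minimal sketch of what such a touch-interaction log might look like as a CSV recording protocol; the column names are assumptions for illustration, not the chapter's actual schema.

```python
import csv
import time

# Hypothetical touch-log schema: one row per touch event, with enough
# information to replay, visualise, or analyse the performance later.
FIELDS = ["time", "device", "x", "y", "moving"]

def record(events, path="performance_log.csv"):
    """Write a list of touch-event dicts to a CSV performance log."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(events)

record([
    {"time": time.time(), "device": "ipad-1", "x": 0.31, "y": 0.72, "moving": 0},
    {"time": time.time(), "device": "ipad-1", "x": 0.33, "y": 0.70, "moving": 1},
])
```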
Thesis
Full-text available
This thesis seeks to articulate a performer's perspective of the interactions between percussion and computer in performance. A selection of compositions for percussion and computer will be used to explain how understanding the role of the computer can inform the player's technical and musical choices and is vital to convey a cohesive performance....