The use of artificial neural networks and deep learning systems to generate visual artistic expressions has become common in recent times. However, musical neural networks have not been applied to the same extent. While image-generation systems often use convolutional networks, musical generation systems rely on recurrent neural networks (RNNs), a less well-developed approach in which networks are trained to model sequences of data through time. RNNs usually employ special artificial neurons known as Long Short-Term Memory (LSTM) cells that can store information over several time-steps and learn when and how to update their memory during training. Because RNNs model sequences and include a kind of memory, they are well suited to the temporal structure of music, where patterns may be regularly repeated. In musical applications, these networks are usually applied as sequence generators; that is, given a sequence of notes, the network generates a possible next note. In this talk, I will discuss current designs for RNNs and the latest applications in Google's Magenta project and in our own Neural Touch-Screen Ensemble, developed at the Department of Informatics, University of Oslo. Both of these projects are notable for focusing on interactive applications of musical networks. Magenta implements a call-response improvisation system that allows performers to probe the musical affordances of an RNN. The Neural Touch-Screen Ensemble simulates an ensemble response to a single live performer by encoding data captured over several years of collaborative iPad improvisations. These interactive musical AI systems point to future possibilities for integrating musical networks into musical performance and production, where they could be seen as "intelligent instruments" that assist and enhance musicians and casual music makers alike.
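To make the sequence-generation idea above concrete, here is a minimal sketch of an LSTM next-note predictor. It is illustrative only, not the Magenta or Neural Touch-Screen Ensemble implementation; it assumes notes are encoded as MIDI pitch numbers (0-127), uses TensorFlow/Keras, and the `generate` helper and all parameter values are hypothetical choices made for the example.

```python
# Minimal sketch of an LSTM next-note predictor (assumed setup, not the
# actual Magenta or Neural Touch-Screen Ensemble code).
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 128   # notes encoded as MIDI pitch numbers (assumption)
SEQ_LEN = 32       # length of the note window fed to the network

# Model: embed each note, pass the window through an LSTM layer, and
# predict a probability distribution over the next note.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")

# Training uses overlapping windows of a note sequence:
# X[i] = notes[i : i + SEQ_LEN], y[i] = notes[i + SEQ_LEN].
# Random data stands in here purely to show the expected shapes.
X = np.random.randint(0, VOCAB_SIZE, size=(512, SEQ_LEN))
y = np.random.randint(0, VOCAB_SIZE, size=(512,))
model.fit(X, y, epochs=1, batch_size=64)

def generate(seed, n_notes=16, temperature=1.0):
    """Sample n_notes continuing a seed sequence of at least SEQ_LEN notes."""
    notes = list(seed)
    for _ in range(n_notes):
        window = np.array(notes[-SEQ_LEN:])[None, :]
        probs = model.predict(window, verbose=0)[0]
        # Temperature sampling: higher values yield more surprising notes.
        logits = np.log(probs + 1e-9) / temperature
        probs = np.exp(logits) / np.sum(np.exp(logits))
        notes.append(int(np.random.choice(VOCAB_SIZE, p=probs)))
    return notes[len(seed):]
```

In an interactive, call-response setting of the kind described above, the performer's most recent notes would serve as the seed, and the sampled continuation would be played back as the network's "response".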