MARIE: Monochord-Aerophone Robotic Instrument
Ensemble
Troy Rogers
Expressive Machines
Musical Instruments
Duluth, MN, USA
troy@expressivemachines.com
Steven Kemper
Mason Gross School of the Arts
Rutgers, The State University of New
Jersey
New Brunswick, NJ, USA
steven.kemper@rutgers.edu
Scott Barton
Department of Humanities and Arts
Worcester Polytechnic Institute
Worcester, MA, USA
sdbarton@wpi.edu
ABSTRACT
The Modular Electro-Acoustic Robotic Instrument System
(MEARIS) represents a new type of hybrid electroacoustic-
electromechanical instrument model. Monochord-Aerophone
Robotic Instrument Ensemble (MARIE), the first realization of a
MEARIS, is a set of interconnected monochord and cylindrical
aerophone robotic musical instruments created by Expressive
Machines Musical Instruments (EMMI). MARIE comprises one or
more matched pairs of Automatic Monochord Instruments (AMI)
and Cylindrical Aerophone Robotic Instruments (CARI). Each AMI
and CARI is a self-contained, independently operable robotic
instrument with an acoustic element, a control system that enables
automated manipulation of this element, and an audio system that
includes input and output transducers coupled to the acoustic
element. Each AMI-CARI pair can also operate as an interconnected
hybrid instrument, allowing for effects that have heretofore been the
domain of physical modeling technologies, such as a "plucked air
column" or "blown string." Since its creation, MARIE has toured
widely, performed with dozens of human instrumentalists, and has
been utilized by nine composers in the realization of more than
twenty new musical works.
Author Keywords
musical robots, robotic musical instruments, plucked string
instruments, aerophones, hybrid instruments
ACM Classification
H.5.5 [Information Interfaces and Presentation] Sound and
Music Computing, H.5.1 [Information Interfaces and
Presentation] Multimedia Information Systems.
1. INTRODUCTION
The Modular Electro-Acoustic Robotic Instrument System
(MEARIS) represents a new type of hybrid electroacoustic-
electromechanical instrument model in which individual
robotic musical instruments can function as tunable acoustic
filters in an interconnected multi-module signal chain.
Monochord-Aerophone Robotic Instrument Ensemble (MARIE),
the first realization of a MEARIS, comprises one or more
matched pairs of Automatic Monochord Instruments (AMI) and
Cylindrical Aerophone Robotic Instruments (CARI). In designing
MARIE, we employed the MEARIS paradigm to create an
ensemble of versatile robotic musical instruments with maximal
registral, timbral, and expressive ranges that are portable,
reliable, and user-friendly for touring musicians.
MARIE was commissioned in 2010 by bassoonist Dana
Jessen and saxophonist Michael Straus of the Electro Acoustic
Reed (EAR) Duo, and designed and built by Expressive
Machines Musical Instruments (EMMI)¹ for a set of tours
through the US and Europe. The first prototype of the
instrument was created in early 2011 [5] and has since been field-tested and refined through many performances with the
EAR Duo (Figure 1) and numerous other composers and
performers.
Figure 1. EAR Duo performs with MARIE at the Logos
Foundation in Ghent, Belgium.
2. RELATED WORK
The contemporary field of musical robotics spans a wide range of
research and creative activities [6]. Within this diverse field, we can
distinguish between 1) emulative machines that help researchers
better understand and/or replicate human performers, and 2) inventive
machines developed by composer-builders seeking new vehicles for
musical expression. EMMI’s work is largely focused on this second
category.
Numerous existing robotic instruments influenced MARIE’s
design, including those created by Trimpin, Roland Olbeter, Eric
Singer, and most significantly, Godfried-Willem Raes. In addition
to musical robotics, MARIE is influenced by the parallel field
of active control of acoustic musical instruments, as described
by Berdahl, Niemeyer, and Smith at CCRMA [1].
¹ www.expressivemachines.com
AMI was conceived as an updated version of EMMI's first robotic
string instrument, PAM (Poly-tangent Automatic multi-Monochord),
which itself was influenced by LEMUR's GuitarBot [14], Trimpin's
Jackbox [6], and Raes' <Hurdy> [11]. AMI’s updated features also
draw upon ideas from Roland Olbeter's Fast Blue Air [10] and Raes'
<Aeio> [7]. James McVay's MechBass [8], as well as Raes' <SynchroChord> and <Zi> [11], are notable robotic string instruments created since our initial prototype of early 2011.
CARI builds upon a number of robotic aerophones that have
previously been developed for both research and creative purposes.
Instruments such as the WF-4RV flutist and WAS-1 saxophonist
robots of Waseda University's Takanishi Lab [15, 16]; Roger
Dannenberg's robotic bagpipe player McBlare [3]; and the robotic
clarinet created by NICTA and UNSW's Music Acoustics
Laboratory [13] have been inspired by human performance models
and thus exemplify the emulative category described above. Though
these instruments helped shape our approach, CARI is primarily
influenced by the numerous inventive monophonic aerophones
developed by Raes including <AutoSax>, <Korn>, and <Ob>
[7]. Since the creation of the first CARI prototype in 2011,
Raes has developed several other related aerophone
instruments, including his automated clarinet <Klar> [11]. His
electroacoustic organ <Hybr> [12] shares acoustic features with
CARI as both are cylindrical air columns set into resonance by
an audio-rate driver.
3. CONCEPTUAL ORIGINS OF MARIE:
THE MEARIS PARADIGM
3.1 Inspiration
The MEARIS concept was inspired by Raes' automated
monophonic aerophone instruments, which operate as
automatically tunable acoustic filters. While the vast majority
of prior robotic instruments focus upon note actuation (i.e., act
as impulse generators), the electromechanical control systems
of Raes’ aerophones alter the acoustic resonances, which shape
the source sounds over time. With the MEARIS paradigm, we
seek to expand upon this work in order to further explore the
expressive possibilities enabled by connecting robotic impulse
and filtering “modules” in a variety of ways, as one would do
with a modular synthesizer.
3.2 Elements of a MEARIS
Figure 2 displays the basic elements of a MEARIS module. At
the center of the module is an acoustic element: a resonant body
(vibrating string, air column, membrane, etc.) that is activated,
modified, and sensed by the control and audio systems. The
control system communicates via the MIDI protocol and drives
automated mechanisms that excite or alter the acoustic element.
The audio system uses input transducers to force the acoustic
element into resonance, and output transducers to capture the
vibrations of the resonant body. The audio system can also
include automated mixing circuitry and onboard analog or
digital effects. Signals can be routed from one MEARIS
module to another to create rich hybrid instrumental timbres.
To make the sound-producing and sound-altering gestures as visually salient as possible, the instruments' actions are amplified through cameras and projection, as well as through onboard lighting displays that illuminate form and action.
Figure 2. Functional diagram of a MEARIS, with acoustic element (center), control system (green), audio system (blue), and visual elements (purple).
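This modular structure can be summarized in code. The following is a minimal illustrative sketch, in Python, of MEARIS modules as patchable objects; the names and data structures are our own simplification for exposition, not EMMI's actual control software.

```python
# Illustrative sketch of the MEARIS module model (our own simplification,
# not EMMI's software): each module couples an acoustic element to control
# and audio systems, and modules can be patched like a modular synthesizer.
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str                  # e.g., "AMI" or "CARI"
    acoustic_element: str      # vibrating string, air column, membrane, ...
    inputs: list = field(default_factory=list)    # signals driving the element
    outputs: list = field(default_factory=list)   # transduced vibrations

def patch(source: Module, dest: Module, signal: str) -> None:
    """Route a signal from one module's output transducer to another's input."""
    source.outputs.append(signal)
    dest.inputs.append(signal)

ami = Module("AMI", "vibrating string")
cari = Module("CARI", "cylindrical air column")
patch(ami, cari, "picked-string signal")   # a "plucked air column"
patch(cari, ami, "air-column signal")      # a "blown string"
```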
4. MARIE DESIGN
MARIE represents the first realization of a MEARIS. Each of
MARIE’s modules (each AMI and each CARI) contains a
resonant acoustic element that can function as a filter and a
control system with automated electromechanical actuators that
excite, tune, and dampen this acoustic element. Each module
also features an audio system with input and output transducers
and automated input and output matrices.
4.1 Instruments
4.1.1 AMI
AMI is a robotic monochord instrument with automated
mechanisms to articulate the string and alter its vibrating
length; an electromagnetic bowing mechanism (input
transducer); a pickup (output transducer); automated analog
mixing and effects circuitry; and programmable LED display.
AMI’s acoustic element is an electric guitar string that is
manually tunable with a standard guitar tuning machine.
Produced sound is transduced by a flexible contact microphone
and sent to either an on-board or an external amplifier/speaker.
The frame is divided vertically into two equal halves, each of
which can house a single instance of AMI. Figure 3 diagrams
AMI’s acoustic, control, audio, and visual elements.
Figure 3. Functional diagram of AMI.
4.1.2 CARI
CARI is a cylindrical aerophone modeled on the
clarinet. Rather than retrofitting an existing acoustic instrument,
we re-imagined the instrument itself. Because an automatic
instrument does not need to accommodate the hands of human
performers, the encumbrances of traditional keying
mechanisms can be avoided. As a result, CARI's 19 toneholes
are arranged linearly. Each tonehole is independently operable
via a solenoid-driven keying valve. Sound is produced by an
audio signal routed to the compression driver, which is directly
coupled to the cylindrical air column. Figure 4 diagrams
CARI’s acoustic, control, audio, and visual elements.
Figure 4. Functional diagram of CARI.
4.2 Control Systems
AMI's acoustic control system includes mechanisms to pick and damp the string. AMI's 17 solenoid-driven tangents are positioned at fixed, equal-tempered half-step intervals, giving a range of pitches from E2 to A3. The tangents can be used to articulate notes without picking (hammer-ons). Varying the duty cycles of tangent trills and tremolos above 20 Hz produces timbral shifts. The tangents operate in conjunction with a moving bridge to allow both discrete and continuous control of string length.
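As an illustration of this fixed layout, the short sketch below derives the tangent pitches, assuming standard MIDI note numbering and A4 = 440 Hz; the mapping is ours, not AMI's firmware.

```python
# Hypothetical sketch of AMI's tangent layout: 17 solenoid-driven tangents
# at equal-tempered half steps above the open E2 string, spanning E2-A3.
A4 = 440.0

def midi_to_hz(note: int) -> float:
    """Equal-tempered frequency for a MIDI note number (A4 = 69)."""
    return A4 * 2.0 ** ((note - 69) / 12.0)

OPEN_STRING = 40  # MIDI note E2 (assumed open-string pitch)
TANGENTS = {i: OPEN_STRING + i for i in range(1, 18)}  # tangents 1-17 -> F2-A3

for tangent, note in TANGENTS.items():
    print(f"tangent {tangent:2d}: MIDI {note} = {midi_to_hz(note):7.2f} Hz")
```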
CARI is outfitted with 19 solenoids that change the length of
the air column by opening and closing toneholes. These
actuators can achieve trills and tremolos up to 55 Hz. In
addition, thousands of “fingering” configurations are possible,
many of which would be inaccessible to a human performer on
a standard clarinet.
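The scale of this fingering space follows from the independence of the valves: 19 two-state toneholes yield 2^19 = 524,288 raw open/closed combinations, a superset that easily contains the thousands of musically useful fingerings. A hypothetical sketch, representing a fingering as a bitmask:

```python
# Sketch (ours, not CARI's firmware) of fingerings as 19-bit masks:
# one bit per solenoid-driven tonehole valve, 1 = open, 0 = closed.
NUM_TONEHOLES = 19

def fingering(open_holes: set) -> int:
    """Pack a set of open tonehole indices (0-18) into a bitmask."""
    mask = 0
    for hole in open_holes:
        mask |= 1 << hole
    return mask

print(2 ** NUM_TONEHOLES, "possible tonehole configurations")
print(f"example mask: {fingering({14, 15, 16, 17, 18}):019b}")
```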
4.3 Audio Systems
4.3.1 Input and Output Transducers
AMI and CARI’s acoustic elements can be utilized as
automatically tunable acoustic filters when driven by onboard
audio-rate actuators. AMI’s string can be excited via a custom-
built electromagnetic “bowing” mechanism (E-driver) [11, 2,
4]. CARI’s cylindrical air column can be excited by a
compression driver to which it is coupled. In typical usage
scenarios, an input audio signal will be tuned to match resonant
frequencies of the acoustic element, which are manipulated by
AMI and CARI’s pitch control mechanisms. By tuning an input
signal to harmonics of a fundamental frequency (CARI's odd
harmonics; both even and odd string harmonics on AMI), the
instruments’ ranges can be extended well above the
fundamental frequencies, giving each instrument a range of
more than five octaves.
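A concrete sketch of this tuning scheme (our reading of the text above, not EMMI's console code): given the current resonant fundamental, the available drive frequencies are its harmonics, restricted to odd harmonics for CARI's cylindrical air column and unrestricted for AMI's string.

```python
# Drive frequencies available over a given resonance. CARI's air column
# supports odd harmonics only; AMI's string supports both even and odd.
def drive_frequencies(fundamental_hz: float, odd_only: bool, n: int = 9) -> list:
    harmonics = range(1, n + 1, 2) if odd_only else range(1, n + 1)
    return [round(k * fundamental_hz, 1) for k in harmonics]

print("CARI over 146.8 Hz (D3):", drive_frequencies(146.8, odd_only=True))
print("AMI over 82.4 Hz (E2): ", drive_frequencies(82.4, odd_only=False))
```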
4.3.2 Inter-instrument Connections
Interconnections between AMI and CARI (Figure 5) allow
audio to be routed between the two instruments to create
instrument hybridizations that have previously been accessible
only in the virtual realm of physical modeling. For example, the
plucked string sound from AMI can be used to drive CARI’s air
column, creating a “plucked air column.” Conversely, sound
from CARI’s air column can be sent to AMI’s E-Driver to
create a “blown string.”
Figure 5. The interconnected audio systems of AMI and
CARI that together constitute MARIE.
4.4 Software Control of MARIE
MARIE can be controlled by any software or hardware capable
of generating audio signals and MIDI messages. However, in
order to access the more sophisticated features of the
instruments, we have developed a control panel based in Cycling '74's Max environment. The Max MARIE Console
centralizes control over the instruments' acoustic, audio, and
visual systems and allows for automation of note generation
and shaping, signal mixing and routing, and lighting functions.
The panel manages the timing of various messages and
simplifies complex control operations, such as generating the
combination of MIDI messages and audio signal modulations
necessary to produce a note on CARI.
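To make that coordination concrete, the sketch below generates the two streams a single CARI note requires: MIDI messages that set the tonehole solenoids and an audio-rate signal for the compression driver. The channel and note-number assignments are invented for illustration; the actual console mapping is not documented here.

```python
# Hedged sketch of producing one CARI note: fingering via MIDI plus a sine
# drive signal tuned to the resulting resonance. Mappings are hypothetical.
import math

def tonehole_midi(open_holes, channel=0):
    """Raw MIDI bytes for all 19 tonehole solenoids, assuming
    (hypothetically) that tonehole i is addressed as MIDI note i."""
    messages = []
    for hole in range(19):
        status = (0x90 if hole in open_holes else 0x80) | channel  # on/off
        messages.append(bytes([status, hole, 100]))
    return messages

def drive_signal(freq_hz, dur_s=0.5, sr=44100):
    """Sine drive for the compression driver at the target resonance."""
    return [math.sin(2 * math.pi * freq_hz * t / sr)
            for t in range(int(dur_s * sr))]

midi = tonehole_midi(open_holes={15, 16, 17, 18})
audio = drive_signal(freq_hz=220.0)
print(len(midi), "MIDI messages,", len(audio), "audio samples")
```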
5. MUSICAL EXPLORATIONS OF MARIE
EMMI is dedicated not only to the design and construction of
novel robotic instruments, but also to composing music that
takes full advantage of these instruments' capabilities. The authors, as well as several other composers, have created new pieces for MARIE that explore the specific features of this instrument.²
5.1 EMMI’s Compositions for MARIE
In addition to hyper-virtuosic speed and rhythmic complexity,
as displayed in From Here to There (Barton), Push for Position
(Barton), and Microbursts (Kemper), MARIE is capable of
dynamic and timbral control, intra- and inter-instrument
feedback, and the decoupling of sound source and resonator.
These new possibilities have been explored in In Illo Tempore
(Kemper), MARIE Explorations (EMMI) and Phantom
Variations (Rogers). Rogers’ Improvisation X series unifies all
of the performance concepts described here as an interactive
framework for real-time free improvisation with human
performers.
5.2 SMC 2012 Curated Concert
One indicator of an instrument's successful design is whether other musicians can be creative with it. EMMI achieved this
milestone in 2012, hosting a curated concert of new pieces for
MARIE and Transportable Automatic Percussion Instrument
(TAPI) for the 2012 Sound and Music Computing conference
in Copenhagen [17]. Composers from the U.S., Canada, and the
U.K. were invited to write new pieces for MARIE. The
resulting works utilized a variety of software systems and included pieces for acoustic instruments and MARIE (Nebula Squeeze, Lane), interactive systems (Untitled, Trail; Détente, Miller), and algorithmic systems (Coming Together: EMMI, Eigenfeldt; Blues for Nancarrow, Collins).
6. FUTURE DIRECTIONS
Given MARIE's immense parameter space, along with its status
as an actively touring instrument, some of the original design
concepts have yet to be fully implemented and explored,
including the moving bridge, digitally controlled on-board
effects circuitry, and on-board video. We continue to optimize
and improve upon AMI's pickup and picking mechanisms, and
may incorporate additional features such as automated string
tuning in future iterations [9].
7. ACKNOWLEDGMENTS
MARIE was generously funded by the backers of a Kickstarter
campaign. Godfried-Willem Raes provided endless ideas and
guidance during and following Rogers’ residency at the Logos
Foundation in Ghent, Belgium, which was made possible by the
Logos Foundation and a Fulbright Research Fellowship. EMMI and the EAR Duo's Northeast US tour was funded in part by Meet the Composer grants, and residencies at Brandeis, STEIM, and De Lindenberg were essential in refining the instruments and music.
² www.expressivemachines.com/MARIE-Compositions
8. REFERENCES
[1] E. J. Berdahl, G. Niemeyer, and J. O. Smith III, Feedback Control of Acoustic Musical Instruments. CCRMA Report no. 120, Stanford University, CA, 2008.
[2] P. Bloland. The Electromagnetically-Prepared Piano and its
Compositional Implications, In Proceedings of the 2007
International Computer Music Conference. Copenhagen,
Denmark, 2007, 125-128.
[3] R. Dannenberg et al., McBlare: a robotic bagpipe player.
In Proceedings of the 2005 conference on New Interfaces
for Musical Expression. National University of Singapore,
2005.
[4] M. A. Fabio, The Chandelier: An Exploration in Robotic
Musical Instrument Design. M.S. Thesis, M.I.T., Cambridge,
MA, 2007.
[5] H. Hart, Robotic Ensemble MARIE Will Jam With Humans (If
the Money’s Right). http://www.wired.com/2010/12/marie-
robot-music-ensemble/, Accessed January 19, 2015.
[6] A. Kapur, A history of robotic musical instruments. In
Proceedings of the 2005 International Computer Music
Conference. Barcelona, Spain, 2005.
[7] L. Maes, G.-W. Raes, and T. Rogers, The Man and Machine robot orchestra at Logos. Computer Music Journal 35(4), MIT Press, Cambridge, MA, 2011, 28-48.
[8] J. McVay, D. A. Carnegie, J. W. Murphy, and A. Kapur, MechBass: A systems overview of a new four-stringed robotic bass guitar. In Proceedings of the 2012 Electronics New Zealand Conference, Dunedin, New Zealand, 2012.
[9] J. Murphy, P. Mathews, A. Kapur, and D. A. Carnegie, Robot:
Tune Yourself! Automatic Tuning in Musical Robotics. In
Proceedings of the 2014 International Conference on New
Interfaces for Musical Expression, London, United Kingdom,
2014, 565-568.
[10] R. Olbeter, Fast Blue Air. Roland Olbeter - Set Designer and
Rob Art. http://www.olbeter.com/fast_blue.html, Accessed
November 19, 2014.
[11] G.-W. Raes, Expression Control in Automated Musical
Instruments. http://logosfoundation.org/g_texts/expression-
control.html, Accessed April 14, 2015.
[12] G.-W. Raes, <Hybr>,
http://logosfoundation.org/instrum_gwr/hybr.html, Accessed
April 14, 2015.
[13] Robotic Clarinet Wins Orchestra Competition. University of
New South Wales Newsroom, http://goo.gl/GFkcPk, Accessed
January 19, 2015.
[14] E. Singer, K. Larke, and D. Bianciardi, LEMUR GuitarBot: MIDI Robotic String Instrument. In Proceedings of the 2003 Conference on New Interfaces for Musical Expression (NIME-03), Montreal, Canada, 2003, 188-191.
[15] J. Solis, K. Chida, K. Suefuji, and A. Takanishi, The development of the anthropomorphic flutist robot at Waseda University. International Journal of Humanoid Robotics (IJHR) 3, 2006, 127-151.
[16] J. Solis et al., Mechanism Design and Air Pressure Control System Improvements of the Waseda Saxophonist Robot. In 2010 IEEE International Conference on Robotics and Automation (ICRA), 2010, 4247.
[17] Concert 4: Music Robots. http://smc2012.smcnetwork.org/program-2/program/, Accessed January 25, 2015.
Video of MARIE: https://youtu.be/KOIUvFIPfts