Proceedings of the 18th Linux Audio Conference (LAC-20), SCRIME, Université de Bordeaux, France, November 25–27, 2020
Henrik von Coler (Audio Communication Group, TU Berlin)
Nils Tonnätt (Audio Communication Group, TU Berlin)
Vincent Kather (Audio Communication Group, TU Berlin)
Chris Chafe
Abstract

The SPRAWL system is an audio and information routing network, enabling enhanced ways of interaction for musical ensembles. A Linux audio server with a SuperCollider mixing and spatialization system is connected to several access points via Ethernet, using JackTrip for audio transmission. The unified access points are based on a Raspberry Pi 4 with a 7-inch touch screen and a USB audio interface, running SuperCollider, Puredata and additional tools. Using custom applications on these units, musicians are able to control the routing and spatialization of the system, allowing personal monitoring, the distribution of click tracks and related auditory information, as well as visual information and cues. The system is designed as a meta-instrument for developing individual configurations and software components for specific compositions and use cases. It has been successfully used in on-site concerts and ported to wide area networks for remote rehearsals and performances.
1. Introduction

Performing electroacoustic music in larger ensembles on multi-channel loudspeaker systems offers a plethora of additional possibilities and creative means for musicians and composers alike. However, this practice also gives rise to a very specific set of requirements and problems. Some of them are of a simply practical nature, including technical solutions and infrastructural challenges like the distribution of audio and control data. Other aspects relate to aesthetic concepts and individual implementations for specific compositions or improvisational setups. The technical system proposed in this paper is intended as a platform for solving these problems by offering a predefined basic structure and the possibility of implementing custom software components.
1.1. Related Concepts
Many challenges and problems in live electroacoustic music are not unique to specific ensembles but universal and recurring. Hence, several projects in the history of live electronic music performance have designed systems for solving these issues, relying on different paradigms. The examples in this section focus on laptop ensembles based on digital technology. Although this paper is concerned with joint music performances in general and not with geographically distributed practices, some related approaches originate in distributed network performance and telepresence.
Substantial pioneering work in the field of laptop ensembles and interdependent computer compositions was done by The HUB between 1986 and 1997 [1, 2]. The first version of the HUB was designed as a blackboard system, implemented on Synertek SYM 6502 single-board computers. A shared memory allowed the exchange of information between six musicians, or rather between their individual autonomous programs. This enhanced way of interaction was the central point of the compositions explored by the HUB.

A second version of the HUB, launched in 1990, made use of the MIDI protocol for exchanging information. Using individual MIDI channels for each system, the protocol allowed direct control among the musicians, leading to more possibilities.
The Frequencyliator [3], developed at SARC in Belfast, combines the use of OSC and network-based audio connections to increase the level of interaction in laptop orchestras. A server broadcasts information to connected laptops, based on a score that is either predefined or generated live. The system does not only manage synchronization and negotiation of timing events but also manipulates signals for spectral segregation.
*LORK is used as an acronym for Laptop Orchestra by a group of closely related approaches to live electroacoustic performance. The Princeton Laptop Orchestra (PLOrk), formed in 2005 [4, 5], is the original *LORK. The orchestra is based on unified meta-instruments, each consisting of a laptop with its own hemispherical speaker and a rack with the necessary periphery. Inspired by the principles of acoustic ensembles, each of the 15 instruments is thus an individual sound source, resulting in a sonic display similar to a classical orchestra. MAX/MSP, ChucK and SuperCollider are used as programming environments. Synchronization and communication among musicians are realized through network protocols. Additional interfaces for expressive control are included for specific compositions. Other ensembles adapted the Princeton setup in order to establish a widespread standard and thus enable a common repertoire. Among them are the Stanford Laptop Orchestra (SLOrk) [6] and the Linux Laptop Orchestra (L2Ork) [7].
The Huddersfield Experimental Laptop Orchestra (HELO) was founded in 2008 as an ensemble of graduate and undergraduate students [8]. Instead of unified meta-instruments like the *LORKs, the HELO aims at bringing together a variety of systems which may rely on any hardware, software or operating system. This diversity is intended to foster work on a creative level without spending a significant amount of time on technical goals. Similar to the *LORKs, each musician's sound is produced with an individual loudspeaker located close to the instrument. One setup of the HELO makes use of the laptops' built-in speakers for high portability and simplicity. For increased volume and higher sound quality, a setup with one guitar amp for each musician is used. The guiding principle of this approach is to minimize the technical overhead for the sake of dealing with musical content instead.
EmbodiNet [9, 10] is a reactive environment for Network Music Performance, intended to be used in loose rehearsals or jam sessions between geographically remote participants. Using SuperCollider for mixing and JackTrip for audio transmission, the ready-made system offers two control functions which allow the musicians to influence individual headphone mixes, namely dynamic volume mixing and enhanced stereo panning. Cameras and motion capture are used to create a shared visual space, with an additional GUI for controlling the mixing attributes.
1.2. The EOC
The Electronic Orchestra Charlottenburg (EOC) is an ensemble for live electroacoustic performances on multi-channel loudspeaker systems, making use of sound field synthesis technologies. Founded in 2017 as part of an Audio Communication Group seminar at Technical University Berlin, the EOC now consists of 10 active members, including musicians from the independent scene as well as alumni and students of the Audio Communication Group.

A central aspect of the project was to create a setting for the interaction of diverse musical instruments, in particular those invented by researchers and students at the intersection of experimental music and music technology. Too often, the results of research projects or classes related to electronic musical instrument design are not applied appropriately before people move on. Due to the modular boom of the past years, the ensemble now features seven modular synthesizers, complemented by live electronics and tape, as well as a Pushpull [11], an individual digital musical instrument.
In its first years, the EOC operated in a hierarchic structure, with a sound director in charge of mixing, coordination and spatialization. An audio server with an attached AD/DA converter rack gathers all instrument signals and distributes the rendered spatial audio scene to the loudspeaker system. Alternatively, some compositions and adaptations rely on algorithms and automation for the control of the above-mentioned components. The future direction of the EOC is to empower the individual musicians not only to control specific attributes of their own instrument's sound, but also to interact with other participants and influence parameters of the complete system.
1.3. Goals
First and foremost, the SPRAWL system is designed as a flexible meta-instrument for experimental music. Although it comes with a ready-made mode for simply connecting musicians from different locations, a key feature is its adaptability. SuperCollider and Puredata allow the quick development of individual system configurations for specific compositions. Besides this open concept, the system aims at solving several general problems which arose in the work with the EOC and other experimental electronic ensembles.
1.3.1. Pre-Listening and Monitoring
Working with electronic musical instruments and modern performance practices often involves the use of headphones for different purposes. Experience has shown that these purposes can be conflicting and that a solution for combining them is desirable. One of the goals of the proposed system is to offer a flexible solution for combining recurring applications of headphones. Ideally, musicians are enabled to switch between these applications and even mix them. By no means exhaustive, the following use cases need to be covered:

1. pre-listening one's own instrument
2. monitoring of the full ensemble
3. click tracks and other auditory information
4. pre-listening other instruments
When operating synthesizers, especially modular ones or comparably complex systems, musicians must be able to pre-listen to sounds they are designing during a performance. If a specific sound needs to be programmed or patched, the synthesizer needs to be muted, connections are changed and undesirable artifacts may occur. This will often happen between pieces in a performance, but also within specific compositions.

Performing with an ensemble on a surrounding loudspeaker setup raises several issues concerning the placement of the musicians. In order to preserve the best listening positions for the audience, it is usually necessary to perform at the boundaries of the system, if not outside. This calls for a monitoring solution that enables the musicians to hear the actual mix in the listening area.

The distribution of click tracks and other auditory cues for synchronization is a method often used in experimental ensemble performances. Either broadcast from a central unit or generated on the musicians' devices, these need to be synchronized.

Finally, it can be necessary to exclusively pre-listen to other instruments in the ensemble. Reasons for this may be tuning or other interrelated adjustments, as well as composition-specific dependencies.
1.3.2. Visual Information
The presentation of visual information plays a central role in performances of the EOC. Possible use cases are:

1. synchronized (graphical) scores
2. digital clocks or metronomes
3. representation of the auditory scene
4. visualization of levels (metering)
5. visualization of system parameters (tuning)

It is common practice to show synchronized scores to performers or to present a digital clock in order to align live performance with playback or automated processes. This is often realized using tablets with a screen diagonal of about 10".

The EOC usually performs on multi-channel loudspeaker systems, making use of virtual point sound sources for spatialization. In this case it can be helpful for the ensemble to see the individual source positions. Thus, each participant is aware of level manipulations due to source distances.

Clipping of input signals is a recurring problem, not only in performances of electroacoustic music. Even if level control is carried out by a sound engineer or sound director, the nature of the
electronic musical instruments can easily lead to an input signal too high in level. Often this is caused by hardly audible low-frequency components. Several members of the EOC have thus expressed the wish to monitor their individual input level visually.
2. System Overview

The SPRAWL system can be considered a musical meta-instrument, designed to offer a flexible framework for increasing the influence of single musicians and the level of interaction within an ensemble. Its primary purpose is to interconnect arbitrary electronic and electroacoustic musical instruments by providing unified access points. Although conceived in the context of electroacoustic music and electronic musical instruments, the system offers possibilities for any instrument.

Figure 1: SPRAWL server-access-point architecture
The overall configuration of the SPRAWL system consists of a central server connected to a set of access points, as shown in Figure 1. Access points and server exchange audio via JackTrip and communicate via OSC messages with SuperCollider. In order to get the latest version of JackTrip, it is recommended to pull it from the official git repository [12]. The software part of the SPRAWL system can be found in the related software repository.1
2.1. Access Points
An access point consists of a Raspberry Pi 4 with a 7-inch touch screen and a Behringer U-Phoria UMC22 USB audio interface. The latter can be exchanged for any class-compliant model with slight modifications to the client software. The housing of the Raspberry Pis is made of multilayered laser-cut wood, which allows us to easily reach the ports, is well ventilated and offers the possibility to attach a thread for a microphone stand. The templates for replicating the housing are available for download in the SPRAWL git repository. All components for one access point can be purchased for less than $150. Figure 2 shows an access point in a modular rehearsal setup at the TU Studio. The Raspberry Pis run the standard full Raspbian Buster operating system, equipped with the rt-preempt kernel for Raspberry Pi 4 by Florian Paul Schmidt [13]. Since threaded IRQs are enabled by default, no kernel parameter needs to be added.
Figure 2: Access point in use with a modular system.
Every access point gets two static IP addresses: one for the Ethernet connection to the server and a second for the WiFi access point. Although the system is usually controlled via the touch display and can be configured with keyboard and mouse, setting up all Raspberry Pis as WiFi access points grants quick and convenient access for maintenance through an additional laptop via SSH. Furthermore, the individual wireless networks can be used for integrating additional control devices and interfaces for musical performance. The static IP addresses can be easily assigned using dhcpcd. Hostapd offers the WiFi access point capability. Dnsmasq assigns IP addresses to users who log into the WiFi access point.
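The three services mentioned above each need only a few lines of configuration. The fragments below are a minimal sketch with example values; the interface names, addresses, SSID and passphrase are assumptions and depend on the local setup:

```
# /etc/dhcpcd.conf: static addresses
interface eth0
static ip_address=192.168.0.101/24    # wired link to the server

interface wlan0
static ip_address=10.0.0.1/24         # local WiFi access point

# /etc/hostapd/hostapd.conf: WiFi access point
interface=wlan0
ssid=sprawl-ap-01
hw_mode=g
channel=7
wpa=2
wpa_passphrase=changeme

# /etc/dnsmasq.conf: DHCP for WiFi clients
interface=wlan0
dhcp-range=10.0.0.10,10.0.0.50,12h
```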
2.2. Server
The server used in the first seminars at TU Berlin was an AMD Ryzen workstation, equipped with an RME MADI FX PCIe soundcard and two Digigram DANTE LX cards. This server was running Fedora 31 with the free MADI FX ALSA driver by Adrian Knoth [14], which we fixed for the 5.3 kernel version. Different Jack-capable tools for spatial rendering were used, including PanoramixApp and the SoundScape Renderer. JMess [15] was used for saving and loading the Jack connections between SuperCollider and JackTrip.

The server used in the current online seminar at TU Berlin is an Intel Xeon E-2134 (Coffee Lake) with 4 × 3.5 GHz (max. turbo: 4.5 GHz) and 32 GB DDR4 ECC RAM. A hardware audio interface is not necessary for this server. It runs Ubuntu 20.04 with a low-latency kernel. Custom mixing and spatial rendering software is programmed in SuperCollider, based on the SC-HOA library [16]. Jack-matchmaker provides pattern matching capabilities for managing Jack connections [17].
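As an illustration, jack-matchmaker is started with pairs of output/input port patterns and re-establishes matching connections whenever ports disappear and reappear. The port names below are examples only; the actual names depend on the local JackTrip and SuperCollider configuration:

```shell
# Keep the JackTrip receive ports of two access points connected
# to the SuperCollider inputs (example port names):
jack-matchmaker "JackTrip:receive_1" "SuperCollider:in_1" \
                "JackTrip:receive_2" "SuperCollider:in_2"
```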
2.3. Access Points and Server
JackTrip is used in hub server mode. The mixing of all channels sent from the access points is done exclusively on the server, with a SuperCollider server that receives OSC messages. For every access point there is one input module on the server side, shown in Figure 3. The input signal gets mixed to the send busses of the access points and to the send busses of the spatial rendering unit. The monitor module sends the binaurally rendered scene to the access points, so that every musician is able to hear the music as the audience does. All gains can be set with OSC commands from every access point.

Figure 3: Server input module with send gains, output modules and JackTrip connections.

Positions of virtual sound sources in the binaural mix can be
controlled in spherical coordinates via OSC commands. The startup of both systems, server and access points, is organized by shell scripts which are part of the software repository.
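Since all server parameters are controlled through plain OSC messages, any OSC-capable tool can act as a control client. The following Python sketch assembles a raw OSC packet using only the standard library; the address pattern and argument values are hypothetical, as the actual OSC namespace is defined by the SuperCollider code on the server:

```python
import struct

def osc_pad(data: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, *values: float) -> bytes:
    # Build a binary OSC message with float32 arguments
    packet = osc_pad(address.encode("ascii"))
    packet += osc_pad(("," + "f" * len(values)).encode("ascii"))
    for value in values:
        packet += struct.pack(">f", value)  # big-endian float32
    return packet

# Hypothetical example: set azimuth and elevation (radians) of source 1.
packet = osc_message("/source/1/position", 1.57, 0.0)

# The packet would then be sent via UDP, e.g. to SuperCollider's
# default port 57120:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(
#       packet, ("192.168.0.1", 57120))
```

In practice, libraries such as python-osc or SuperCollider's own NetAddr class provide the same functionality without manual packing.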
3. Applications

3.1. In Class

Figure 4: GUI of the generic access point software, allowing control over monitoring and spatialization in 2D.
As reported by Wang et al. [18], laptop orchestras and related approaches offer great possibilities for teaching. The use of unified systems allows a focus on joint development, eliminating the problem of having to configure numerous individual hardware and software setups. The SPRAWL system is not only intended as a teaching environment, but was also conceived and created within seminars at Technical University Berlin. By involving the students at an early stage, their knowledge of the relevant aspects of Linux audio systems and the necessary concepts is fostered.

A generic access point software, programmed in SuperCollider, is delivered with the repository. With the GUI shown in Figure 4, it offers control over the pre-listening and monitoring, as well as the position of the virtual sound source in two dimensions. The first sessions in the seminar focused on the use of this generic access point software. Students were thus able to influence the position of the virtual sound source related to their instrument's sound and to control the monitoring. Puredata, SuperCollider and Python are installed as default tools for developing access point applications, depending on the use case. In further sessions, students program individual software for manipulating server parameters and for creating automated movements of virtual sound sources on the access points, starting with Puredata.
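The automated movements mentioned above are simple time-dependent trajectories. A minimal sketch in Python of the kind of pattern the students implemented (their versions were written in Puredata; the period, distance and position format are assumptions) could generate a circular orbit around the listener:

```python
import math

def circular_orbit(t: float, period: float = 8.0, distance: float = 2.0):
    # Return (azimuth, elevation, distance) in radians/meters for a
    # virtual source orbiting the listener once every `period` seconds.
    azimuth = (2.0 * math.pi * t / period) % (2.0 * math.pi)
    return azimuth, 0.0, distance

# Sampled at a control rate of 2 Hz, each tuple would be sent to the
# server as an OSC position message for one virtual sound source:
trajectory = [circular_orbit(n * 0.5) for n in range(16)]
```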
3.2. In Concert
Figure 5: Access point GUI for the concert at Silent Green.
The SPRAWL system was used in a first public concert at Silent Green2, Berlin, on February 5, 2020. The network approach allowed placing ten access points for the musicians throughout the 17-meter-tall dome on three different levels, intermixed with the audience. The loudspeaker setup was arranged on two levels, with a ring of eight Meyer UPL-1s on ground level and a quadraphonic subsystem with QSC K8 speakers on the third-level balcony. A custom access point software with the GUI shown in Figure 5 was programmed for this
concert. It features a VU meter and pre-listening gain for the audio input level of the individual unit. The complete mix of all access points can be monitored with the slider Binaural. The slider banks on the right side control send bus gains of the mixing and spatialization server. Pink sliders send the signal of the related access point to ten individual automated source movement patterns. These patterns were programmed in Puredata by students in the classes and sent from the individual access points during runtime. The yellow sliders control send busses to individual speakers and speaker groups, hence allowing the positioning of one's own signal through panning.

Within the concert, the EOC performed Remote Control by Chris Chafe (1991). This minimal, text-based composition plays with transitions between a chaotic sound mixture and a stationary mix of pure tones. Such transitions can be supported through the spatial send bus structure of the access point software introduced above. An additional trigger button (HORN) is programmed to play back samples of ships' horns. Each access point was equipped with an individual recording of an actual horn from a large vessel. This feature was used for the composition Harbor Symphony (Chris Chafe, 2016–20), performed by TU students and merging into a free improvisation.
3.3. Wide Area Applications
During the 2020 pandemic, the SPRAWL system could easily be moved to the internet for long-distance connections between Berlin, Vienna and Cologne. Experiments with an ensemble for contemporary music included click tracks, distributed from the server, for easing synchronization despite the significant latency. Finally, the access points are not limited to use with the dedicated server, but can connect to any remote JackTrip server. This has been tested within the ongoing Quarantine Sessions, hosted by CCRMA, Stanford. In these weekly concerts, musicians and visual artists from all over the world connect to a server running a software setup similar to the SPRAWL solution. Several members of the EOC could participate in these concerts without needing to install software on additional computers. With only a few modifications, the access points connect to the selected JackTrip server. This concept is further explored in ongoing classes at TU Berlin during times of restricted access to the university facilities.
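For reference, the JackTrip connection itself requires only one command on each side; the host name below is a placeholder:

```shell
# On the central machine: start JackTrip in hub server mode
jacktrip -S

# On each access point: connect as a hub client with two channels
jacktrip -C sprawl-server.example.org -n 2
```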
4. Conclusion

The SPRAWL system offers a flexible solution for network-based musical performances. The system is suitable for both conducted and decentralized music. Basic demands like pre-listening and monitoring as well as visual feedback are satisfied. In both local and wide area network applications, the system performed reliably. After the successful launch, the setup can now be used to explore existing and forthcoming compositions and performance setups.
References

[1] John Bischoff, "Software as sculpture: Creating music from the ground up," Leonardo Music Journal, vol. 1, no. 1, pp. 37–40, 1991.

[2] Scot Gresham-Lancaster, "The aesthetics and history of the hub: The effects of changing technology on network computer music," Leonardo Music Journal, vol. 8, 1998.

[3] Alain Renaud and Pedro Rebelo, "Network performance: Strategies and applications," in Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), 2006.

[4] Daniel Trueman, Perry R. Cook, Scott Smallwood, and Ge Wang, "PLOrk: The Princeton Laptop Orchestra, Year 1," in Proceedings of the International Computer Music Conference (ICMC), 2006.

[5] Scott Smallwood, Dan Trueman, Perry R. Cook, and Ge Wang, "Composing for Laptop Orchestra," Computer Music Journal, vol. 32, no. 1, pp. 9–25, 2008.

[6] Ge Wang, Nicholas J. Bryan, Jieun Oh, and Robert Hamilton, "Stanford Laptop Orchestra (SLOrk)," in Proceedings of the International Computer Music Conference (ICMC), 2009.

[7] Ivica Bukvic, Thomas Martin, Eric Standley, and Michael Matthews, "Introducing L2Ork: Linux Laptop Orchestra," in NIME, 2010, pp. 170–173.

[8] Scott Hewitt, Pierre Alexandre Tremblay, Samuel Freeman, and Graham Booth, "HELO: The laptop ensemble as an incubator for individual laptop performance practices," ICMA.

[9] Dalia El-Shimy and Jeremy R. Cooperstock, "Reactive environment for network music performance," in Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), 2013, pp. 158–163.

[10] Dalia El-Shimy and Jeremy R. Cooperstock, "EmbodiNet: Enriching distributed musical collaboration through embodied interactions," in IFIP Conference on Human-Computer Interaction, Springer, 2015, pp. 1–19.

[11] Amelie Hinrichsen, S. Hardjowirogo, D. Hildebrand Marques Lopes, and Till Bovermann, "Pushpull. Reflections on building a musical instrument prototype," in Proceedings of the International Conference on Live Interfaces, 2014.

[12] Juan-Pablo Caceres and Chris Chafe, "JackTrip: multi-machine audio network performance over the internet" (accessed November 21, 2020).

[13] Florian Paul Schmidt, "rt-preempt kernel for raspberry pi 4 [raspbian buster] including usb-lowlatency," viewtopic.php?t=250927, 2019 (accessed October 21).

[14] Adrian Knoth, "Linux ALSA driver for RME MADI FX," 2019 (accessed November 21, 2020).

[15] Juan-Pablo Caceres, "JMess - A utility to save your audio connections (mess)," jmess-jack (accessed November 21, 2020).

[16] Florian Grond, "HOA wrapper classes for SuperCollider," HOA, 2020 (accessed November 21, 2020).

[17] Christopher Arndt, "Auto-connect new JACK ports matching the patterns given on the command line," https: 2016 (accessed November 21, 2020).

[18] Ge Wang, Dan Trueman, Scott Smallwood, and Perry R. Cook, "The laptop orchestra as classroom," Computer Music Journal, vol. 32, no. 1, pp. 26–37, 2008.
The author discusses two electronic music compositions that typify his approach to composing with small computer-music systems. The author's compositional technique is characterized by bottom-up software design and close attention to emerging details of the computer medium. These methods result in a sculptural process of composition that is unique in the field of computer music. Some of the details of this process are outlined and distinctive features of the music are discussed.