An Ecosystemic Approach to Augmenting Sonic
Meditation Practices
Rory Hoy and Doug Van Nort
DisPerSion Lab
York University
rorydavidhoy@gmail.com, vannort@yorku.ca
Abstract. This paper describes the design and creation of an interactive sound
environment project, titled dispersion.eLabOrate. The system is defined by a
ceiling array of microphones, audio input analysis, and synthesis that is directly
driven by this analysis. Created to augment a Deep Listening performative
environment, this project explores the role that interactive installations can
fulfill within a structured listening context. Echoing, modulating, and extending
what it hears, the system generates an environment in which its output is a      
product of ambient sound, feedback, and participant input. Relating to and  
building upon the ecosystemic model, we discuss the benefit of designing for
participant incorporation within such a responsive listening environment.
Keywords: Interactive Audio, Sonic Ecosystem, Deep Listening
1 Introduction
In contrast to fixed-media concert works, the generation of a sonic environment for
an installation context invites participants to traverse a space, wherein their action has
amplified potential to modulate generated sound through manipulation of devices,
interfaces, and the ambience of the room itself. The systematic formation and    
implementation of these interactive works is dependent upon the role of participants
within the space (or lack thereof). Techniques range from a linear system flow wherein
participant action directly drives generated output to a sonic ecosystem approach,
wherein feedback mechanisms establish autonomous and self-sustaining sonic activity.
This paper will explore the formation of dispersion.eLabOrate, an interactive
sound environment which began as an augmentation of the “Tuning
Meditation”, a text piece found within the practice of Deep Listening [1]. This
meditative and performative context informed the aesthetic and design considerations
employed within the system’s development, due to its need to function as a
collaborative member of the piece, rather than distracting from the focused listening
context in which it was deployed. The system was developed with the design
metaphor of an “active listening room” in mind, reacting both to participants and its
own generated audio. The relationships established between human, machine, and
ambient environment led to exploration of the ecosystemic approach presented by
Agostino Di Scipio [2]. Contending with boundaries put in place by the classical
ecosystemic approach, dispersion.eLabOrate presents a model in which the human  
and the machine can act together in the generation of an ecosystem such that the
blending of agency is achieved through the system’s self/ambient observing behavior
and the participant’s ability to be present in this observation. We will discuss the need
to bridge between methodologies for interactive sound environments, presenting an
approach that extends the capabilities of a sonic ecosystem dynamically through
participant input, resulting in spatially distributed parameter changes. These localized
changes can be thought of as generating diverse locations within the sonic ecosystem,  
with input conditions resulting in distinct perceptual effects for both participants and
the ambient sensing of the system.
2 Related Works
2.1 Ecosystemic Framework
In Di Scipio [2], the challenge of generating a sonic ecosystem is engaged
by questioning the nature of interactivity, exploring the limits of “where and
when” it occurs. Di Scipio notes that the majority of interactive systems employ a
linear communication flow in which a participant’s action is the singular cause of
output. Di Scipio then presents an alternate approach in which the principal aim is the    
creation of a dynamical system which can act upon and interfere with the external
conditions that define its own internal state. This approach decentralizes the primacy
of human agency in the space (apart from ambient noise) and grants the
ability of self-observation to the system. Di Scipio describes this ability as “a shift
from creating wanted sounds via interactive means, towards creating wanted       
interactions having audible traces”; and it is through these traces that compelling
sonification can occur. This ideation of an audio ecosystem culminates in Di Scipio’s
Audible Eco-Systemic Interface (AESI) project. This machine/ambience
interrelationship is paramount and understood to function as “interaction”, rather than
the typical human/machine relationship. AESI emits an initial sound that is captured
by two or more microphones in the room. Relevant features are extracted from this
capture, which are then used to drive audio signal processing parameters.
Measurements on differences between microphone signals are used as additional
control values, and the internal state of the AESI is set through functions defined by
this ecosystemic concept. The four functions achieving this are compensation (active
counterbalance of amplitude with the ambient environment), following (ramped value
chasing given a delay time), redundancy (supporting a predominant sound feature),    
and concurrency (supporting a contrasting or competing predominant feature).
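As a loose illustration of the first two of these functions, consider the following minimal Python sketch. It is our own toy rendering of compensation and following as control functions over analysis features, not the AESI implementation; the target level, step fraction, and class structure are all assumptions.

```python
# Toy sketch of two of Di Scipio's ecosystemic control functions.
# Illustrative only, NOT the AESI implementation; the target level,
# ramp length, and class layout are assumptions.

class Compensation:
    """Actively counterbalance output gain against ambient amplitude."""
    def __init__(self, target_level=0.5):
        self.target_level = target_level  # assumed desired room level

    def gain(self, ambient_rms):
        # Louder ambience -> quieter system output, and vice versa.
        return max(0.0, self.target_level - ambient_rms)


class Following:
    """Ramped value chasing: drift toward the input over a delay time."""
    def __init__(self, delay_steps=100):
        self.delay_steps = delay_steps  # assumed ramp length in steps
        self.value = 0.0

    def step(self, target):
        # Close a fixed fraction of the remaining distance each step.
        self.value += (target - self.value) / self.delay_steps
        return self.value
```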
These defining ecosystemic characteristics of equilibrium and adjustment are
explored by Haworth [3], who suggests the need to update the ecosystemic model to
reflect current broader thoughts on ecosystems, de-emphasizing stability and
highlighting imbalance and disorder. Haworth identifies two distinct models,
stemming from Di Scipio and Simon Waters. Di Scipio’s form is a cyclical closed
system in which traditional control structures of linear systems in interactive audio
works are dismantled in favor of a self-regulated ambient sensing. Meanwhile, Waters
moves away from tendencies to instrumentalise technology, instead highlighting the
role of human attention upon relations formed between each of the components
within a generated ecology. Waters posits, “The notion of Performance Ecosystem
enfolds all three concepts (performer, instrument, environment) and allows room for
the undecideabilities of the virtual domain” [4], depicting this interrelated nature of
ecosystemic components as primary over their intersection with the “virtual domain”.
While an extended examination of these two positions is beyond the scope of this
paper, for the purposes of this discussion it is suitable to work from this relatively
high-level distinction between the two. In so doing, the modified understanding of the
ecosystemic model posed by Haworth [3] and the situated performance ecology of
Waters [4] are most applicable to the system design of dispersion.eLabOrate.
Incorporating aspects of system self-observation while explicitly designing around
participants’ attentional dynamics, the generated sonic ecosystem deals with the
blending of influence between the system and environmental actors.
2.2 Deep Listening
The practice of Deep Listening was developed by composer Pauline Oliveros in the
1970s and refined into the 2000s. It is described by Oliveros as “a practice that is
intended to heighten and expand consciousness of sound in as many dimensions of
awareness and attentional dynamics as humanly possible” [1]. With a focus on
embodied listening to internal and environmental stimuli, the practice integrates
somatic awareness and energy exercises, listening meditations, and sound-making    
exercises that build upon an initial set of text-based pieces Oliveros created known as    
Sonic Meditations, with the Tuning Meditation (TM) being one of the earliest and
most widely-practiced. The Deep Listening community has grown through regular
workshops to include thousands of past and current practitioners, and is kept alive
through certified Deep Listening instructors, including the second author.
3 System Description
The project was created in the DisPerSion Lab at York University in Toronto, an
interdisciplinary research-creation space outfitted with a multichannel audio system.
For dispersion.eLabOrate, 12 loudspeaker channels mounted on floor stands were employed, with
positions chosen in order to mitigate extraneous feedback, while facilitating intended
feedback between the generated audio and the array of ceiling mounted microphones.
The 3x3 array of omnidirectional microphones ensures participant input is evenly
sensed throughout the space. The TM asks participants to inhale deeply, exhaling on a
note/tone of their choice for one full breath. On the following exhalation, participants
will then match a tone that another has made. Next, a new tone should be held that no
one else has made. This alternation between matching others and offering new tones
repeats until a natural end point is reached, as determined by the group listening
dynamic. In this project we also allowed participants to choose between noise or tone
at each cycle. As this was the primary context for which the project was intended, all
major aesthetic considerations and testing revolved around ensuring the piece could
be performed without distraction. The role of the system is to extend the potential for
the piece, rather than overtake it as a singular focus.
Fig. 1. System diagram of dispersion.eLabOrate depicting signal and data flow from
microphones, through pitch analysis, to audio generation, and output to the room.
The audio is received by the computer via an audio interface connected to the
microphones. Incoming audio is then accessed by Max/MSP, where the analysis and
audio generation occurs. The system comprises 9 modules, one for each of the
microphones in the array. Each module consists of a sinusoidal oscillator, as well as a    
white noise generator that is fed into a bandpass filter. The system’s output is located
spatially with regards to the location of the microphones within the room, placing
each of the 9 output signals in relation to their input source. This placement promotes
feedback at the localized level between a module’s output and accompanying
microphone, while additionally influencing adjacent output and microphone pairs.
The modules contain states which alter the behavior of audio generation and its
listening parameters. The four states are direct, smooth, average, and freeze. These
states differ in the way they map values to the module’s oscillator and filter, and
change parameters for data thresholding. States can be set individually for each
module, allowing varied behavior within localized areas of the room. Each audio
input is analyzed for fundamental frequency and pitch quality (an estimation of
analysis confidence). Fundamental frequency is calculated by the zsa.fund method [5]
and pitch quality estimation is extracted using the yin algorithm [6]. Yin was not used
for fundamental frequency tracking as it was found to increase feedback past a
desired level, hence the use of the FFT-based method. The 9 separate modules receive
the fundamental frequency and pitch quality from their respective microphone, which
are then sent to the module’s oscillator and filter. The fundamental is used as the
desired frequency for the oscillator as well as the centre frequency for a resonant band
pass filter. Values are only sent if a defined threshold for pitch quality is passed
(default 0.2), and pairing this quality gate with a noise gate on the original
microphone signal avoids having unintentional ambient stimulus/noise as input.
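As a minimal sketch of this per-module gating and crossfade logic (the actual system is a Max/MSP patch; the linear crossfade law and function signature here are our assumptions, with the 0.2 quality threshold taken from above):

```python
import numpy as np

QUALITY_THRESHOLD = 0.2  # default pitch-quality gate, as described above

def module_frame(f0, quality, osc_frame, noise_frame, prev_freq):
    """One module's audio frame: gate frequency updates, crossfade sources.

    osc_frame / noise_frame are the current frames of the module's
    oscillator and bandpass-filtered white noise.
    """
    # Pass a new frequency to the oscillator and filter only when the
    # analysis confidence clears the gate; otherwise hold the old value.
    freq = f0 if quality >= QUALITY_THRESHOLD else prev_freq

    # Low quality ("noisy" input) weights the filtered noise; high
    # quality (clear tones) weights the pure oscillator tone.
    mix = float(np.clip(quality, 0.0, 1.0))
    return (1.0 - mix) * noise_frame + mix * osc_frame, freq
```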
Moving between ostensibly simple states results in a potentially drastic difference of
behavior for the system’s output. Direct causes the analyzed fundamental frequency  
to be immediately reflected in the oscillator and filtered noise. Smooth sends values to  
the output sources ramped over time (default 50ms). Average sends out values to the  
sources after calculating a running mean during a given time window (default 200ms).
Freeze implements spectral freeze and sustain techniques [7], triggering them when  
input passes a set pitch quality threshold and pitch quality duration (default 1s). In    
addition to gating data flow, the pitch quality value is used to crossfade between the
two audio generation sources of each module. Low pitch quality is perceptually tied
to “noisy” input stimulus, while high pitch quality will result from clear tones. When
the quality value is low, output will be closer to the filtered noise. If the quality is
high, output will be towards the generated pure tone of the oscillator. Thus the      
resulting output of a module is congruous with the timbral quality (ranging from tone
to noise) at any given mic. Reverb was added to accentuate the spatial aspects of the
audio generation and was also controlled by the analyzed fundamental frequency at
the module level. Low frequency was mapped to a long reverb time, while high
frequencies were mapped to a short reverb time.
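A compact sketch of how the first three states might map gated frequency values, using the default times given above; the control rate and class layout are our assumptions rather than the patch structure (the spectral freeze behind the freeze state is sketched separately in the discussion below):

```python
from collections import deque

FRAME_MS = 10  # assumed control rate for this sketch

def direct(target):
    """direct: one-to-one mapping, reflected immediately."""
    return target

class Smooth:
    """smooth: ramp toward each new value over ramp_ms (default 50 ms)."""
    def __init__(self, ramp_ms=50):
        self.steps = max(1, ramp_ms // FRAME_MS)
        self.value = 0.0

    def update(self, target):
        self.value += (target - self.value) / self.steps
        return self.value

class Average:
    """average: running mean over window_ms (default 200 ms); output may
    jump toward the current input as the window turns over."""
    def __init__(self, window_ms=200):
        self.buf = deque(maxlen=max(1, window_ms // FRAME_MS))

    def update(self, target):
        self.buf.append(target)
        return sum(self.buf) / len(self.buf)
```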

4 Evaluation and Discussion
4.1 Tuning Meditation User Study
In order to systematically examine the perceived influence of dispersion.eLabOrate  
across its four states, a user study was conducted with five volunteers joining the two
authors, for a total of seven participants. The TM ran five times in a row: first without
dispersion.eLabOrate’s sensing to establish a “ground truth”. The four following runs
implemented the system states, moving through direct, smooth, average, and freeze.
Participants were allotted time to write personal comments and rest in between each
run. A survey was completed after the final run and before group discussion, to avoid
biasing personal reflections on the experience. The survey utilized a five-point Likert
scale, with the following questions for each run:
Q1: During this piece/experiment, could you differentiate any electronic sound output from that of human performers?
Q2: During this piece/experiment, could you recognize any tones/noises being matched (either yours or another person's) by another human performer?
Q3: During this piece/experiment, could you recognize any tones/noises being matched (either yours or another person's) by electronic sound output?
Q4: How confident are you about your recollection of run #N and related ability to answer these questions?
The responses show a trend in participants reporting less ability to recognize tones
being matched by fellow humans (Q2) in successive runs. This may be a product of
becoming more comfortable with the system as another participant/agent within the
piece. This is supported by participant comments: noting in run #3 that the
“electronics faded in background - less interested in triggering the electronics than  
using it as a source for unique tones” and in run #5 that “the electronics lost novelty  
(and) acted more as (a) participant in my mind”. The same participant noted of run #3
that “the electronics held (the) same importance as other performers”, whereas they    
earlier reported that they “spent (the) first few breaths figuring out what tones would
trigger the electronics”. Another participant noted in run #3 that “the machine felt like     
it was part of the sound field, but in a different way to the rest of the participants”
whereas by run #5 they noted that the run “had a very satisfying ending when the
machine faded out with the group”, pointing to its collaborative place within the
piece. While the surveys required a recollection of every run from the 1.5-hour
session, each participant reported high confidence in this recollection across every
run. The general trend of recognizing less human matching (from the quantitative
data) and increasing regard for the interactive system as an agent to be listened to and
responded to (qualitative comments) is quite interesting. This certainly must be
related to an increasing familiarization with the system, but it also may be related to    
the specific ordering of the states: while all system output was normalized to the same
volume (balanced to blend with group sound), state changes from runs 2-5 correlated
with an increased sustain of system output due to state behavior. This greater sustain,      
and related self-regulatory feedback, seemingly contributed to the increased sense of
presence reported, with participants noting that the sound was “less chaotic” and
contributed to the larger environmental context of the experience. This speaks to the
influence of the ecosystemic design on perceived agency.
4.2 Discussion
The “Tuning Meditation” Deep Listening piece is itself an emergent dynamical
process that could be seen as an acoustic form of an interactive sonic ecosystem.
When intersected with dispersion.eLabOrate, the result is a piece positioned within
the ecosystemic model through shared human/technological influence. Due to the
flexible number of participants that may take part in a performance/session of the TM,
variances in voice density may be quite apparent or perceptually unnoticeable due to
aligning breath cycles. “Feedback” is inherently present through the act of matching
another’s output and the ambient qualities of the piece are established by all
participants acting to form a self-regulating system. Additionally, the piece is run until
collective stimuli conclude, further positioning the importance of ambient content to
drive the output of the human “system” established between participants, noted in the
user study through the comments of the “machine” ending the piece in run #5. All of
these participant interrelationships are extended through the addition of the generated
audio of dispersion.eLabOrate, as behaviors not typically found in the original piece  
and “vocalizations” not achievable due to human physical constraints emerge from
the system. This was evident in the user study through comments that regarded the
environment as another agent, and has been further apparent to the authors across test
sessions. Incorporating behaviors such as freeze, the system is able to sustain tones
across gaps in participant stimuli, allowing continuous output to take place in the
piece even within small groups. This was shown to have a noticeable positive effect
on group coherence, with participants noting that the “interactive sound became more
meaningful”. While these extensions of human ability are present within the system,
an important design consideration was that output was still bound to the activity
provided by participants. System output is reliant on a “communal breath”, as the
cyclical deep exhalations on unique or matched tones drive the system’s audio input.
The system is at once an actor taking part in the meditation along with the other
participants, as well as the generator of the environment in which it resides. Each of
the system’s states presents a different possible form that sonic ecosystems can take
within an interactive audio environment.

Fig. 2. dispersion.eLabOrate was developed in the context of a project that explored different
input sensing, media output displays, and interaction designs for augmenting sonic meditations.

Where the direct state results in the real-time
modulation of input audio mapped to output found in systems that employ a linear    
communication flow, smooth, average, and freeze move the system’s behavior away
from this one-to-one mapping. Smooth results in a behavior that is clearly linked to,
but perceptually disjointed from, participant input. This state results in “audible
traces”, where generated output hangs in the environment and is perceivable over a
duration of time. These dynamic gestures of sound lack stable forms and fluctuate
around the system’s input (to varying degrees given a certain delay time). Audible
traces continue within both the average and freeze states. Average behaves similarly  
to smooth as its calculation window begins, and upon receiving a number of samples  
will begin to reach a steady-state and settle around a small range of tones. At the end
of the averaging window, the system's output may jump drastically to the current
input fundamental. This cycle of progressively static and eventually collapsing forms
is again self-referential in relation to feedback detected by the microphones,    
modulated and informed by the input of participants within the space. Freeze became  
arguably the most consistently intriguing of the states, as hanging tones and rhythmic
sustained patterns were formed as a product of a surpassed pitch quality threshold, in
combination with a surpassed quality duration. The frozen tones were also
spatialized to the location of the microphones detecting them, placing the live system
output and spectral capture of sound within the same point of emanation. Generated
output possibilities included beating waves and cyclical “following” behaviors
caused by new frozen tones being generated from past output, given their proximity to
adjacent microphones and source positions.
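The spectral freeze behind this state follows the techniques of [7]. As a rough illustration of the core idea only (not the Max/MSP/Jitter implementation), a single FFT frame can be held and resynthesized indefinitely by advancing each bin’s phase at its bin-center frequency; the frame size, hop, and windowing below are our assumptions:

```python
import numpy as np

N, HOP = 2048, 512          # assumed FFT size and hop
window = np.hanning(N)

def freeze_frames(grain, num_frames):
    """Yield windowed frames that sustain one captured grain via
    phase-vocoder-style resynthesis (overlap-add at HOP samples)."""
    spectrum = np.fft.rfft(window * grain)  # the frozen spectral frame
    mags = np.abs(spectrum)
    phases = np.angle(spectrum)
    # Per-hop phase advance at each bin's center frequency.
    bin_advance = 2 * np.pi * np.arange(len(spectrum)) * HOP / N
    for _ in range(num_frames):
        yield window * np.fft.irfft(mags * np.exp(1j * phases))
        phases = (phases + bin_advance) % (2 * np.pi)
```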
Reverb acted to facilitate positive feedback within dispersion.eLabOrate,
allowing the system to further obtain the self-observing behavior that is characteristic
of sonic ecosystems. Reverberation time is tied to the incoming analyzed frequency of
each of the microphones, where low frequency content results in a high reverberation
time and high frequencies cause a very short reverberation time. If a continuous low
tone were to be captured by the system, the reverb time would be quite large (~10
seconds). This continuous tone could then be disturbed by input at a higher frequency
than previously generated, causing the output of the system to spike in frequency,
reducing the reverberation time, and collapsing the generated sonic structure. This
behavior reflects Haworth's perspective on sonic ecosystems, “which de-emphasizes
stability and regulation in favour of imbalance, change and disorder” [3].
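The exact mapping curve is not specified here; as one plausible sketch consistent with the described behavior (roughly 10 s of reverberation for low tones, far shorter for high ones), a log-interpolated inverse mapping could look as follows, with all breakpoints being assumptions:

```python
import numpy as np

def reverb_time(f0, f_low=80.0, f_high=2000.0, t_max=10.0, t_min=0.2):
    """Map fundamental frequency (Hz) to reverberation time (s):
    low frequencies -> long reverb, high frequencies -> short reverb."""
    x = np.clip(np.log(f0 / f_low) / np.log(f_high / f_low), 0.0, 1.0)
    return t_max + x * (t_min - t_max)

print(reverb_time(80.0))    # ~10.0 s for a low tone
print(reverb_time(2000.0))  # ~0.2 s for a high tone
```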
5 Conclusion and Future Work
Created as a system to augment the sonic output of the Deep Listening “Tuning   
Meditation”, dispersion.eLabOrate drew upon an ecosystemic design approach in its
methodology, aesthetic output, and system considerations. Approaching perceived
sonic agency as a symbiotic relationship between human and machine output, the
work succeeds in placing human actors as integral to and active in the analyzed room
ambience. This active participation within the environmental ambience is reliant on
the generated output from the system, informed by chosen states for varied or uniform
system response. Through the states direct, smooth, average, and freeze,  
dispersion.eLabOrate sculpts the environment participants are engaged within, while
becoming an active participant itself within the framing of the piece. Cycling through
these module states illustrates the potential for multiple interaction paradigms and
system outputs from simple mapping changes within a single environment,
highlighting the complex role of collective human action in the presence of feedback
as found within the ecosystemic approach. Currently the system has the capability to
define localized behavior within the sonic ecosystem through its individual modules
which are related to each of the microphones in the space. The dry/wet content of
reverb was not connected to any input analysis feature for this project, yet
incorporating a reactive nature to this parameter could yield perceptually interesting
variations for dynamically defining the shape of the sonic ecosystem at a localized
level. This could also be applied to the function and assignment of states at the    
module level, defining multiple sonic locations in which output and systemic behavior
varies, yet their collective output and proximity coalesce into a cohesive sonic
ecosystem. This could allow autonomous reactive changes to occur as a result of      
decision making from the system, as determined by the structure of an exercise such
as a sonic meditation, or through participant input. Such dynamic localized behaviour
(either pre-set conditions or reactive) points towards exciting applications of sculpted,
diverse, and mutating sonic ecosystems for use through augmenting participatory
listening/sounding pieces such as those found within the Deep Listening tradition.
References
1. Oliveros, P.: Deep Listening: A Composer’s Sound Practice. iUniverse, Lincoln (2005)
2. Di Scipio, A.: ‘Sound is the interface’: from interactive to ecosystemic signal processing.
Organised Sound, vol. 8(3), pp. 269--277 (2003)
3. Haworth, C.: Ecosystem or Technical System? Technologically-Mediated Performance and
the Music of The Hub. Electroacoustic Music Studies Network (2014)
4. Waters, S.: Performance Ecosystems: Ecological approaches to musical interaction.
Electroacoustic Music Studies Network (2007)
5. Malt, M., Jourdan, E.: Zsa.Descriptors: a library for real-time descriptors analysis. In:
Sound and Music Computing Conference (2008)
6. de Cheveigné, A., Kawahara, H.: YIN, a fundamental frequency estimator for speech and
music. The Journal of the Acoustical Society of America, vol. 111(4), pp. 1917--1930 (2002)
7. Charles, J.-F.: A Tutorial on Spectral Sound Processing Using Max/MSP and Jitter.
Computer Music Journal, vol. 32(3), pp. 87--102 (2008)