Agostino Di Scipio
COMPOSITIONAL MODELS IN XENAKIS’ ELECTROACOUSTIC MUSIC(*)
In Iannis Xenakis' output, electroacoustic music plays a role that is quantitatively marginal but quite meaningful in its content. In the following observations, my aim is to show that highly relevant aspects of Xenakis' contribution to today's musical thinking are found in electroacoustic works like Concret PH (1958), Analogique A-B (1958-59) and Bohor (1962), up to La Légende d'Eer (1977), Mycenae-Alpha (1978), Voyage absolu des Unari vers Andromède (1989) and lastly the recent Gendy301 (1991) and S709 (1994).
Preliminary observations
In electroacoustic music, compositional strategies are mediated by tools of work and
thought - i.e. by a tèchne, an entire world of techniques and technologies - often
considered foreign to the field of the musicological discourse. However, this tèchne
represents an essential expression of the knowledge which converges in the
compositional process. As Pierre Schaeffer wrote, “...les idées musicales sont
prisonnières, et plus qu’on ne le croit, de l’appareillage musicale...”(1). The elaboration
of the sound material and the strategies of musical design are captured in actions,
procedures and tools which actually permit us to “record” and study the compositional
process and to observe how the composer's ideas are transformed into audible
musical objects.
My analysis below leans on the notions of model of sound material, model of musical design and control structure:
· A model of sound material is the operative description of "composing-the-sound". The analysis of electroacoustic music cannot forgo the study of this aspect, so essential and distinctive of this type of compositional praxis. Characterizing a model of sound material shows the features of sound (microstructures) that are cognitively available to the development of musical form (macrostructures), and may illustrate the theory of sound implicit in the way in which the composer represents, conceives of and works on and within the sound material.
· A model of musical design is the operative description of "composing-with-sounds". It illustrates the strategies of articulation of musical form, i.e. the way in which the material is worked on and the way in which the overall form is developed out of smaller units and components.
· A control structure represents the conceptual interface, as well as the
operative link, between microstructures and macrostructures. It implements the
relationship between the conception of material and the conception of musical
form, and thus illustrates the features of material actually used (from among
those available) in a certain musical construction.
This approach helps us, I believe, to tackle questions that are fundamental in music
analysis: what is the material of the musical work under observation? By what
methods was the material worked on? How did this way of working finally bring forth
the perceived musical structure? What relationship is there between sound and
music? Thus, it becomes possible to grasp meaningful features of the music theory
and the aesthetic hypotheses underlying the work in question. Music analysis is
understood here as a question of characterizing and evaluating the musical
knowledge mediated by the tèchne in the process of composing, and the way in
which such a mediation is accomplished(2). This is crucial insofar as the particular tools for such a mediation are consciously chosen or even specially designed by the composer - as in Xenakis' case.
***
Xenakis’ work with electroacoustics has developed in two main phases: first, at Pierre
Schaeffer's Groupe de Recherches Musicales, in Paris, from 1957 to the middle of
the 60's; later, at the Centre d'Études de Mathématique et Automatique Musicales (CEMAMu), founded near Paris in 1966 by the composer himself together with mathematicians and researchers in computer science.
Except for Analogique B (for tape, eventually superimposed on Analogique A, for
strings), Xenakis' work at the GRM resulted mostly in pieces of musique concrète. I
use the term here in its oldest sense (1948), pointing to a transformation of the
compositional process: “...une inversion dans le sens du travail musical... il s’agissait
de recueillir le concret sonore, d’où qu’il vienne, et d’en abstraire les valeurs
musicales qu'il contenait en puissance"(3). Bohor (8-track tape) is an outstanding example of such an attitude, a powerful, compact, sonorous fresco more than 23 minutes long, delirious and violent in its magma of sound matter. A relevant characteristic here is that this music is devoid of apparent phrase-like articulation, devoid of recognizable logical progression(4). This is perhaps explained by Xenakis' decision to focus on the potential for articulation in - rather than with - the sound material itself. I would like to illustrate this potential as explored in the composition of Concret PH and Analogique B - two works which, in rather different ways, both represent a mediation between the noisy violence of Bohor and the constructive, mathematico-philosophical approach in Xenakis' instrumental works of the same period, such as Achorripsis (1956-57) and the computer-generated ST/10-1 (1956-62).
From Concret PH...
Concret PH (1958) is a 2'45" long textural composition, a "cloud" filled with splinters of sound only vaguely differentiated among themselves. As is well known, this piece was conceived as an introductory event in Le Corbusier's Philips Pavilion, presented at the Brussels World's Fair in 1958 (the Pavilion also included Varèse's only tape work, Poème Electronique).
Sound design for Concret PH followed three steps. As a first step, the sound of hot
coals and burning material was recorded on tape. As a second step, very short
chunks were extracted from the recording and isolated from their original context.
Each chunk here corresponds to a single crackle, to a single creak of the burning coal - noise bursts lasting no more than a few hundredths (sometimes even a few thousandths) of a second. As expected, such sounds have a very broad spectrum (see fig.1). Indeed, at this level the determination of frequency becomes dependent on the duration: the shorter the sound impulse, the wider the frequency band. (In other words, following Heisenberg's "uncertainty principle", a precise localization in the time domain causes indeterminacy in the frequency domain). As a consequence, frequency and its perceptual attribute, pitch, are hardly controllable here, as it is impossible for human ears to integrate differences of pitch and amplitude over such brief moments(5).
As a third step, the short noise bursts were assembled to create a longer texture, by
piecing together innumerable scraps of tape. A series of such textures was obtained,
each having a particular temporal density dn = kn/Δt. Textures were then submitted to two distinct strategies of densification:
· layering of m copies of the same texture: D = m·dn (density of microevents controlled by means of a geometric series);
· layering of different textures, each with its own density: D = Σn dn.
In both cases, the result of layering is a qualitative enrichment of the sound texture,
heard as the fluctuating timbre of a rough dust of sound, with rare periodic patterns.
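As a rough illustration of these densification strategies, the following Python sketch (my own reconstruction, not Xenakis' tape procedure: the burst duration, the density values and the use of decaying white-noise bursts are all assumptions) scatters short noise bursts into textures of given density and then layers them, so that the overall density is the sum of the layer densities:

import numpy as np

SR = 44100  # sampling rate (assumed)

def noise_burst(dur):
    """A single wide-band 'crackle': a few milliseconds of decaying white noise."""
    n = max(1, int(dur * SR))
    env = np.linspace(1.0, 0.0, n)                   # simple decaying envelope
    return np.random.uniform(-1, 1, n) * env

def texture(density, length, burst_dur=0.005):
    """Scatter `density` bursts per second at random onsets over `length` seconds."""
    out = np.zeros(int(length * SR))
    for onset in np.random.uniform(0, length, int(density * length)):
        b = noise_burst(burst_dur)
        i = int(onset * SR)
        out[i:i + len(b)] += b[:len(out) - i]
    return out

# densification by layering: the resulting density is the sum of the layered densities
layers = [texture(d, 10.0) for d in (20, 50, 80)]    # densities d_n (assumed values)
mix = np.sum(layers, axis=0)
mix /= np.max(np.abs(mix))                           # normalize to avoid clipping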
In fig.2 readers can see a sonogram of the entire recording of Concret PH(6). Two
types of texture can be distinguished, one made of very short noise bursts (wide
frequency bands, with peaks at around 6000-9000 Hz), the other made of slightly
longer bursts (narrower frequency bands, with peaks at 4000-5000 Hz). Often the two
types overlap, e.g. in fragments 40”-50” (fig.3a) and 110”-120” (fig.3b). Occasionally,
one of the two is more in evidence - the first in fragment 30”-40” (fig.3c) and the
second in the brief excerpt 80.9”-86.6” (fig.3d) and later in fragment 100”-110”
(fig.3e)(7).
Features found in the large-scale spectral analysis are also found at smaller scales. For example, figure 3c (fragment 30"-40") shows a sonographic snapshot which is quite similar not only to that of the entire piece (fig.2) but also to that of a very short detail only 0.3" long (fig.4, 37.7"-38"). Fig.5 illustrates this phenomenon in fragment 90"-100" (fig.5a), and in its ever-smaller details (fig.5b, 94.5"-96"; fig.5c, 94.8"-95.2"). In short, this "zooming in" reveals the properties of a self-similar object, a surface in relief of fractal dimension: something halfway between a plane and a solid. An object of this kind, of dimension H = 2.6666667, is illustrated in fig.6(8).
The overall piece presents a rather simple macroscopic shape, going from the
sonorities of the first type of texture, in evidence at the beginning of the piece (fig.7a,
15”-20”), to the slightly less fragmented ones of the second type, in evidence towards
the end (fig.7b, 140"-145").
***
Xenakis' interest in particular acoustic phenomena - such as the shrill sound of cicadas, the drumming of rain, the human noises of crowd scenes, the thunder of battle(9) - is well-known. However, I would rather avoid speculation here as to the possible implications behind the sound of burning materials. The material of Concret PH, as we have seen, does not so much lie in this acoustic phenomenon as in the audible result of the dissection, selection and composition of innumerable noise impulses. That is the material handled by the composer, something which is itself designed, and largely devoid of perceptual qualities revealing its actual phenomenal origin (indeed, listeners tend to perceive these sounds as the sound of breaking glass). That material is then manipulated, as we have seen, by means of simple statistical principles, which in the end lead to the rather simple form of the whole piece.
It is to be emphasized that each fleeting creak of sound in Concret PH is a point of catastrophe and discontinuity: it represents a tiny explosion which transforms a bit of matter into energy. Not by chance, then, is the overall form of the piece so simple and static: simplicity at the macro-level allows the listener - and the composer himself! - to turn his/her attention to the morphology of the scraps of sound this music is made of, to the shortest processes by which matter is transformed into energy. Her/his attention, then, is turned towards the form of each of these sonic events.
In line with the musique concrète approach, here the model of sound material is a
mixture of manipulative procedures through which - with a definition found in the
system theory literature - "...noise is transformed by learning into a sign"(10). In this formulation, "learning" is perhaps the most important thing: it means interior accretion and awareness of the auditory experience, which finally becomes a project of art through the choice and isolation of particular features of the acoustic phenomenon. These are the features of the very material in the composition of Concret PH: every single component of the whole is marked by a creative intention, by an intentional act
that changes its nature and meaning. What we call material here, then, tends to lose
its connotation of something natural and becomes a designed object itself - an
artifact.
In spite of the simple cloud-like structure, the unfolding of Concret PH is rather
extraneous to explicit figurative criteria and devoid of mimetic or narrative references.
Despite Xenakis' debt to the Schaefferian aesthetics, there is something here which
sets Xenakis at the margins of musique concrète. I refer to an attitude which renders
the musical work a definitive artificium, the resultant of a project of art addressing
many time scales in the structure of music. Incidentally, this is mirrored - though hidden to the ear - by the self-similarity of the time-frequency representation of the piece as a whole.
... to Analogique B
Analogique B (1958-59) marks a departure from the concretist approach of pieces
like Concret PH and Bohor (this latter, however, was composed later, in 1962).
Completed partly at the GRM and partly at Hermann Scherchen's studio in Gravesano, Analogique B represents the very first attempt at welding musical invention to a conception of sound freed from classical acoustics.
This “electro-magnetic music for sinusoidal sounds”(11) is made up of many brief
sinusoidal signals recorded on tape. Here Xenakis' conception of sound lies close to
that put forth by physicist Dennis Gabor (and by Norbert Wiener, too). The possibility
is explored of composing sound by innumerable overlapping elementary signals -
sinusoidal sound grains. Such a possibility is expressed by Gabor's series expansion:
s(t) = Σn,k an,k g(t - kT) e^(jnΩt)
The elementary signal is represented by g(t). It has a real and an imaginary part - i.e., in Gabor's own description:
g(t) = exp[-α²(t - t0)²] · exp[i2πf0t]
where α is the parameter allowing us to establish both the duration of the grain and its bandwidth. For Gabor, therefore, "...elementary signals are harmonic oscillations of any frequency f0, modulated by a probabilistic pulse"(12). A Gaussian curve is utilized to envelop the pulse, as in fact it allows one to locate the elementary signal in both time and frequency with minimal spectral dispersion. Fig.9 shows Gabor's "grid," where the time/frequency continuum is quantized into minimal cells - which he called logons - of area ΔtΔf. An elementary signal is represented by a corresponding logon in the grid. The three graphs in fig.9 show the intuitive notion that the frequency spectrum depends on the temporal pattern of elementary signals, while the density in the particular time span remains the same.
Xenakis' approach is different with regard to the envelope curve of the grain. He leaves it constant - a rectangular envelope. Shown in fig.10 are a Gaussian grain and a rectangular grain, with their respective frequency spectra. As can be imagined, the latter is perceived as a brief explosion of noise in a frequency range centered around the frequency of the elementary signal.
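To make the difference concrete, here is a minimal Python sketch (an illustration of the two envelope types, not of Xenakis' actual tape procedure; the frequency and the value of α are assumptions) that builds a Gaussian grain in Gabor's sense and a rectangular grain of the kind used in Analogique B:

import numpy as np

SR = 44100                 # sampling rate (assumed)
dur = 0.04                 # grain duration: 0.04 s, as in Analogique B
f0 = 440.0                 # grain frequency (assumed)
t = np.arange(int(dur * SR)) / SR

# Gabor-style grain: a sinusoid under a Gaussian envelope centered on the grain
alpha = 150.0              # duration/bandwidth trade-off parameter (assumed value)
t0 = dur / 2
gaussian_grain = np.exp(-(alpha * (t - t0)) ** 2) * np.sin(2 * np.pi * f0 * t)

# Xenakis-style grain: the same sinusoid under a constant (rectangular) envelope;
# the abrupt onset and offset spread energy around f0, heard as a tiny noise burst
rect_grain = np.sin(2 * np.pi * f0 * t)

# comparing the magnitude spectra shows the wider skirts of the rectangular grain
gauss_spec = np.abs(np.fft.rfft(gaussian_grain))
rect_spec = np.abs(np.fft.rfft(rect_grain))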
The rectangular grains in Analogique B have a fixed duration of 0”04; amplitude and
frequency values are localized on a plane broken down into tiny cells with an area of
ΔgΔf - Xenakis calls it a “screen”. In order to obtain dynamical sounds, a "book" of
such “screens” is used, at a distance of Δt = 0”5 (see fig.11).
The global characteristics of a "screen" are 1) the density of grains in the volume ΔgΔfΔt; 2) the shape of grain distribution; 3) the degree of order/disorder in such a distribution. The strategy employed for Analogique B refers only to the first of them. It consists in a Markovian process implemented as a transition probability matrix (TPM): the evolution of the sound texture is traced by the probability that at a certain time t the screen's parameters will be modified with respect to t-Δt. A simple TPM looks like the following(13):
       X     Y
X     0.2   0.8
Y     0.8   0.2
and represents the probability
· that symbol X will be followed by another X (20%);
· that X will be followed by Y (80%);
· that Y will be followed by X (80%) and
· that Y will be followed by another Y (20%).
In Analogique B, two TPMs were utilized:
       X     Y                 X      Y
X     0.2   0.8          X    0.85   0.4
Y     0.8   0.2          Y    0.15   0.6
in the determination of amplitude, density and frequency values. The decision as to whether the former or the latter should be used was made at any given time on the basis of several rules which, for brevity's sake, are not described here. X and Y are associated with two sets of values selected from among 16 regions of frequency (each corresponding to an octave), two sets selected from among four regions of amplitude (in phons) and two sets selected from among seven regions of density (in logarithmic units). Once the set has been selected, the particular values in a set are chosen on a purely random basis.
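As an illustration of how such a first-order Markov process can drive the choice of parameter sets from screen to screen, here is a Python sketch (not a reconstruction of Xenakis' working method; the matrix is read with columns as the current state and rows as the next state, consistent with the stationary probabilities given below, and the frequency regions are placeholders):

import numpy as np

# the second TPM used in Analogique B, entry [next_state, current_state]
# (each column sums to 1, i.e. it gives P(next | current))
tpm = np.array([[0.85, 0.40],     # to X:  from X, from Y
                [0.15, 0.60]])    # to Y:  from X, from Y
states = ["X", "Y"]

# placeholder parameter sets: each state selects one set of frequency regions
# (the regions actually used by Xenakis are not reproduced here)
freq_regions = {"X": [0, 2, 5, 9], "Y": [1, 4, 7, 12]}

rng = np.random.default_rng(0)
state = 0                          # start in state X (arbitrary)
screens = []
for step in range(20):             # one decision per screen, i.e. every 0.5 s
    state = rng.choice(2, p=tpm[:, state])            # next state from current column
    region = rng.choice(freq_regions[states[state]])  # then a purely random value
    screens.append((states[state], int(region)))
print(screens)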
To successfully predict the evolution of the parameters, it’s necessary to answer this
question: what is the system’s general tendency during a certain number of
transitions? In the case of our first example matrix, we have the following
relationships:
X’ = 0.2X + 0.8Y
Y’ = 0.8X + 0.2Y
so that, after only eight transitions, a stationary state is reached. Probability levels in the stationary state are X = 0.5 and Y = 0.5. For the second matrix, instead, X = 0.73 and Y = 0.27 are reached. It is possible to calculate the mean entropy of each TPM in a stationary state as
H = HX·X + HY·Y
in which HX represents the entropy of the transitions out of state X and HY the entropy of the transitions out of state Y, each calculated as
Hs = -Σi pi log pi
(the pi being the transition probabilities out of state s, as established by the matrix).
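These figures can be checked numerically, under the same column-as-current-state reading of the matrices as above (a sketch, not part of Xenakis' own formalization):

import numpy as np

def stationary(tpm):
    """Stationary distribution of a column-stochastic 2x2 TPM, by power iteration."""
    p = np.array([0.5, 0.5])
    for _ in range(100):
        p = tpm @ p
    return p

def mean_entropy(tpm, stat):
    """Mean entropy H = sum over states s of stat[s] * H_s, with H_s taken over
    the transition probabilities out of s (in bits)."""
    h_states = [-np.sum(col * np.log2(col)) for col in tpm.T]
    return float(np.dot(stat, h_states))

tpm1 = np.array([[0.2, 0.8], [0.8, 0.2]])
tpm2 = np.array([[0.85, 0.4], [0.15, 0.6]])
for m in (tpm1, tpm2):
    s = stationary(m)
    print(np.round(s, 2), round(mean_entropy(m, s), 3))
# the first matrix settles at (0.5, 0.5), the second at roughly (0.73, 0.27)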
A further macroscopic control utilized by Xenakis is called the exchange protocol
between perturbation states and stationary states. It is employed to determine TPM
internal parameter values for each section of the piece - 8 in all, for a total of only
2'35". The exchange protocol, thus, establishes the alternation between sound behaviors of growing entropy - or perturbations, P - and more static behaviors - or equilibrium points, E(14).
These details give us a pretty clear picture of the various levels in the compositional
process: 1) The granular representation of sound provides the composer with
minimum discrete elements; 2) Manipulating these elements (microcomposition)
results in the actual sound material for the entire work; 3) The concept of “screen”
represents the control device connecting microcomposition with criteria of short-term
(TPM) and long-term musical design (exchange protocol, P and E).
As in Concret PH, here we see again a continuity of micro- and macro-level. However, this time such continuity is captured in a rather formalized system, whose states in time finally shape the resultant sound object. As with later examples of algorithmic composition, the compositional approach here seems to consolidate in a "mechanism" that the composer lets manifest itself(15). The perturbations that temporarily disrupt the system's stability prove to be nothing but a further means of manifesting the mechanism's functionality and consistency.
In short, the composition of Analogique B reflects the quantum-theoretical approach that Xenakis borrows from Gabor(16) as well as the statistical methods that Xenakis also utilized in instrumental works during the late 50's.
***
In the composition of Analogique B, the quantization of the sound continuum down to
a finest time scale enables the composer to instantiate programmable, formalizable
compositional processes within the sound itself. This has since become typical of granular approaches to digital sound synthesis. In general, while sound synthesis based on Fourier's paradigm - a summation of perfectly harmonic sine functions, which are impossible to locate in time - leads one to think of and work on the sound material in terms of spectral characteristics, i.e. in the frequency domain only, granular representations lead one to work on it in its micro-time structure. "Composing-the-sound", then, requires the determination of time relationships among innumerable finite elements. In Analogique B, the conceptual tool to accomplish that is the transition probability matrix. More recently, composers have employed other techniques, such as random distribution within either static or dynamical boundaries (as used by Curtis Roads and Barry Truax(17)), and mathematical models of non-linear, chaotic systems (as used in my own compositional work(18)). In most
approaches of this kind, the morphological and perceptual properties of sound and
music are to some extent dependent on the coherence of micro-level processes,
whose functionality, then, is to bring forth the overall form of a sound object or
texture.
Albert Bregman, renowned scholar in the field of auditory perception, has pointed out
that granular representations can be pertinent in modelling dynamical sound events
of particular complexity (transient phenomena, turbulence, auditory images rich in
noise components and compounded of innumerable microscopic events). He adds, however, that this is only possible if an adequate description is developed of the way in which grains succeed and overlap each other(19). That is to say, only if the composition of grains is adequately achieved.
For many, the sonority of Analogique B is rather disappointing in comparison to the theoretical implications in play. This is explained not only by the poverty of the technical means available at the time of its composition. Indeed, it can be explained, as we'll see later, also by the strategies themselves pursued by Xenakis: stochastic laws seem to prevent the emergence of a higher structural level; they seem not to be capable of bringing forth those "sonorities of second order" Xenakis expected to achieve(20). The reason for this may be that his formalized system is not an eco-system, i.e. it is not nourished by retroaction and "environmental" conditioning, it does not change itself with the changing context. It has no memory (nor does it want any). The relevance of this point is hardly trifling - and I'll come back to the issue in my final discussion.
Still, emblematic in this music is Xenakis' gesture aimed at generating, by one and
the same process, both complex timbral entities and the overall form of the work, so
that the passage between micro- and macro-structure is rendered continuous(21). As is well known, a similar compositional attitude was adopted by many composers at the time of the realization of Analogique B. Xenakis' contribution on these matters, rarely understood, represents a position of a profoundly different nature compared to that of Stockhausen, who, however, has been the most famous supporter of temporal and structural unity in electroacoustic music(22).
Mycenae-Alpha
Mycenae-Alpha (1978) was the first music realized by means of the UPIC computer system - the "polyagogic" computer unit of the CEMAMu. It was recorded on monophonic tape although its public performance calls for either two or four loudspeakers(23). On the UPIC, composing is carried out by tracing lines on a graphic pad connected to the computer. The graphs are stored and then utilized either as audio waveforms or as amplitude envelopes. They can also be utilized as pitch envelopes (glissandi) or tempo curves(24).
The sound is generated by table look-up synthesis, the most straightforward standard synthesis method known in computer music. The sound sample amplitudes are looked up from a short array (wavetable) where the waveform samples are stored, stepping through the array by an increment σ proportional to the desired frequency:
σ = Np·St·(1/p)
wherein Np is the number of points in the wavetable, St is the sampling period and p is the period of the signal to be generated. The method can be formulated in this way:
s(i) = A·g(φ)
wherein g(φ) is the sample read from the array location φ, namely
φ = [φ + σ(i)] mod Np
As we can see, σ varies with the index of discrete time, i. In the UPIC system, σ is controlled by the curves the composer intends to use as pitch profiles.
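A minimal Python sketch of this table look-up scheme, with a time-varying increment standing in for a drawn pitch profile (the wavetable contents, the profile shape and the sampling rate are assumptions of this sketch, not UPIC internals):

import numpy as np

SR = 44100                                   # sampling rate (assumed)
Np = 2048                                    # points in the wavetable
# an arbitrary "drawn" waveform: a sine plus a weak third harmonic (assumed)
table = (np.sin(2 * np.pi * np.arange(Np) / Np)
         + 0.3 * np.sin(6 * np.pi * np.arange(Np) / Np))

def table_lookup(freqs, amp=0.8):
    """One sample per entry of `freqs`, stepping through the wavetable by
    sigma = Np * freq / SR (i.e. Np * St / p) and wrapping modulo Np."""
    out = np.zeros(len(freqs))
    phi = 0.0
    for i, f in enumerate(freqs):
        out[i] = amp * table[int(phi)]
        phi = (phi + Np * f / SR) % Np       # phi = [phi + sigma(i)] mod Np
    return out

# a "pitch profile": a two-second glissando from 200 Hz to 800 Hz (assumed)
profile = np.linspace(200.0, 800.0, 2 * SR)
signal = table_lookup(profile)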
The "score" of Mycenae-Alpha (fig.12) consists of a diagram illustrating the temporal progression (horizontal axis) of σ values (vertical axis). It gives no information as to the actual pitches heard (except in case the wavetable utilized stores the samples of a single-period sinusoid). Nor does it give information about the particular wavetable selected for the synthesis. However, it shows that the piece is made up of 13 sections, each having a characteristic graphic outline. The shortest section is section 4, only 5" in duration; the longest is section 13, lasting 1'01". Section 13 shares an identical shape with a much shorter section (1'01" as against 24"). The low, fine weave in section 3 is almost perfectly mirrored in section 6, but the latter ends with different pitch profiles. There is also a strong similarity between sections 11 and 12. Many sections feature tree-like shapes (like section 5, where each line follows an independent path). Only the transition from section 9 to 10 is rather smooth; in all other cases the beginning of a new section is marked by abrupt changes in the music.
From the point of view of computer music system design, UPIC certainly represents one of the earliest examples of a uniform and coherent music interface. However, it likewise has strongly constraining aspects. I shall mention two of these from among those which, in my opinion, betray a regression in comparison to the earlier works:
· the notion of timbre appears reduced here to one "parameter" among others, understood as the set of harmonics of a periodic waveform (as is typical of table look-up synthesis); in earlier works, as we have seen, timbre tended to be understood as dynamical form, the epiphenomenon of microcomposition;
· the distinction between "musical structure" and "sound structure" has been reinstated (the fact that the same profiles could be used as either audio waveforms or pitch functions does not prove the contrary). That distinction was implicitly put into question in the earlier works, which reflected a notion of composition that shares little with instrumental and vocal music.
Similar observations also apply to Voyage absolu des Unari vers Andromède (for 2-track tape), realized with an updated version of UPIC(25). Despite the different formal reach (Mycenae-Alpha lasts 9'36" and Voyage more than 15'), Voyage too is divided into sharply contrasting sections. Both works extensively feature glissando textures(26) contrasting with constant-pitch structures. Overall, differentiated sonorities alternate according to the following oppositions:
· sounds having wide-band spectra
· continuous spectrum: noise striations (at times rhythmically animated);
· discrete spectrum: pulse trains.
· sounds having limited-band spectra
· harmonic spectrum: smoothed pulse trains;
· almost-pure sounds: one single spectral line.
Introducing the UPIC system(27), Xenakis insists on the impact of computer graphics on musical didactics, and on the potential of compositional design laid out graphically. Using the UPIC, composition takes place first of all within the flat world of the time-σ plane; auditory experience takes place only after this drawing of lines. Although it has raised the interest of many(28), I find this approach a debatable one in that it implies an a posteriori association of a sound pattern with a visual pattern. In a sense, the sound is conceived almost as if it were the by-product of gestures withdrawing from the flow of time - which, instead, is the essential dimension in the experience of all acoustic phenomena, including music. (Notice, in section 8 of the Mycenae-Alpha score, that some pitch profiles move back in time!).
In this sense, Xenakis' work with the UPIC seems to conform literally to his well-known statement that "time could be considered as a blank blackboard, on which symbols and relationships, architectures and abstract organisms are inscribed"(29). I shall return later to the issue of time in Xenakis' music, as it cannot be reduced - I do think - to such a thoroughly reductionist position.
Sound as a stochastic phenomenon. From La Légende d'Eer...
In the first English edition of his book Formalized Music, Xenakis described an approach to sound synthesis which is completely independent of the Fourier paradigm. The approach was based on initial experiments Xenakis pursued at CEMAMu and at Indiana University (Bloomington), and was further pursued in the realization of La Légende d'Eer (1977), worked out at CEMAMu and at the Cologne WDR, and later in the realization of Gendy301 (1991) and S709 (1994).
La Légende d'Eer is a long continuum of sound lasting around 46 minutes, recorded on a 7-track tape(30). It is made up of both computer-synthesized sounds and recorded samples. The latter include the sounds of African and Japanese instruments, some of them processed at the Cologne studio (listening to the piece, these sounds seem to resonate as from earlier electroacoustic works, especially Diamorphoses and Orient/Occident). The computer sounds were obtained in part with the UPIC system and in part with stochastic methods(31). At first, the music is a rather fine thread of high-pitched synthetic sounds. Later it gets denser and denser until it becomes a massive texture mixing up several distinct sources. Processed instrumental samples are present in an ever-increasing manner as the piece unfolds. Except for the initial pitched sounds, most synthetic sounds in the piece are quite noisy. In my observations below, the focus is on the stochastic synthesis methods through which these sounds were achieved. Then I'll discuss the details of the musical application of those methods in Gendy301.
***
The sound representation domain addressed by Xenakis is the time domain of the
acoustic waveform. The idea is to directly generate the sound signal by using
probabilistic functions. The signal is seen as the path traced by a point animated by
incessant Brownian motion, capable of ranging continuously from more or less stable
periodic patterns to rather irregular curves devoid of periodicity. The mathematical
constructs Xenakis used to this end combine into a group of 8 methods of sound generation, all of which, despite some differences, share the same strategy: to establish a condition of initial disorder and introduce means by which it can either be reduced or increased.
Reasoning behind this approach stems from two considerations:
· the complexity of natural sound phenomena cannot be reduced to the terms of the Fourier paradigm, because its limitations heavily condition the sonological applications (both with analog and digital technology);
· sound design should be regarded as an act of creative imagination; it cannot
be relegated, therefore, to the simulation of known sounds(32).
The latter consideration is extremely important. In the XXth century, musical forms
have gradually been freed from the re-production of pre-established frames. The
inspiration behind a music of electronically produced sounds springs from extending
this freedom to the realm of the sound material itself. Therefore, simulation of pre-
established sonic materials (e.g. the sound of musical instruments) is excluded on
the basis of this position.
As mentioned above, in the approach of direct synthesis developed by Xenakis, the instantaneous signal amplitude can be thought of as an aleatory variable and can be defined in terms of the probability of occurrence of allowable values. The whole of such probabilities can be expressed by relating a value x of the variable X to the probability P of its occurrence. Thus, by means of
F(x) = P [X ≤ x]
we show that P is the probability that X will assume a value included in the interval (-∞, x]. We can also characterize F by the probability density function f expressing the probabilities of a continuous random phenomenon:
F(x) = ∫ f(t) dt, with the integral taken over (-∞, x].
The functions most often used by Xenakis are (see fig.13):
· uniform: f(x) = 1, with 0 < x < 1
· Cauchy: f(x) = α / [π(α² + x²)]
· Gauss: f(x) = [1/(σ√(2π))] e^(-(x-μ)²/(2σ²))
· exponential: f(x) = (λ/2) e^(-λ|x|)
all of which can rather easily be simulated on the computer using simple algorithms(33).
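For instance, the following Python sketch draws samples from each of these distributions by elementary means (inverse-transform mappings of a uniform random number and, for the Gaussian, the sum-of-uniforms approximation; this is one simple way to do it, not necessarily the routines used at CEMAMu):

import numpy as np

rng = np.random.default_rng(1)

def uniform(n):
    return rng.random(n)                                # f(x) = 1 on (0, 1)

def cauchy(n, alpha=1.0):
    # inverse transform: x = alpha * tan(pi * (u - 0.5))
    return alpha * np.tan(np.pi * (rng.random(n) - 0.5))

def gauss(n, mu=0.0, sigma=1.0):
    # sum of 12 uniforms minus 6 approximates a unit normal (central-limit trick)
    return mu + sigma * (rng.random((n, 12)).sum(axis=1) - 6.0)

def exponential_bilateral(n, lam=1.0):
    # f(x) = (lambda/2) exp(-lambda |x|): exponential magnitude with a random sign
    mag = -np.log(1.0 - rng.random(n)) / lam
    return mag * rng.choice([-1.0, 1.0], size=n)

print(cauchy(3), gauss(3), exponential_bilateral(3))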
There is a connection between the way Xenakis had conceived of sound material in Analogique B and Concret PH and this digital sound synthesis approach. The quantum hypothesis he followed during the late 50's is here taken to its extreme consequences. The quantization of the time continuum earlier reached the granularity of elementary signals as short as a few hundredths of a second. It is now even finer: the entity subjected to compositional decisions is the digital sample itself, the instantaneous pulse necessary for a digital computer to generate sounds. In Xenakis' experiments, samples succeeded each other with a lapse of about Δt = 0.00002" (sampling rate = 50 kHz). The signal was obtained by directly calculating the sample values in discrete time, i.e. operating in a bi-dimensional space ΔtΔg. Frequency and timbre characteristics, therefore, become the resultant of particular sample patterns; they are to be regarded as emergent properties, i.e. epiphenomena of a process that occurs at the lowest technically available time scale.
Broadly speaking, Xenakis' stochastic synthesis belongs to a class of methods that
can be formulated quite simply as:
s(i) = f(s(i-k))
in which function f establishes the relation between the amplitude of the signal at time
i and its amplitude at time i-k. If f does not implement any consistent acoustic model,
then we may call this a non-standard synthesis approach.
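As a toy instance of this class of methods (not a system actually used by any of the composers mentioned below), one can let a simple non-linear map produce the signal sample by sample, so that the waveform is whatever the iteration s(i) = f(s(i-1)) happens to yield:

import numpy as np

def nonstandard_logistic(n, r=3.9, s0=0.4):
    """Non-standard synthesis sketch: s(i) = f(s(i-1)) with f a logistic map.
    No acoustic model is implied; with r = 3.9 the iteration is chaotic and noisy."""
    s = np.empty(n)
    s[0] = s0
    for i in range(1, n):
        s[i] = r * s[i - 1] * (1.0 - s[i - 1])
    return 2.0 * s - 1.0          # rescale from (0, 1) to (-1, 1)

signal = nonstandard_logistic(44100)   # one second at 44.1 kHz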
In computer music, non-standard synthesis has been used not only by Xenakis but
also by composers like Herbert Brün (in his compositions realized with the
SAWDUST program) and Gottfried M.Koenig (SSP program). More recently it has
been utilized by younger composers (e.g. Paul Berg, Arun Chandra, Jonathas
Manzolli, Michael Hamman). The important point shared by all non-standard methods
is that the sound generating process depends on the composer’s arbitrary invention.
Sound material shifts into the realm of the possible - paradoxically appearing de-
materialized, virtual, i.e. not pre-existent to an act of creation and design. The
composer becomes integrally responsible for the musical artifact, by means of a
thorough continuity of the compositional approach. He composes sound and music at
once. A perfect instance of this is Xenakis' recent Gendy301.
...to GENDY301
In 1991, twenty years after his first experiments with direct synthesis, Xenakis took up this research line once again. He marked his return by writing a computer program called GENDY (GENeration DYnamique), which he wrote himself in BASIC and which was later translated into C by his colleagues at CEMAMu. The program implements an algorithm of sound synthesis called dynamic stochastic synthesis(34), and is called up by another program, PARAG3, which has the role of a higher-level control structure. Gendy301, a tape piece lasting 18'45", is the first work using dynamic stochastic synthesis. The world premiere took place at the Montreal International Computer Music Conference (October 1991). The piece is now available on CD (Neuma 450-86), but there it bears the slightly different title of Gendy3. The two recordings being perfectly identical, I think the title was changed for editorial reasons which are, of course, completely irrelevant to us here (in the files available at Xenakis' Paris publisher, Salabert, Gendy3 is officially claimed to have been premiered in Metz, France, in November 1991).
Dynamic stochastic synthesis entails distorting a waveform in time and amplitude, calculating the amount of transformation through stochastic variations. It assumes that the sound signal is traced by a succession of waveforms (indexed by j), each consisting of I linear segments (fig.14a). The end coordinates of the i-th segment in the j-th waveform are xi,j, yi,j. Phase continuity between waveforms j and j+1 is assured by establishing
(x0,j+1, y0,j+1) = (xI-1,j, yI-1,j)
The method can be described by saying that the end coordinates of segment i in the j-th waveform are stochastic variations applied to the end coordinates of segment i in waveform j-1. That is:
xi,j+1 = xi,j + fx(z)
yi,j+1 = yi,j + fy(z)
where fx(z) and fy(z) return positive or negative values, given an argument z (itself a random number with uniform distribution, i.e. white noise). Samples are computed by linear interpolation between the initial and end points of each segment, each new sample being incremented by the segment's slope:
s(t+1) = s(t) + (yi+1,j - yi,j) / ni,j
where ni,j is the number of samples in the i-th segment. The segment duration is:
di,j = (ni,j - 1) / sampling rate
Therefore the j-th waveform will have a total duration (period) of:
Dj = Σi di,j
We have three possibilities:
· transformation of the ordinates only
yi,j+1 = yi,j + fy(z)
xi,j+1 = xi,j
(which means: only the amplitude values are modified, thereby causing a change in
the spectrum);
· transformation of the abscissae only
yi,j+1 = yi,j
xi,j+1 = xi,j + fx(z)
(which means: alterations of Dj, which cause changes of the fundamental frequency of the sound, and therefore of pitch; in some cases this also amounts to audio-rate frequency modulation, with related spectral enrichment);
· transformation of both coordinates
xi,j+1 = xi,j + fx(z)
yi,j+1 = yi,j + fy(z)
(which means: alterations of both spectrum and pitch; fig.14a and fig.14b show a
transformation of this type).
As the amplitude of the signal is represented by 16-bit integers, the values of yi,j must be kept within the interval [-32767, +32767] to avoid saturation. Moreover, overly large random variations of xi,j can lead to wild frequency modulations, so they, too, must be kept within a given range. To deal with these problems, Xenakis resorts to the notion of elastic barrier, a control process with three arguments:
fx(z) ← MIR [fx(z), fxmin, fxmax]
fy(z) ← MIR [fy(z), fymin, fymax]
ni,j+1 ← MIR [ni,j+1, Nmin, Nmax]
yi,j+1 ← MIR [yi,j+1, Ymin, Ymax]
wherein fxmin and fxmax determine the margins of stochastic variations for the abscissae, fymin and fymax determine the margins for the ordinates, Nmin and Nmax determine the range of samples per segment (the duration range for di,j) and lastly Ymin and Ymax are equal to -32767 and +32767 (or lower amplitude values). This causes a mirror-like reflection of excess values back within the allowable limits.
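The following Python sketch gives the gist of the procedure (a simplified reconstruction from the description above, not Xenakis' GENDY code: it uses uniform random steps for fx and fy, a small number of segments and illustrative barrier values):

import numpy as np

rng = np.random.default_rng(2)

def mir(v, lo, hi):
    """Elastic barrier: reflect v back into [lo, hi] until it fits."""
    while v < lo or v > hi:
        v = 2 * lo - v if v < lo else 2 * hi - v
    return v

def gendy_like(n_segments=12, n_waveforms=400, y_step=2000, n_step=3,
               y_lim=(-32767, 32767), n_lim=(4, 60)):
    """One voice of dynamic-stochastic-like synthesis: each waveform's breakpoints
    are random-walk variations of the previous waveform's breakpoints."""
    ys = rng.uniform(-8000, 8000, n_segments)             # breakpoint amplitudes
    ns = rng.integers(10, 30, n_segments).astype(float)   # samples per segment
    out = []
    for _ in range(n_waveforms):
        # stochastic variation of ordinates (spectrum) and abscissae (pitch), mirrored
        ys = np.array([mir(y + rng.uniform(-y_step, y_step), *y_lim) for y in ys])
        ns = np.array([mir(n + rng.uniform(-n_step, n_step), *n_lim) for n in ns])
        # render the waveform: linear interpolation between successive breakpoints
        for i in range(n_segments):
            y0, y1 = ys[i], ys[(i + 1) % n_segments]
            out.append(np.linspace(y0, y1, int(ns[i]), endpoint=False))
    return np.concatenate(out) / 32767.0                  # normalize to [-1, 1]

signal = gendy_like()

Letting only the ordinates walk keeps the pitch fixed while the spectrum wanders; letting only the segment lengths walk produces glissandi, much as the three cases listed above describe.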
The PARAG3 program supplies the GENDY program with the following parameters:
· number I of segments per waveform;
· duration of the synthesis process;
· type of stochastic function fx;
· type of stochastic function fy;
· arguments of the elastic barriers.
For fx and fy, one can select among stochastic functions of the type already illustrated (uniform, exponential, normal, Cauchy). In Gendy301, these functions are also used at the level of the macro-structure. The PARAG3 program assigns a "time-field" to 16 different voices - or simultaneous synthesis processes; a field may be passive (silence) or active (synthesis triggered), and its duration is calculated by an exponential law based on a density value D (so that the mean duration is 1/D):
d = (-1/D) log(1 - z)
(as usual, z is a random number in the interval [0,1)).
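A sketch of how such a voices-by-time-fields layout can be generated (again in Python, with an assumed density and an assumed probability of a field being active; these are not the actual PARAG3 settings):

import numpy as np

rng = np.random.default_rng(3)

def time_fields(n_voices=16, total_dur=60.0, density=0.2, p_active=0.5):
    """For each voice, chain exponentially distributed fields (mean duration
    1/density) and mark each field active or passive."""
    fields = []
    for voice in range(n_voices):
        t = 0.0
        while t < total_dur:
            d = -np.log(1.0 - rng.random()) / density    # d = (-1/D) log(1 - z)
            fields.append((voice, t, min(d, total_dur - t), rng.random() < p_active))
            t += d
    return fields                    # (voice, start, duration, active) tuples

for f in time_fields()[:5]:
    print(f)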
Overall, this formalization closely resembles the approach taken for Achorripsis and
ST/10(35), except that the musical parameters for the instrumental lines are now
replaced with initialization parameters for the stochastic synthesis. Like Achorripsis,
the large-scale framework of Gendy301 can be represented by a matrix (voices×time
fields) with a certain density of active cells, as, for example, in figure 15.
Some of the 11 sections in Gendy301 show a predominance of harmonic spectra, with much or little glissando activity. This is the case in section 1; fig.16 shows the sonogram of its first 30", where very wide harmonic spectra (partials up to 8000-9000 Hz) and the parallel striations of the harmonic glissandi can be seen(36). Wide noise bands are found in other sections; in fig.17 we see the sonogram of a 7" fragment corresponding to the passage between sections 3 and 4 (ca. 5'10" from the beginning). In section 4 (fig.18a), acoustic energy is statistically spread over a very wide frequency range, up to 17 kHz (notice, however, the strange "hole" between 10 and 11 kHz). By observing fig.18a, we can also understand how dense the temporal articulation is at very high frequencies; we can also glimpse a harmonic structure which appears three times behind the wall of noise. In fig.18b the first of these three interruptions in the noise band is enlarged, while in fig.18c we see the sonogram of a sound fragment of only 0"5 - nearly white noise.
Sounds in Gendy301 are both new and ancient in the context of electroacoustic and
computer music. They are ancient insofar as they reflect the classical opposition
between harmonic spectrum and noise spectrum, as well as the proliferation of
glissandi so peculiar to Xenakis' orchestral works. And yet, they are new each time
as they result from a microcompositional process based on an indeterministic model -
and also because they are absolutely mobile, so dynamical as to reach quite
deafening extremes.
Discussion
There is a thread uniting pieces like Analogique B and Concret PH to Gendy301 (and S709, not described here): Xenakis tends to create a mechanism that, once started, exhibits itself in time, rendering auditorily explorable the potential of knowledge captured in the theoretical premises and assumptions behind the model-mechanism. Xenakis makes electroacoustic music a vehicle for a theory of sound of a level adequate to his theory of music. In this medium he seems to circumscribe the compositional techniques of his instrumental music. Xenakis' art then becomes an utterly cultural fact - a creative gesture which no longer leans on materials pre-existent to the artistic concept. This represents a most important challenge in contemporary music, one which is both highly influential and yet very poorly understood. Also, the approach implies the possibility of interpreting music technology in the special sense of an integral poiesis, as a cognitive disposition for exploring possible results (sound and music), rather than as a powerful means for the realization of musical ideas too often highhandedly considered independent of the technical substratum. That represents, in a sense, a condition sine qua non for the reconciliation of music theory and musical praxis in our times.
Even in the work of composers reputed highly sensitive to the sound material's inner vitality, the ontological status of sound appears as that of something pre-existent, an external object-nature subjected to human channeling and transformation. Technology then plays the role of a microscope revealing the sound matter's most intimate details. And still this matter is given before the act of composing, as a natural, pre-existing process in which the composer grasps details of musical form. Gérard Grisey writes: "the object allows us to understand the process in its Gestalt, and to effect a system of combinations"(37). So it happened that some have used the computer to analyze pre-existing sounds and to use the analysis data as a compositional structure. Tristan Murail's orchestral writing, for example, "simule (...) des spectres naturels donnant à entendre la métamorphose d'un son inharmonique de cloche en un son harmonique de trompette"(38). Here, the technological medium is important for its instrumental function - its computational "power" and precision. In a way not foreign to widespread platitudes and beliefs, tèchne is still an instrument for the control of Nature: tools are exploited to put sound to use. In contrast, Xenakis seems to conceive of technological means as a concrete way for the sonic matter to become the very result of invention. The sound object is the result of composition, and therefore is wholly comprised in the world of the artificial - it represents no pre-existing nature to be contemplated in its perfection, nor does it represent an aim that results from an exploitation of natural forces.
What the sound material “materializes” is the creative intent from which it takes origin
and form. Musical form, then, begins with a network of relationships in which it is
difficult to glimpse a “syntax”, since this network does not determine relations of
cause and effect at the level of the elements (or symbols) of the musical flow, but at
the level of the minimal time units within sound. By acting “within” the sound rather
than “on” it, Xenakis’ mechanism aims at creating a network of relations on a pre-
syntactic (or sub-symbolic) level, allowing perceptually relevant data to emerge at a
time scale relevant to the listener. The music comes to life by means of operations that take place before any syntactic norm can be recognized as such by the listeners. This opens onto a paradigm shift that I have tried to summarize in the following way:
· It is possible to reconcile and combine "composing-the-sound" with "composing-with-sounds," following a line of musical research uniting formalized approaches (algorithmic composition) and qualitative exploration of sound and music as perceived. This includes a blurring of the opposition of form vs. matter - the quantitative vs. the qualitative, the conceptual vs. the perceptual - and is to be seen as a realm of artistic expression peculiar to electroacoustic music and computer music, as it is only accessible through these media(39).
In a similar vein, Hugues Dufourt writes: "Si tout le concret perçu par l'oreille est sous-tendu par des relations abstraites, on ne voit plus la raison de maintenir une distinction entre une composition musicale qui porterait sur les sons et une composition musicale qui porterait sur les formes"(40). In Xenakis' electroacoustic music, composing means letting the form of sound and music emerge from lower-level processes - be it the level of sound grains (Analogique B) or of the digital samples themselves (Gendy301).
***
It must be pointed out, however, that Xenakis' methods fail to develop an evolutionary flow in the sound matter. I shall conclude this discussion by examining this theoretical knot, which has quite strong repercussions on the issue of musical time.
Even though a statistical process may have a direction, it is always
moving towards the mean - and this is exactly what evolution is not.
This statement - quoted from Edgar Morin(41) - helps us define the epistemological and conceptual coordinates for discussing the experience of musical time in Xenakis' electroacoustic works. By starting from there, I also mean to affirm the methodological necessity of skipping over Xenakis' own reductionistic definition of time as "a blackboard on which one writes events and structures". I would rather concentrate on how he works and what he works on than on his declarations on the subject matter. Declarations of intent sometimes are to be distinguished from actual experience. The notion of a perfectly spatialized time seems to fit well only works like Mycenae-Alpha which, as has been said above, are not entirely representative of Xenakis' electroacoustic works.
The preceding sections of this essay suggest that Xenakis, in his electroacoustic music, is aiming at - but perhaps not succeeding in - setting up a self-organizing system or mechanism. That his mechanism does tend toward a self-organizing behavior is revealed by its being sensitive to the initial conditions set up by the composer: the system behaves differently, though consistently, upon different initial settings. That self-organization, however, does remain purely potential is revealed by the fact that his mechanism is event-insensitive, i.e. unable to change its behavior upon the occurrence of unpredicted states or events provoked by its own functioning.
This notion - that stochastic processes are event-insensitive - means that the unexpected, the singularity of events, does not become a source of information and transformation; what prevails instead is a levelling-off tendency reflecting the relentless increase of entropic disorder (this is coherent with the world view proper to the classical interpretation of the Second Principle of thermodynamics). Being memoryless, Xenakis' mechanism does not learn from the history of its previous states; it cannot interact with the external, nor can it interact with its own history. As stated above, it is not an eco-system - it has no context.
Paradoxical as it may appear, this state of affairs represents a deficit in “technological
efficiency”, a problem brought in by the theoretical-cognitive limitations of stochastic
laws - which in fact the composer adopted in order to deliberately break down
causality, i.e. the symmetry of before and after. Hence derives the need for very
simple overall formal shapes, with separate sections for each of which the
mechanism is reset with new input data. We might say that in this utterly formalized music the event is forced from the outside: all changes having some relevance for the shaping of the musical form are fed in by an intention that ultimately transcends formalization and stems, therefore, from intuitions left out of the formalized processes. Since Xenakis' mechanism cannot avoid being uprooted from its context, particular occurrences in the sonic matter hold no promises and open no temporal horizons; they leave no traces of themselves in time. Singularity does not become catastrophe, in the sense that it doesn't cause a change of behavior by altering, even without annulling, the laws governing the functioning of the mechanism. A state of suspension ensues, moment by moment, in the flow of time as experienced by the listeners.
If time, here, really were dynamical evolution, the mechanism's laws would then require that the occurrence of unforeseen (unforeheard) events caused the activation of a self-organizing dynamics. Instead, the occurrence of particular patterns and textures of sound is not capable, in Xenakis' mechanism, of re-orienting the flow of time. The occurrence is soon forgotten: the composer's mechanism denies time the power of endowing the elements of the musical flow with a coefficient of creativity.
Xenakis' radical gestures reflect an epistemological need to combine the algorithmic and the stochastic, determinism and indeterminism(42), and finally lead to an essential non-linearity and fragmentation in the experience of time. His work reflects a world no longer describable in terms of the order-from-order principle - with which Schrödinger had identified a purely deterministic rationalism of Laplacean stamp - but a world animated by the order-from-disorder principle, a world where things are incessantly put in order, warding off the ever-deeper abyss of entropic disorder. Yet, Xenakis does not manage to take a further step, toward the order-from-noise principle, i.e. toward a world neither strictly coherent (algorithmic order) nor strictly incoherent (statistical order) but, if anything, in a dynamic condition of chaos(43). There things would find their (unstable) order, and the world would take form through the event, through singularity. As is proposed by a positive interpretation of the Second Principle: in self-organizing systems, an increase in entropy is a creative force, bearer of isles of temporary order in the incessant flow of transformation.
Xenakis' compositional models leave the sound matter in a condition of statistical order, lacking any theory of the event capable of describing the constructive and destructive dynamics of the experienced musical form.
Conclusion
From Concret PH to Gendy301, Xenakis approaches an aesthetic-cognitive paradigm that pushes his art right into the sphere of noise, as the reflection of the violent Nature that is free will - not unlike Lucretius' description in his De Rerum Natura. The technology of the stochastic laws constrains him within the margins of disorder and statistical order, before any chance for true evolution can arise from the sound. The events and discontinuities that nourish the musical form remain largely at the mercy of a demiurge, not comprised within the criteria of the mechanism itself.
This music incarnates the utopia of an art which aims at resolving the dialectic between materials and form - between Nature and Culture - by means of an integrally constructivist disposition. This I call an instance of integral subjectivity, resulting in works of art which are thoroughly artifacts: nothing in the work pre-exists the artist's action. Constructivism, here, takes the form of the objective, "natural" manifestation of the mechanism; but the latter is designed and built up by the subject itself. The dynamis - the living force - which in principle could let the mechanism reveal itself remains largely left to the incursion, from the outside, of the subject. The techno-logy (knowledge in use) behind this music testifies to the intelligence of an art in which chance and necessity are perfectly integrated.
Notes
* This essay revises and expands a previous one published in Italian: "Da Concret PH a Gendy301. Modelli compositivi nella musica elettroacustica di Xenakis", Sonus, 7(1-2-3), 1995; the English version appeared in Perspectives of New Music, 36(2), 1998. The author wishes to thank the editors of Sonus - Materiali per la Musica Contemporanea (Potenza) for permitting translation of the unrevised parts.
1. P.Schaeffer, Traité des objets musicaux. Essai interdisciplines (Le Seuil, Paris, 1966); p. 16-17.
2. On the issue, see my paper "On the Centrality of Tèchne for an Aesthetic
Approach on Electroacoustic and Computer Music", Journal of New Music Research,
24(4), 1995, p.369-383.
3. P.Schaeffer, Traité..., p.23.
4. At the time, Bohor was one of the most radical attempts at annulling the linear
articulation in Western music. Another tape piece that was just as radical at the time
was Fabric for Che (1968) by James Tenney.
5. See B.Moore, The Psychology of Hearing (Academic Press, London, 1982),
p.50-52.
6. The sonograms in these pages were made using a NeXT computer at the
Centro di Calcolo, Padua University. All other graphics were realized on an IBM486
at the Laboratorio Musica & Sonologia in L'Aquila. At the time of the making of this
analysis (1994), the only recording of Concret PH at my disposal was that published
on Nonesuch record H-71246. The work is now also available on the Electronic Music
Foundation CD EMF 003 (together with Diamorphoses, Orient-Occident, Hibiki-Hana-
Ma and S709).
7. In this second type of texture are hidden some sweeping sounds - heard as very short chirps; see fig.8.
8. Fig.6 is drawn from B.Mandelbrot’s book, Fractals. Form, Chance and
Dimension (Freeman, San Francisco, 1977).
9. See I. Xenakis, Formalized Music (Pendragon Press, New York, 1992), p. 9. The 1992 edition of Formalized Music is an updated and augmented version of the 1971 English edition (Bloomington, 1971) which, in turn, integrally reprinted and expanded the original French version, Musiques formelles (La Revue Musicale, Paris, 1963).
10. S. Beer, “Below the Twilight Arch”, General systems, n. 5, 1969, p.20.
11. Formalized Music, p.103; Philips recording 835487.
12. D.Gabor, “Acoustical Quanta and the Theory of Hearing”, Nature, 4044(3),
1947, p.592.
13. For an easily readable computer code implementation of first-order TPMs, see C.Dodge & T.Jerse, Computer Music. Synthesis, Composition, Performance (Schirmer, New York, 1985); p.283-288.
14. The same strategy was utilized in Analogique A (for 3 violins, 3 cellos and 3
doublebasses). The compositional process of Analogique B closely resembles that of
Analogique A, the basic elements of which are very short notes; the distance
between successive screens is Δt = 1.111 (L ≈ 54 MM). Analogique A-B proposes the neat contraposition of instrumental and electronic sounds; it makes overtly clear
that no attempt at timbral integration was made. Musicologist Angelo Orcalli
describes the sounds of Analogique B as "the buzzing of insects”, in his book
Fenomenologia della Musica Sperimentale (Sonus Edizioni, Potenza, 1993; p.117).
15. Formalized Music, p. 94.
16. He borrowed from Abraham Moles as well. However, Gabor had anticipated most of the relevant psychoacoustical notions discussed in Moles' Théorie de l'Information et Perception Esthétique (Flammarion, Paris, 1958).
17. B. Truax, “Real-time Granular Synthesis with a Digital Signal Processing
Computer”, Computer Music Journal, 12(2), 1988, p.14-26; C. Roads, “Automated
Granular Synthesis", Computer Music Journal, 2(2), 1978, p.61-62; C. Roads, "Asynchronous Granular Synthesis", in Representations of Musical Signals (De Poli, G., Piccialli, A. and Roads, C. eds.), MIT Press, Cambridge Mass., 1991, p.143-186.
18. A. Di Scipio, “Composition by Exploration of Nonlinear Dynamical Systems”,
Proceedings of the International Computer Music Conference (Glasgow, 1990), p.324-
327; “Caos deterministico, composizione e sintesi del suono”, Atti del IX Colloquio di
Informatica Musicale (Genoa, 1991), p.337-350; "Micro-time Sonic Design and the
Formation of Timbre", Contemporary Music Review, 10(2), 1994, p.135-148.
19. A.Bregman, Auditory Scene Analysis, MIT Press, Cambridge Mass.,1990,
pp.118-119.
20. Formalized Music, p.103. For a more extended discussion on this point, see
my "The problem of 2nd-order sonorities in Xenakis' electroacoustic music",
Organised Sound, 2(3), 1997, p.165-178.
21. Orcalli, Fenomenologia..., p. 120.
22. K.H.Stockhausen, "Die Einheit der musikalischen Zeit" (1963). English translation: "The Concept of Unity in Electronic Music", in Perspectives on Contemporary Music Theory (Boretz, B. & Cone, E. eds.), Norton, 1972, p.214-225. For a discussion of microcomposition in the 50's and 60's, see Henri Pousseur's book, Fragments théoriques I. Sur la musique expérimentale (Université Libre de Bruxelles, 1970), in particular the section "De la microstructure absolue".
23. CD Neuma 450-74.
24. In a later release of UPIC on the AT386 computer, these curves were called
arches; see G.Marino, J.M.Raczinski and M.H.Serra, “The New UPIC System”,
Proceedings of the International Computer Music Conference (Glasgow, 1990),
p.249-252.
25. CD PNM28.
26. Though Xenakis' orchestral scores are strewn with glissandi, that is not the case with his earlier tape works, with the exception of Diamorphoses.
27. See, for example, I.Xenakis, “Music Composition Treks”, Composers and
the Computer (C. Roads, ed.), Kaufmann, Los Altos, Ca., 1985, p.171-191.
28. On graphical methods of sound synthesis, see the discussion in C.Roads, The Computer Music Tutorial, MIT Press, Cambridge, Mass., 1996, p.329-335.
29. Formalized Music, p.192.
30. A stereo mixdown of the 7 tracks is on CD Montaigne MO 782058.
31. See R.Toop's presentation notes for CD MO 782058; see also Formalized Music, p.293.
32. Formalized Music, p.246.
33. See the code examples in Dodge and Jerse's book, p.266-278.
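For readers without that book at hand, the following minimal sketch (my own, not Dodge and Jerse's code) shows the inverse-transform method for drawing random variates from two of the distributions involved, the exponential and the Cauchy; parameter names and values are arbitrary assumptions.

import math
import random

def exponential_variate(delta):
    """Exponential density delta*exp(-delta*x): invert the CDF 1 - exp(-delta*x)."""
    u = random.random()
    return -math.log(1.0 - u) / delta

def cauchy_variate(alpha):
    """Cauchy density with scale alpha: invert the CDF 1/2 + atan(x/alpha)/pi."""
    u = random.random()
    return alpha * math.tan(math.pi * (u - 0.5))

# Draw a handful of variates with arbitrary parameters.
print([round(exponential_variate(2.0), 3) for _ in range(5)])
print([round(cauchy_variate(1.0), 3) for _ in range(5)])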
34. I.Xenakis, “More Thorough Stochastic Music”, Proceedings of the
International Computer Music Conference (Montreal, 1991), p.517-518; M.H. Serra,
“Stochastic Composition and Stochastic Timbre”, Perspectives of New Music, 1993,
p.236-255; P.Hoffmann, "Implementing the Dynamic Stochastic Synthesis", offprint
from Les Cahiers Groupe de Recherche en Informatique Image et Instrumentation, 4,
Caen, 1996 (no page number).
35. Formalized Music, p. 134.
36. These sonograms were made by analyzing a monophonic copy of the tape, so as to provide an image of the total spectral structure.
37. G.Grisey, “Tempus ex machina. A Composer’s Reflections on Musical
Time”, Contemporary Music Review, 2(1), 1987, p. 269.
38. H. Dufourt, Musique, pouvoir, écriture, Bourgois, Paris, 1991, p. 335.
39. A.Di Scipio, “Inseparable Models of Material and of Musical Design in
Electroacoustic and Computer Music", Journal of New Music Research, 24(1), 1995,
p.34-50.
40. H. Dufourt, Musique, pouvoir, écriture, p. 195.
41. E. Morin (ed.), Teorie dell’evento, Bompiani, Milan, 1972, p. 31 (translation
mine).
42. E. Morin (ed.), Teorie dell’evento, p. 297.
43. H. Von Foerster, “On Self-organizing Systems and Their Environment”, in
Self-organizing Systems (C. Yovits, ed.), Pergamon Press, New York, 1960, p.31-50.
___________________________________________________________________
Figure captions
Figure #1
Concret PH is made entirely of very short noise bursts. Shown here are the waveform of one such sound (approximately 0.1" in duration) and its spectrum.
_________________________________________________________
Figure #2
Sonographic representation of Concret PH
a) 00"-80"
b) 80"-160"
_________________________________________________________
Figure #3
Details of Concret PH
a) overlapping of two types of sound texture, fragment 40"-50"
b) overlapping of two types of sound texture, fragment 110"-120"
c) first type of sound texture (wide-spectrum noise bursts in evidence), fragment 30"-40"
d) second type of sound texture (narrower-spectrum noise bursts in evidence), fragment 80.9"-86.6"
e) second type of sound texture (narrower-spectrum noise bursts in evidence), fragment 100"-110"
_________________________________________________________
Figure #4
A small detail of Concret PH, 37.7"-38"
_________________________________________________________
Figure #5
a) Concret PH, fragment 90"-100"
b) a small detail, 94.5"-96"
c) a smaller detail, 94.8"-95.2"
_________________________________________________________
Figure #6
A fractal surface of dimension 2.666667 (reprinted from B.Mandelbrot, Fractals. Form, Chance and Dimension, Freeman, San Francisco, 1977)
_________________________________________________________
Figure #7
Two details in the sonogram of Concret PH
a) the first type of texture is predominant at the beginning of the piece, as in fragment
15"-20"
b) the second type of texture is predominant towards the end, as in fragment 140"-
145"
_________________________________________________________
Figure #8
Concret PH, two very short, barely audible chirping sounds (see note 7)
a) 79.4"
b) 130.7"
_________________________________________________________
Figure #9
The number of sinusoidal grains represented in the time/frequency grid (top) is the same in all three examples, but their temporal pattern is different, causing the spectrum of the resulting sound to change (bottom).
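The principle summarized in this caption can be sketched in a few lines of code (a hypothetical illustration, not Xenakis' procedure): the same set of sinusoidal grains is mixed into a buffer according to two different onset patterns, and the two resulting signals have identical grain content but different spectra. Sample rate, grain frequencies, durations and onset times are arbitrary assumptions.

import numpy as np

SR = 44100  # sample rate, assumed

def grain(freq, dur, sr=SR):
    """A sinusoidal grain with a simple Hanning envelope."""
    n = int(dur * sr)
    t = np.arange(n) / sr
    return np.hanning(n) * np.sin(2 * np.pi * freq * t)

def scatter(grains, onsets, total_dur=1.0, sr=SR):
    """Mix the given grains into a buffer at the given onset times (seconds)."""
    out = np.zeros(int(total_dur * sr))
    for g, onset in zip(grains, onsets):
        start = int(onset * sr)
        end = min(start + len(g), len(out))
        out[start:end] += g[:end - start]
    return out

# The same ten grains (fixed frequencies and durations)...
grains = [grain(freq=440.0 * (1 + 0.1 * i), dur=0.05) for i in range(10)]
# ...placed according to two different temporal patterns:
spread = scatter(grains, onsets=[0.08 * i for i in range(10)])
clustered = scatter(grains, onsets=[0.10 + 0.01 * i for i in range(10)])
# The two buffers contain the same grains, but their spectra differ
# because of the different temporal distribution (cf. the figure above).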
_________________________________________________________
Figure #10
A Gaussian grain as described by Dennis Gabor in 1944, and a rectangular grain as
utilized by Xenakis in Analogique B
_________________________________________________________
Figure #11
A "book of screens", as utilized by Xenakis at the microcompositional level in
Analogique B. Represented are discrete values of frequency (Δf), amplitude (Δg) and
time (Δt). Described here are two ideally sinusoidal tones of equal amplitude, one at
fixed frequency, the other sweeping down in frequency.
_________________________________________________________
Figure #12
The score of Mycenae-Alpha (reprinted with permission of Salabert Ed.)
_________________________________________________________
Figure #13
An approximate graphical rendition of probability density functions (top left: Cauchy;
top right: exponential; bottom left: Gauss; bottom right: uniform).
_________________________________________________________
Figure #14
a) polygonal waveform generated with dynamic stochastic synthesis
b) polygonal waveform generated with dynamic stochastic synthesis; its segment end points have been calculated as stochastic variations of the segment end points of the previous waveform
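The mechanism depicted here can be roughly sketched as follows (an illustration of the principle only, not a reconstruction of Xenakis' GENDY program, which also varies segment durations and draws on several probability distributions): each new polygonal waveform is obtained by adding a bounded random step to every breakpoint of the previous one, mirroring values back into range when they overshoot. Breakpoint count, step size and amplitude limits are arbitrary assumptions.

import random

def perturb(breakpoints, max_step, lo, hi):
    """Add a uniform random step to each breakpoint of the previous waveform,
    reflecting ("mirroring") values back into [lo, hi] when they overshoot."""
    new = []
    for value in breakpoints:
        value += random.uniform(-max_step, max_step)
        if value > hi:
            value = 2 * hi - value
        elif value < lo:
            value = 2 * lo - value
        new.append(value)
    return new

# A polygonal waveform defined by 8 amplitude breakpoints, initially flat.
wave = [0.0] * 8
for period in range(4):
    wave = perturb(wave, max_step=0.1, lo=-1.0, hi=1.0)
    print([round(v, 2) for v in wave])
# Linear interpolation between successive breakpoints would yield the audio
# samples of each waveform period (cf. the polygons shown in the figure).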
_________________________________________________________
Figure #15
Two pages of the Gendy301 score. Thick lines indicate active time fields (activation
of the synthesis process). The only parameters represented here are start time and
duration for each of the 16 layers available.
_________________________________________________________
Figure #16
Sonogram of the beginning of Gendy301, 0"-30"
_________________________________________________________
Figure #17
Sonogram of a 7-second passage in Gendy301 (transition from section 3 to section
4).
_________________________________________________________
Figure #18
a) Sonogram of the entire section 11, 94" in duration.
b) a detail of section 11, 4" in duration
c) a detail of section 11, 0.5" in duration
_________________________________________________________
Biographical note
Agostino Di Scipio (Naples, 1962) is a composer and devotes much of his work to
electroacoustic and computer music research. For several years he has worked at
CSC, Univ. of Padua. In 1993 he was "visiting composer" at Simon Fraser University, Burnaby B.C. (School for the Contemporary Arts and Department of Communication, with a grant from the International Council for Canadian Studies, Ottawa), and in 1995
worked at Sibelius Academy, Helsinki (with a grant from the Finnish Government's
FIMO program). His music has been performed in several countries in Europe, Asia, and North and South America, and has received honorary mentions and awards at the
Bourges competition (1991, 1996) and at Prix Ars Electronica, Linz (1995). In 1996
he won a CEMAT commission award (a program of the Italian Ministry of Culture) to
compose his INSTALL QRTT, for string quartet and interactive computer music
installation. His compositions are on CDs by NoteWorks (Köln), Neuma (Acton,
Mass.) and MusicaImmagine (Rome). A teacher of Electroacoustic Music at the Conservatory of Bari, Di Scipio is the editor of the collective volume Teoria e prassi della musica nell'era dell'informatica (G.Laterza, Bari, 1995) and of the Italian
translation of G.M.Koenig's Genesi e Forma (Semar, Rome, 1995). His writings have
appeared in Contemporary Music Review, Journal of New Music Research,
Perspectives of New Music, Organised Sound and other journals. Recent publications include "Interpreting Music Technology. From Heidegger to 'Subversive'
Rationalization" (Sonus - A Journal of Investigations into Global Musical Possibilities,
18(1), 1997) and "Questions concerning music technology" (Angelaki - journal of the
theoretical humanities, 4(2), 1998).