Virtual musical instruments: natural sound using physical models


Abstract

Physical modelling of musical instruments is an exciting new paradigm in digital sound synthesis. The basic idea is to imitate the sound production mechanism of an acoustic musical instrument using a computer program. The sound produced by such a model will automatically resemble that of the real instrument, if the model has been devised in a proper way. In this article we review the history and present techniques of physical modelling. It appears that the many seemingly very different modelling methods try to achieve the same result: to simulate the solutions of the wave equation in a simplified manner. We concentrate on the digital waveguide modelling technique which has gained much popularity among both researchers and engineers in the music technology industry. The benefits and drawbacks of the new technology are considered, and concurrent research topics are discussed. The physical modelling approach offers many new applications, especially in the fields of multimedia and virtual reality.
Vesa Välimäki
Helsinki University of Technology, Laboratory of Acoustics and Audio Signal Processing, Otakaari 5A, FIN-02150 Espoo, Finland

Tapio Takala
Helsinki University of Technology, Department of Computer Science, Otakaari 1, FIN-02150 Espoo, Finland
Organised Sound 1(2): 75–86 1996 Cambridge University Press

* The authors are grateful to Professor Matti Karjalainen for his comments on the manuscript and many fruitful discussions.

1. INTRODUCTION

Physical modelling of musical instruments is a new approach to sound synthesis using computers. It is based on the idea that when the vibrating structure is simulated in exactly the right way, the sound produced by that model is identical with the sound of the corresponding physical object. Thus, physical modelling of musical instruments simply means that the physical structure of a musical instrument is modelled with mathematical and physical formulae, and these formulae are realised with a computer. While being run, this computer program generates numbers (samples) that can be processed in a way familiar to the computer music community: the sequence of numbers (a digital signal) is fed into a digital-to-analogue (D/A) converter, which changes every number to a corresponding electrical voltage level. This time-varying voltage can be amplified and listened to through loudspeakers or headphones. All the phases of this synthesis process, except for the initial creation of the number sequence, are identical to what has been done since the early days of computer music. The only difference between this new digital synthesis technique and all the earlier ones is thus the method that is used for producing the digital sound signal.

A fundamental difference between the physical modelling approach and other synthesis techniques is that the former tries to imitate the properties of the sound source (type of excitation and resonator, resonances of a soundboard, etc.), while the latter focuses on the properties of the sound signal heard by the listener (waveform, spectrum, etc.). It is astonishing that none of the earlier music synthesis techniques really considered the sound source, whereas in the field of speech synthesis the simulation of the sound source (the vocal tract resonances and the vibration of the vocal cords) has been a popular technique from the beginning of the 1960s.

The main advantages of physical modelling synthesis are that the parameters of the technique are physically meaningful, such as the blowing pressure in wind instruments, and that important parts of the evolution of single tones (e.g. the attack and decay) are generated automatically in a correct way.

The physical modelling approach is reaching a mature phase. The first commercial products based on a physical model have been released recently, including the Yamaha VL-1, Korg Wavedrum, and Roland VG-8 guitar processor. Of these, only the VL-1 is a purely physical model. The others use sound processing algorithms that are motivated by the principles of physical modelling.

2. OVERVIEW OF DIGITAL SYNTHESIS TECHNIQUES

Before discussing the details of physical modelling, we now review other sound synthesis techniques. This will be beneficial in understanding some of the recent modelling algorithms. Later on we will see that many of the principles used in traditional sound synthesis also play an essential role in physical models.

2.1. Traditional methods

2.1.1. Linear and nonlinear techniques

In earlier days, sound synthesis techniques were usually divided into two categories according to linearity. An acoustic system is said to be linear if (i) the sum of outputs produced by two different input signals is the same as the output signal produced when the sum of these two signals has been used as an input, and (ii) the amplification of the input signal by some factor causes the output to be scaled by the same factor. These are known as the principles of superposition and homogeneity, respectively. All the techniques that do not fulfil these principles are called nonlinear. Some basic features of nonlinear systems are that the output signal may contain other frequencies than those present in the input, and that the spectral content of the output signal depends on the amplitude of the input signal.

The class of linear techniques includes additive and subtractive synthesis. Additive synthesis is one of the oldest techniques for sound synthesis, yet only recently has it emerged as a tool that can be easily used for practical work. Additive synthesis basically means adding a number of sine waves to construct a desired spectrum. The same principle has been used in electric organs. An advantageous feature of additive synthesis is the availability of a corresponding analysis technique, the Fourier transform, which takes as an input a signal waveform and fits a number of sine and cosine waves to the data. As a result the Fourier transform produces a set of numbers that tell us how much and in what phase each of the sine and cosine components contributes to the input signal. These data can be used as parameters for additive synthesis. The parameters to be controlled are the frequency, amplitude and phase of each sine wave. The Fourier transform can analyse both harmonic and nonharmonic signals. What is even better is that there exists an efficient way to compute the Fourier transform using a computer: the fast Fourier transform (FFT) algorithm. This makes the analysis stage much faster but, even more importantly, additive synthesis can be realised very efficiently by first defining the spectrum of the signal and then converting this spectrum into a time waveform using the inverse FFT algorithm. Rodet and Depalle (1992) have recently shown that it is possible to radically reduce the amount of data in the spectral representation and yet produce high-quality musical tones via the inverse FFT.

Subtractive synthesis refers to linear filtering of an input signal so that some frequency components are attenuated (subtracted) and some emphasised. A better term is source–filter model, since the basic idea of subtractive synthesis is that sound production is divided into two parts: an excitation signal (source) and a resonator (filter). The excitation is typically a spectrally rich waveform, such as noise or some periodic pulseform whose spectrum has energy at several frequencies. This is helpful when we realise that the filter is a linear system and the output signal cannot include any other frequencies than those included in the input signal. The filter not only attenuates some of the frequency components of the input signal but may also boost some of them. This can be used to imitate formants, which are resonances that characterise the timbre of an instrument. In analog synthesizers this principle was extremely popular.

The nonlinear synthesis techniques are commonly called modulation techniques, since they usually employ multiplication of two signals. The FM synthesis technique (Chowning 1973) is perhaps the best known representative of nonlinear techniques. It is based on the principle of frequency modulation, where one oscillator (the modulator) controls the frequency of another one (the carrier). The Yamaha DX-7 released in 1983 was the first product to use the FM principle and also the first fully digital synthesizer. It represented a breakthrough for digital sound synthesis.

Another nonlinear synthesis technique is waveshaping (Arfib 1979, Le Brun 1979). In this method the idea is to feed a signal (typically a sine wave) into a function that maps the amplitude values in a nonlinear manner. An example of this principle is the clipping caused by overdriving a guitar amplifier. Here the large amplitude values are compressed, while the small values are not much affected. Beauchamp (1979) applied this method to produce brass instrument sounds from pure sine waves. Interestingly enough, waveshaping is nowadays part of some physical models.

2.1.2. Wavetable synthesis

Another very popular synthesis technique is sampling, also called wavetable synthesis. It simply means recording, processing and playback of sounds. While it can be argued that sampling is not a synthesis technique at all (since it does not create sound from scratch), it must be said that this technique offers an infinite variety of possibilities: any sound (acoustic or synthetic) can be recorded digitally, filtered or edited or combined with other signals, and finally the processed version can be listened to. The processing can be so violent that the original sampled sound may not be recognisable, although it still affects the result. The reader should not be too surprised to learn that this principle of processing recorded sounds is also a useful tool for physical modelling.
In practice, samplers are distinguished from other wavetable synthesizers in the way the original samples are generated and played back. While in samplers the playable signal is recorded in its whole length, in wavetable synthesizers the stored waveform is repeated in order to have a periodic tone. The original samples are synthesized with some other technique, such as additive or subtractive synthesis, or even by drawing the waveform by hand using a graphical interface. Unfortunately, the visual appearance of the waveform has very little to do with the audible timbre. The essential feature in periodic signals is their spectral content rather than the variation of the signal waveform over time. It is also known that the ear is not sensitive to the phase of a signal. This implies that we can have signals with different waveforms but with the same frequency spectrum, and yet they have similar timbre.

2.2. Recent developments

The old categorisation of synthesis techniques according to linearity is no longer particularly useful. There are several new approaches that are not essentially linear or nonlinear but which have some other property or principle that is of interest. Furthermore, in some advanced algorithms linear and nonlinear techniques appear as components.

Smith (1991) suggested another classification of sound synthesis techniques into four categories: (i) abstract algorithms, (ii) processing of recorded samples, (iii) spectral models, and (iv) physical models. The first class includes techniques that produce sound based on some mathematical formula which has no direct relation to real-world sounds or acoustic principles of sound generation. Thus it is difficult to predict what kind of sound is produced by a particular algorithm, or to design an algorithm that results in a particular sound. The traditional nonlinear techniques such as FM synthesis and waveshaping are well-known representatives of this class. The second class, processing of recorded samples, includes, for example, wavetable synthesis.

Spectral and physical models have recently gained more interest than the other approaches. For this reason we discuss them in more detail.

2.2.1. Spectral modelling

Spectral models concentrate on the frequency-domain properties of sound waves at the ear drum of the listener, taking into account how sounds are psychologically perceived. This class includes traditional linear techniques, such as basic additive and subtractive synthesis, but recently several new approaches have been developed. In particular, these methods utilise the fact that interesting timbres are not stationary but are characterised by the evolution of their spectral components.

The major problem with the implementation of additive synthesis has been the huge number of control parameters (frequency, amplitude, and phase of each sine wave as a function of time). The first attempts at reducing the quantity of data were to neglect the phase and to determine linear trajectories for the amplitude and frequency of every sine wave. However, only recent advances by McAuley and Quatieri (1986), Rodet and Depalle (1992), and Serra and Smith (1991) have expanded this technique into a useful one. As is often the case in the history of computer music, an important technique was first applied to processing speech rather than musical signals: McAuley and Quatieri (1986) showed that a speech signal can be represented with a number of sine waves whilst still retaining a high quality. The amplitudes and frequencies must vary with time according to trajectories measured from real signals. The MQ algorithm utilises the concepts of death and birth in order to know when to drop one sine component and to replace it with another one of a different frequency.

Serra and Smith (1991) expanded additive synthesis into a sines-plus-noise model, where the harmonics and other prominent frequency peaks of a sound are imitated by sine waves, and the rest of the sound (the residual) is modelled as noise. This model helps considerably in music synthesis, where sounds can be very noisy. Resynthesis of a noisy musical tone was one of the weakest points of traditional additive synthesis, since a huge number of sine waves was required to reproduce broadband noise. In the sines-plus-noise model, the noise component is created by filtering white noise (a random signal with a flat spectrum) with a time-varying filter.

3. PRINCIPLES OF PHYSICAL MODELLING

The roots of physical modelling lie in the history of mathematics and physics. Many of the principles used in physical models today were first discovered in the eighteenth century by mathematicians such as D'Alembert, Euler, Bernoulli, and Lagrange. The single most important development was the wave equation, also known as Helmholtz' equation. It describes how waves in general, including mechanical vibrations, propagate in a homogeneous medium. In this section we show how the equation is derived from a simple physical system, and survey methods for solving the equation. As will be seen, the discrete formulation used in the derivation appears to be an efficient solution algorithm, too.

3.1. Wave transmission in a physical system

To understand how vibrations behave, let us observe a simple mechanical system consisting of a chain of equal point masses connected with massless springs (figure 1). At rest all the masses are equidistant. Assume that the first mass is forced to move a small amount to the right. The first spring is now compressed and tries to return to its rest length by exerting a force at both ends proportional to the contraction of the spring. By Newton's second law, this force causes the second mass to accelerate to the right, and similarly the first mass to accelerate to the left. After some time the first mass has returned to its rest position, but the second mass is now displaced. Meanwhile, the second spring has been compressed and produces a force on, and acceleration of, the third mass. In this way the initial contraction impulse gradually propagates as a wave along the chain. The propagation speed is determined by the material properties: stiffer springs make the wave move faster, whereas larger masses moderate its speed due to their inertia.

Figure 1. Propagation of displacement in a chain of masses and springs.

What happens as the wave comes to the end of the chain depends on how the last mass is constrained. If it is free to move, it will stretch the last spring (as in figure 1), which will cause a strain wave to propagate to the left along the chain. Thus the original contraction wave has been reflected at the end, and its phase inverted. If the last mass were fixed, it would not release the contraction of the last spring, which would then reflect the compression wave back and retain its original phase. A similar reflection occurs later at the other end, and a wave, once started, will move back and forth along the chain.

If we move a mass in the middle of the chain, two waves will emanate in opposite directions, each reflecting at the end and returning towards the centre. Due to the superposition principle of linear systems, they will not disturb each other while meeting, but simply sum temporarily and continue their motion.

Instead of a single impulse, an arbitrary waveform can be generated by forcing a mass to move as a function of time. A sinusoidal force with period equal to the two-way travelling time of the wave in the chain will produce a standing wave: the two opposite travelling waves sum such that they alternately reinforce and cancel each other. Seemingly there is no wavefront moving along the chain, but only strains and compressions alternating in time. The same phenomenon also happens at any other frequency which is an integral multiple of the basic resonance.

3.2. The wave equation

Exact physical analysis of our example system requires writing down its equations of motion. This derivation is rather mathematical, so an uninterested reader may want to skip to the solutions described in the next section.

Referring to figure 2, let us use p_i to denote the displacement of the mass m_i from its rest position along the line. The spring to the left is compressed if its left end is more displaced than its right end. It acts on m_i with a force F_l = k(p_{i-1} - p_i) to the right, where k is the spring stiffness constant, which relates the force exerted to the degree of compression of the spring. Similarly, the spring to the right produces a force F_r = k(p_i - p_{i+1}), but in the opposite direction. Thus the total force on m_i is F = F_l - F_r = k(p_{i-1} - 2p_i + p_{i+1}). This can be written in abbreviated form as F = k Δ²p_i, indicating that we have differentiated twice (taking differences of differences in p).

Figure 2. Forces due to the springs acting on the ith point mass in the chain.

By Newton's second law, F = ma, and so the acceleration caused by the force on the mass is a_i = F_i/m (as all masses are equal, we simply use m for m_i). During a short time interval Δt, this updates the velocity of the mass as v_i ← v_i + a_i Δt, which updates its position as p_i ← p_i + v_i Δt. Although these approximate formulae can be used for simulation, a method first put forward by Euler, the discrete time step causes errors which have to be specially compensated for. Smaller steps will increase accuracy, the limit being infinitesimal intervals and mathematically exact integral formulae.
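The discrete update rules just derived (a force from the second difference of displacements, then v ← v + aΔt and p ← p + vΔt) translate directly into a simulation. The following Python sketch is illustrative and not from the article; the chain length, constants and time step are arbitrary. A mass in the middle of a chain with fixed ends is displaced, and two waves emanate in opposite directions, as described in section 3.1:

```python
import numpy as np

N, m, k, dt = 51, 1.0, 1.0, 0.1     # number of masses, mass, stiffness, time step
p = np.zeros(N)                      # displacement of each mass from rest
v = np.zeros(N)                      # velocity of each mass
p[N // 2] = 1.0                      # displace a mass in the middle of the chain

for step in range(200):
    # Net spring force on each interior mass: F = k*(p[i-1] - 2*p[i] + p[i+1]).
    F = k * (p[:-2] - 2 * p[1:-1] + p[2:])
    v[1:-1] += (F / m) * dt          # Newton's second law updates the velocities
    p[1:-1] += v[1:-1] * dt          # velocities update positions; ends stay fixed
```

Using the freshly updated velocity in the position step (exactly as the text describes the Euler procedure) happens to keep this scheme stable for small time steps; the two wavefronts and their dispersive tails can be seen by plotting p after each step.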
Now if we wish to approximate a continuous material, we can scale down the system and use smaller masses with denser spacing and shorter springs in between. In this case it is better to use relative measures, i.e. displacements per unit length, and to indicate position by a continuous variable s instead of the integer index i. This means we have to scale each difference by the spacing S of the masses along the line, writing F_l = kS (Δ_l p)/S, and similarly for F_r. In total, the force acting on the mass is F = F_l - F_r, resulting in an acceleration of a = F/m = (kS/(m/S)) (Δ²p/S²). When the spacing S becomes infinitesimally small, we replace the finite differences in p by differentials: Δ²p/S² → ∂²p/∂s². Also we use the mass per unit length μ = m/S instead of infinitesimal masses, and the stiffness of the material K = kS instead of infinitesimal springs. The constant factor then becomes kS/(m/S) = K/μ. Further analysis reveals that this constant factor equals the square of the propagation speed, c² = K/μ. Recalling that acceleration is the second time derivative of displacement, we arrive at the equation

∂²p/∂t² = c² ∂²p/∂s².

Although we started the discussion with longitudinal displacement along a one-dimensional (1D) string, the same result holds for transverse displacement as well, except that the speed constant is based on different stiffness measures. Alternatively, we can replace masses with velocity potential and springs with pressure, arriving at a similar equation for the motion of air in a tube. Our 1D equation can also be generalised to a higher-dimensional material by simply taking the spatial derivative with respect to each coordinate axis and summing together. For three dimensions with coordinates x, y and z, the wave equation is

∂²p/∂t² = c² (∂²p/∂x² + ∂²p/∂y² + ∂²p/∂z²),

or more concisely, p_tt = c² (p_xx + p_yy + p_zz), where each subscript means differentiation with respect to the corresponding variable.

3.3. Solutions of the wave equation

If we wish to calculate the vibration of an object, we have to solve the quantity p as a function of time and position from this differential equation. For this we need to know the initial state of the system (for example, displacement and velocity at each point), the boundary conditions (i.e. what happens at the end points of the material, usually whether they are free or fixed) and possibly the acting external forces (excitation) as a function of time. We also need an algorithm to compute the solution.

In some simple cases an analytical solution can be found. In 1D systems like our example above, it can be shown that the solution is the sum of two wave functions, one moving to the left and the other to the right along the chain. The initial state determines the shape of these waves. At the boundaries the waves reflect, retaining their phase if the end is fixed and inverting it if the end is free. They cause standing waves if their wavelength and the length of the chain are in integer proportions. Following the superposition principle, all possible standing wave motions can be separated into components called modes, each of which has a specific frequency and specific locations of nodes. In typical musical instruments with 1D resonators, such as strings and tubes, the wavelengths are in almost integer proportions, so that they form a harmonic series.

For 2D and 3D objects, analytic solutions can be found for some simple geometries like rectangular plates, bricks and rooms, circular membranes, or cylindrical rods. For rectangular objects the modal frequencies form separate harmonic series based on each of the dimensions. In other cases they are generally inharmonic, as can be heard in the timbre of bells and drums, for example.

Finite element methods (FEMs) can be used to analyse the vibration of objects or rooms with irregular shapes or inhomogeneous material. The basic idea is to subdivide the object into simple homogeneous elements (for example, instead of point masses and massless springs we would distribute the mass evenly along the springs). Within each element an analytic solution can be formulated, but the boundary conditions tie mating elements together. Thus a set of simultaneous equations is formed, which can be solved numerically for all elements at once. For irregularly shaped but homogeneous material, it is also possible to use boundary element methods (BEMs) with somewhat different integral equations that are solved only at the boundary but also determine the behaviour of the motion inside. Both FEMs and BEMs have been used in acoustics to find the modal frequencies of rooms and objects, but they are generally too slow to be used directly for sound signal generation. However, the analysed modes can be used as a physical basis for spectral synthesis.
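The claim that the 1D solution is a sum of two travelling wave functions can be checked numerically. In the sketch below (not from the article) the wave shapes f and g, the propagation speed and the test point are arbitrary; the modal frequencies use the standard result for a string of length L fixed at both ends, f_n = n c / (2L), which is the harmonic series mentioned above:

```python
import numpy as np

c = 2.0                                    # propagation speed (arbitrary value)
f = lambda x: np.exp(-x**2)                # wave shape moving to the right
g = lambda x: 0.5 * np.exp(-(x - 1)**2)    # wave shape moving to the left
p = lambda s, t: f(s - c * t) + g(s + c * t)   # d'Alembert-style solution

# Verify numerically that p satisfies p_tt = c^2 * p_ss at an arbitrary point,
# using central finite differences for the second derivatives.
h, s0, t0 = 1e-3, 0.3, 0.7
p_tt = (p(s0, t0 + h) - 2 * p(s0, t0) + p(s0, t0 - h)) / h**2
p_ss = (p(s0 + h, t0) - 2 * p(s0, t0) + p(s0 - h, t0)) / h**2

# Modal frequencies of an ideal string of length L fixed at both ends.
L = 1.0
modal_freqs = [n * c / (2 * L) for n in (1, 2, 3, 4)]
```

Any sufficiently smooth pair of shapes f and g passes the same check, which is exactly the superposition property the text relies on.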
Approximative methods that simulate the behaviour of a system consisting of discrete mechanical components, similar to our spring example, have been used both in computer music to generate timbres of very complicated systems (Cadoz, Luciani and Florens 1984) and in computer animation to visualise soft materials (Terzopoulos, Platt, Barr and Fleischer 1987). The system may consist of any configuration of basic mass and spring components. The simulation starts from an initial state and moves on step by step in discrete time intervals (Euler's method). At each step, the forces acting on each mass are first calculated, the corresponding accelerations are used to update velocities, and finally each position is updated by the velocity multiplied by the time step. This method of finite differences is fairly simple to program, and with the evolution of faster computers it may gradually become feasible even as a real-time synthesis method.

One way to make the computation of discrete simulations more efficient is to make the system completely regular, with a rectangular array of evenly spaced masses connected by equal springs. In particular, if we choose the time step such that the signal propagates exactly one mesh unit in the time step, the equations for each node reduce to simple summation of the signal values at neighbouring nodes, and appear to be mathematically exact. Moreover, a 1D string becomes a waveguide where two signals travel undistorted in opposite directions, corresponding to the analytic solution. This can therefore be implemented very efficiently as two separate transmission lines (see figure 4), rather than computing the forces and velocities at each point of a single line. The use of waveguides as a physical model for sound synthesis is discussed thoroughly later in this paper.

The wave equation as such is somewhat idealised, because it does not take into account the viscosity of the material, which causes dissipation of energy and thus damping of the motion. In our mechanical model we can easily add another term to represent these phenomena: for each mass there is a force resisting its motion relative to its neighbours, with magnitude proportional to the relative velocity and direction opposite to the velocity. The effect is that vibrations of higher frequency, due to their faster motion, tend to get damped and die out sooner. This fact can readily be applied when using results of modal analysis in additive synthesis: each sinusoidal component has a different decay rate that is proportional to its frequency. In a discrete simulation the damping can readily be applied as additional forces when calculating the motion of each point mass. In waveguides, however, we would lose efficiency if we computed the attenuation of the signal at each point. Instead, all losses are collected into one filter placed at the end of the waveguide.

4. A SHORT HISTORY OF PHYSICAL MODELLING

By now it is generally known that while digital synthesis can in principle create any sound, a large majority of these possibilities are musically uninteresting. Thus, a better strategy in the search for new interesting timbres must be to find a technique that is capable of recreating at least some useful sounds, and then to generalise these timbres to unknown timbre spaces with careful modifications. Chowning (1973) suggested in his FM synthesis article that an investigation into the possibilities of his technique should first focus on the simulation of natural timbres. His main argument was that this approach would give knowledge of the fine details that discriminate synthetic sounds from natural ones.

In the case of physical modelling, the initial concentration on natural sounds can be stressed even more strongly, and usually the research is focused on the sounds of acoustic instruments. A fascinating theme is, of course, the synthesis of sounds produced by new imagined instruments. A physical modelling synthesizer provides the user with natural ways of creating sounds, and thus many of the strange, out-of-this-world sounds generated with a physical model also have certain acoustical properties that make them sound natural.

One example of this is the brightness of a tone: many musical instrument sounds (e.g. those of string and wind instruments) as well as the human voice have a typical low-pass characteristic, which is caused by the fact that losses (caused by friction, dissipation of heat, etc.) are stronger at high frequencies. A sound produced by an abstract digital technique, on the other hand, can in principle have a spectrum that gains more energy towards high frequencies, at least up to a certain point. These kinds of spectra are neither what we are used to hearing nor what our auditory system has been built for, and they are therefore regarded as unnatural or unpleasant. When the parameters controlling the amount of losses in a physical model are not changed radically from their 'correct' values, the resulting sounds will have a familiar low-pass characteristic. Other features of natural sounds can be retained in a similar fashion when modifying the model. Physical modelling can offer a huge range of possibilities in the world of nonexisting sounds that have a familiar 'acoustic' identity.

In physical modelling, a musical instrument is typically divided into parts with respect to functional properties. Commonly the model consists of three parts: (i) the excitation mechanism, (ii) the resonator, and (iii) the radiator. In many instruments the excitation and resonator are connected with a nonlinear feedback. Each of the three parts is modelled separately. This provides the flexibility to combine parts freely in a modular way and to build models of different instruments.
Virtual musical instruments 81
audible effect on the sound signal can be disregarded. built which enabled real-time sound synthesis based
on physical modelling to be realised for the first timeIn general, physical models are computationally
more expensive to implement than popular synthesis (Cadoz et al. 1984).
At IRCAM in Paris a third technique for physicalmethods such as FM or sampling synthesis. Fortu-
nately, computers and signal processors are becoming modelling was developed (Adrien 1991). It is called
modal synthesis and is based on a representation of afaster so that real-time implementation of more com-
plicated models will become feasible. vibrating structure as a collection of frequencies and
damping coefficients of resonance modes and coordi-
nates that describe the mode shapes. When the instru-
4.1. Physical modelling techniques

The methods used for physical modelling can be divided into five categories: (i) source–filter modelling, (ii) numerical solution of partial differential equations, (iii) vibrating mass–spring networks, (iv) modal synthesis, and (v) waveguide synthesis. The source–filter method is included here because it can in some cases be interpreted as a physical modelling technique. An example is a vocal tract model that is used for synthesising speech or singing: the periodic pulses generated by the vocal cords are the source and the vocal tract is the filter (see, for example, Rodet 1984). In some musical instruments it is also easy to separate the source and the filter. In many percussion instruments, for example, the excitation is a short impulse that is not affected by the feedback from the resonator. However, most musical instruments are more complicated systems than just a combination of two subsystems. In wind instruments, for instance, feedback from the resonator to the excitor is required, and in addition the interaction of the excitation mechanism and the feedback signal is nonlinear.
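As a toy illustration of the source–filter idea described above (our own sketch, not a model from the cited literature), the following fragment drives a single two-pole resonator, a crude stand-in for a vocal tract resonance, with a periodic impulse train; all parameter values are arbitrary:

```python
import math

def source_filter(f0=110.0, fc=500.0, bw=100.0, fs=8000.0, n=4000):
    """Toy source-filter synthesis: an impulse train (the 'source')
    is shaped by one two-pole resonance (the 'filter')."""
    period = int(round(fs / f0))                # samples between source pulses
    r = math.exp(-math.pi * bw / fs)            # pole radius sets the bandwidth
    theta = 2.0 * math.pi * fc / fs             # pole angle sets the centre frequency
    a1, a2 = 2.0 * r * math.cos(theta), -r * r  # feedback coefficients
    y1 = y2 = 0.0
    out = []
    for i in range(n):
        x = 1.0 if i % period == 0 else 0.0     # periodic source pulse
        y = x + a1 * y1 + a2 * y2               # two-pole resonator
        out.append(y)
        y1, y2 = y, y1
    return out

sig = source_filter()
```

Because the pole radius stays below one, the resonator is stable and each pulse excites a decaying oscillation at the resonance frequency.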
The first attempts to use a physical model for generating musical sounds were made by Hiller and Ruiz (1971). They started with the differential equation that governs the vibrations of a string and approximated this equation with finite differences. This technique is computationally very intensive. Even today, real-time sound synthesis with this approach using affordable hardware is out of the question. This line of physical modelling has been continued by Chaigne, Askenfelt and Jansson (1990) (see also Chaigne 1992).
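The finite-difference idea can be sketched as follows. This is a minimal illustration with a grid size and pickup point chosen by us, not the scheme of Hiller and Ruiz: the ideal string equation is replaced by the standard explicit update, here with Courant number C = 1, for which the discrete solution of the ideal string is known to be exact:

```python
def fd_string_step(y_prev, y_cur, courant2):
    """One explicit finite-difference update of an ideal string with fixed ends:
    y[m,n+1] = 2*y[m,n] - y[m,n-1] + C^2 * (y[m+1,n] - 2*y[m,n] + y[m-1,n])."""
    n = len(y_cur)
    y_next = [0.0] * n                       # fixed (zero) boundary conditions
    for m in range(1, n - 1):
        y_next[m] = (2.0 * y_cur[m] - y_prev[m]
                     + courant2 * (y_cur[m + 1] - 2.0 * y_cur[m] + y_cur[m - 1]))
    return y_next

# pluck-like triangular initial shape on a 50-point string, started at rest
N = 50
y0 = [min(m, N - 1 - m) / float(N) for m in range(N)]
y_prev, y_cur = y0[:], y0[:]
samples = []
for _ in range(200):
    y_prev, y_cur = y_cur, fd_string_step(y_prev, y_cur, 1.0)
    samples.append(y_cur[N // 2])            # 'pickup' at the string midpoint
```

The cost of the inner loop over every grid point at every sample is what makes this approach so expensive compared with the waveguide methods discussed later.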
Later in the 1970s another approach to physical modelling was taken in Grenoble, France. A system called CORDIS was developed which simulates a musical instrument as a collection of point masses that have certain elasticity and frictional characteristics (Cadoz et al. 1984, Florens and Cadoz 1991). This approach is closely related to the so-called finite element method (FEM) that is used in mechanical engineering to simulate the vibration of structures. The object is divided into a large number of pieces in space. Each piece is connected to its neighbours with springs and microdampers. These elements form networks that imitate the vibrating object. At the beginning of the 1980s a special-purpose processor was built which enabled real-time sound synthesis based on physical modelling to be realised for the first time (Cadoz et al. 1984).

At IRCAM in Paris a third technique for physical modelling was developed (Adrien 1991). It is called modal synthesis and is based on a representation of a vibrating structure as a collection of frequencies and damping coefficients of resonance modes, and coordinates that describe the mode shapes. When the instrument is excited at some point, this force excites some or all of the modes. An advantage of modal synthesis is that analysis tools exist which are not too laborious to use. Adrien (1991) points out that modal analysis of a new object, say a violin body, only takes a couple of days. For some simpler structures, such as a string, the model data can be computed in an analytical way.

In modal synthesis, one of the problems to be solved is where to take the output of the model. At IRCAM a clever solution was found using the body of a real instrument as the 'loudspeaker': in the case of the violin, for example, this means that the simulated vibration of the strings is fed to electrical shakers that are attached to a violin body, the strings of which have been carefully damped (Adrien 1991). This approach has the very nice advantage that the radiational properties of the instrument need not be simulated. In virtual reality applications, however, this practical trick is not applicable, since all the vibrating structures must create numerical data to be used by other parts of the virtual reality system.
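A minimal sketch of the modal idea, with each mode rendered as an exponentially damped sinusoid and summed; the three modes below are made up for illustration, not measured from any instrument:

```python
import math

def modal_tone(modes, fs=8000.0, n=4000):
    """Toy modal synthesis: each mode is (frequency in Hz, decay time in s,
    amplitude given by the mode shape at the excitation point); the output
    is simply the sum of the exponentially damped sinusoids."""
    out = [0.0] * n
    for freq, decay, amp in modes:
        for i in range(n):
            t = i / fs
            out[i] += amp * math.exp(-t / decay) * math.sin(2.0 * math.pi * freq * t)
    return out

# three hypothetical modes of a small struck object
tone = modal_tone([(220.0, 0.40, 1.0), (440.0, 0.25, 0.5), (662.0, 0.15, 0.3)])
```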
4.2. Waveguide synthesis

The waveguide synthesis technique will be discussed in more detail than the other techniques because in the first half of this decade it has turned out to be the most important of all the physical modelling methods. This applies to both the academic and commercial communities: the majority of recent advances in the theory of physical modelling have incorporated digital waveguide techniques, and all the commercial products utilise digital waveguides.

The aim in digital waveguide modelling is to design computationally efficient models that behave like physical systems. This technique is especially well suited to the simulation of 1D resonators, such as a vibrating string, a narrow acoustic tube, or a thin bar. The method was, however, first applied to artificial reverberation using delay line networks (Smith 1985) and only after that to the synthesis of wind and string instruments (Smith 1986). The theory of digital waveguide modelling has been primarily developed by Smith (1987, 1992, 1993). Discrete-time systems which implement a waveguide model are called waveguide filters (WGFs). A particularly nice feature of the digital waveguide approach is that it simulates physical phenomena directly in a digital way; that is, there is no need to first develop a continuous-time model and then discretise it in time.

First we discuss a well-known forerunner of waveguide models, the Karplus–Strong algorithm.

4.2.1. The Karplus–Strong model

Kevin Karplus and Alex Strong developed a simple algorithm for the synthesis of plucked string sounds (Karplus and Strong 1983). Their method is based on the idea that a wavetable, i.e. a table containing a sampled waveform of an audio signal, is modified while it is read. The technique was found to be a special case of a string simulation studied by McIntyre, Woodhouse and Schumacher (1983) and it was instantly extended by Jaffe and Smith (1983). The Karplus–Strong algorithm is an important predecessor of current waveguide models and it has recently led to quite detailed models of string instruments (see, for example, Karjalainen and Välimäki 1993, Karjalainen, Välimäki and Jánosy 1993, Smith 1993, Välimäki, Huopaniemi, Karjalainen and Jánosy 1995).
The block diagram of the Karplus–Strong model is shown in figure 3. The system is a recursive comb filter. The delay line (or wavetable) is initialised with white noise. The output of the delay line is fed to a low-pass filter that is called the loop filter. The filtering result is the output of the system and it is also fed back to the delay line.

Figure 3. The Karplus–Strong model.

This technique does not produce a purely repetitive output signal as basic wavetable synthesis does. In particular, those frequency components of the random signal which coincide with the resonances of the comb filter will attenuate more slowly than the other components. Thus, the signal in the delay line progressively turns into a pseudo-periodic signal with a clearly perceivable fundamental frequency.
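The loop just described fits in a few lines of Python. The noise initialisation and the two-point average follow the original algorithm; the buffer handling details (and the zero initial value for the extra delayed sample) are our own choices:

```python
from collections import deque
import random

def karplus_strong(m, n, seed=1):
    """Basic Karplus-Strong loop: a delay line of m samples is initialised
    with white noise; each new sample is the two-point average of the two
    oldest samples and is fed back into the line (a recursive comb filter)."""
    rng = random.Random(seed)
    line = deque(rng.uniform(-1.0, 1.0) for _ in range(m))
    prev = 0.0                      # sample delayed by m+1 (assumed zero at start)
    out = []
    for _ in range(n):
        oldest = line.popleft()     # sample delayed by m
        y = 0.5 * (oldest + prev)   # loop filter: two-point average (low-pass)
        prev = oldest
        line.append(y)              # feedback into the delay line
        out.append(y)
    return out

tone = karplus_strong(m=100, n=8000)
```

The output starts as noise and settles into a decaying pseudo-periodic tone, exactly as the text describes.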
The original proposition of Karplus and Strong was that a two-point averaging filter should be used as the loop filter, but in practice a more sophisticated digital filter is employed. It is important to remember that the filter's magnitude response must not exceed unity at any frequency; otherwise the system becomes unstable. When the magnitude response of the filter is less than unity at all frequencies, the resulting sound will gradually attenuate. The timbre will also vary over time if the magnitude response is not flat, since the harmonics of the signal will attenuate at different rates. A digital low-pass filter that causes the high-frequency components to decay more rapidly than the lower frequencies corresponds to a physically meaningful model. Dispersion can be brought about using a filter with a nonlinear phase response.

One of the major problems of the basic Karplus–Strong model is that the fundamental frequency of the tone cannot be accurately controlled. The fundamental frequency f1 of the synthetic signal is determined by the length of the delay line, i.e.

f1 = fs / (M + 1/2)

where fs is the sampling rate (in Hz), M is the length of the delay line in samples, and 1/2 refers to the delay caused by the two-point averaging filter. It is seen that this equation is a function of the integer-valued variable M. Thus the fundamental frequency is quantised and an arbitrary pitch cannot be produced.

The length of the delay line loop, and consequently the fundamental frequency f1 of the synthetic signal, can be accurately controlled by adding a fractional delay filter in the feedback loop of the Karplus–Strong model. Jaffe and Smith (1983) introduced a first-order allpass filter for producing the desired fractional delay. Later, a linear interpolator (Sullivan 1990) and a third-order Lagrange interpolator (Karjalainen and Laine 1991) were used for this task.
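The pitch quantisation, and the allpass remedy, can be illustrated numerically. The allpass coefficient below is the usual low-frequency design approximation for a first-order fractional delay, not a formula taken from this article:

```python
def nearest_ks_pitch(f_target, fs=44100.0):
    """Pitch quantisation of the basic Karplus-Strong model: with an
    integer delay-line length M, only f1 = fs / (M + 1/2) is available."""
    m = round(fs / f_target - 0.5)      # nearest integer delay-line length
    return fs / (m + 0.5)

def frac_delay_allpass_coeff(d):
    """First-order allpass coefficient for a fractional delay of d samples
    (0 < d < 1), using the common approximation a = (1 - d) / (1 + d)."""
    return (1.0 - d) / (1.0 + d)

f = nearest_ks_pitch(440.0)   # ~438.8 Hz with M = 100: over 1 Hz flat of the target
a = frac_delay_allpass_coeff(0.5)
```

The residual error of more than a hertz is clearly audible at this pitch, which is why the fractional delay filter is needed.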
4.2.2. Advances in waveguide synthesis

One of the first applications of the waveguide modelling technique was model-based sound synthesis of the clarinet (Smith 1986, Välimäki, Laakso, Karjalainen and Laine 1992, Rocchesso and Turra 1993). A waveguide flute model was introduced in Karjalainen, Laine, Laakso and Välimäki (1991) and Välimäki, Karjalainen, Jánosy and Laine (1992). In a wind instrument model, the waveguide consists of two delay lines which represent the propagation of wave components in opposite directions in an acoustic tube (figure 4). The length L of the delay lines is related to the effective length l of the tube by

L = fs l / c

where fs is the sampling rate (e.g. 22.05 kHz) and c is the speed of sound (c ≈ 340 m/s). Note that in general L is not an integer and again a fractional delay filter must be used to implement the real-valued delay line.

Figure 4. Waveguide wind instrument model.

The reflection model consists of a digital filter that brings about the frequency-dependent reflection and radiation of the sound wave at the end of the bore. Note that the reflection model has two outputs, one for the outgoing signal and the other for the reflected, ingoing signal.

The excitation model includes a nonlinearity which is an essential part of the sound production mechanism in wind instruments. In the case of the clarinet, this system models the operation of a reed that controls the air flow through the mouthpiece. The same nonlinearity is also suitable for other reed wind instruments, such as the saxophone (Cook 1988). In the flute, the nonlinearity models the airflow into the bore and is similar to a sigmoid function (Karjalainen et al. 1991, Välimäki, Karjalainen, Jánosy and Laine 1992).

Cook (1991, 1992) introduced a waveguide model that is capable of synthesising brass instrument tones. His system is also based on the principle shown in figure 4. However, the nonlinearity was obtained by modelling the lips of the player as a mass–spring oscillator. For discrete-time simulation the differential equations governing this oscillator were replaced with difference equations.
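As a concrete instance of the delay-line length relation, using the sampling rate from the text and an effective bore length of 0.66 m that we assume purely for illustration:

```python
fs = 22050.0     # sampling rate from the text's example (Hz)
c = 340.0        # speed of sound (m/s)
l = 0.66         # assumed effective bore length in metres (illustrative only)
L = fs * l / c   # delay-line length in samples
# L is about 42.8 samples: not an integer, so a fractional delay filter is needed
```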
Välimäki, Karjalainen and Laakso (1993) have shown how finger holes may be incorporated in a waveguide woodwind instrument model. Recently, the digital waveguide technique has been generalised to two dimensions (Van Duyne and Smith 1993). This has allowed the modelling of vibrating plates and, for example, drum heads. Further generalisation to three and more dimensions is straightforward. Savioja, Rinne and Takala (1994) have demonstrated that a 3D waveguide mesh can be used to simulate room acoustics.

Nowadays research in waveguide modelling mainly concentrates on the nonlinear interaction of the excitation and the vibrating objects. Other concurrent research topics include new techniques for modelling resonators, such as waveguide modelling of conical acoustic tubes (Välimäki and Karjalainen 1994), modelling the radiation characteristics of musical instruments (Huopaniemi, Karjalainen, Välimäki and Huotilainen 1994, Karjalainen, Huopaniemi and Välimäki 1995), and control of physical models (Jánosy, Karjalainen and Välimäki 1994).
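A single scattering step of the 2D mesh mentioned above can be sketched as follows. This is a generic equal-impedance junction with four ports; the exact formulation of Van Duyne and Smith may differ in details:

```python
def mesh_scatter(incoming):
    """Scattering at one junction of a 2-D rectangular waveguide mesh
    (N = 4 ports, equal impedances): the junction value is vJ = (2/N) times
    the sum of incoming waves, and each outgoing wave is vJ minus the
    corresponding incoming wave."""
    n = len(incoming)
    vj = (2.0 / n) * sum(incoming)
    outgoing = [vj - v for v in incoming]
    return vj, outgoing

# a unit wave arrives at one port of an otherwise quiet junction
vj, out = mesh_scatter([1.0, 0.0, 0.0, 0.0])
```

With equal impedances this scattering matrix is orthogonal, so the junction is lossless: the energy of the outgoing waves equals that of the incoming ones.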
5. USAGE OF VIRTUAL MUSICAL INSTRUMENTS

In the future, there will be more commercial synthesizers based on physical modelling. They will be more advanced synthesizers than those in use today. These will be welcomed by computer music professionals and also by composers and arrangers of acoustic music. For them physical models will offer the possibility of listening to a musical piece in progress before musicians start rehearsing it. Even now many composers use a MIDI synthesizer to listen to their pieces before introducing them to musicians or to the public. With the more realistic synthetic sounds of physical models, this electronic premiere can sound almost as natural as if it were played using acoustic instruments.

We may imagine that in the future, when physical modelling has reached high accuracy in terms of simulating structures, it will be possible to simulate virtual musical instruments which are physically feasible, but which do not exist in physical reality. This could be useful for instrument builders when trying to improve or modify current acoustic musical instruments. Presently, instrument builders typically have to rebuild a prototype several times, since there is no other way of knowing whether a modification will yield a positive result. Even today, computers can aid the design of certain parts of instruments, e.g. the finger holes of wind instruments (Keefe 1982), but complete simulation and sound synthesis is yet to come.

Another exciting possibility is the restoration of historical musical instruments. When the analysis and parameter estimation tools for physical models have advanced to the point that a physical model can be made to simulate a recorded tone, it will be possible to imitate any instrument whose sound has been recorded. For example, the sound of antique violins that do not exist anymore could be recreated synthetically and be used for playing more beautiful music. Furthermore, old recordings that are partly destroyed could be restored by replacing the missing parts using a physical model of the old instrument. Of course, this might raise a discussion on the ethical suitability of such a procedure.

Researchers in computer music and physical modelling have begun to picture a new way of recording music. Instead of recording the music with a microphone and storing the sound samples on a CD record, we could proceed as follows. Record the musical performance digitally and analyse it with a computer program that would perform two tasks:

(1) Estimate the parameter values for physical models of those instruments that are present in the performance. Since separation of sound signals from a recording is an extremely difficult task, it might be necessary to record some isolated tones from each instrument for the purpose of the analysis.

(2) Follow the music signal through time and estimate the time instants of changes in the control parameters. These changes would correspond to a change of notes, a change of playing style, or some other phenomenon that would be caused by the musician.
The collection of parameters of the physical model and the control parameters would include all the information necessary to reconstruct the musical performance. This technique could be called model-based coding of music. There is no doubt that the amount of information to be stored would be far smaller than that used today for high-quality sound recording, even when sophisticated psychoacoustic coding techniques are used. The sound quality of a physical model can be characterised with a small set of numbers (just a few dozen) and control events occur very seldom (a few events per second) when compared to sampling events in digital recording (44,100 samples per second). The technique may sound unrealistic today, but once again it is worthwhile considering speech processing technology: in digital mobile phones the voice of the speaker is analysed, coded and decoded (resynthesised) with a technique that has its roots in physical modelling of the speech processing mechanism. Thus, model-based coding of music, perhaps first applied to performances by a single musical instrument, may not be as far in the future as one might at first think.
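The storage claim can be made concrete with rough arithmetic. The sample rate and the "few dozen parameters, a few events per second" figures come from the text; the per-parameter and per-event bit sizes below are our assumptions:

```python
# one instrument, one second of sound
cd_like = 44100 * 16        # CD-style stream: samples/s times 16 bits = 705,600 bits/s
model_setup = 50 * 32       # assumed: ~50 model parameters stored once as 32-bit values
events = 5 * 64             # assumed: 5 control events/s at 64 bits each (time + values)
ratio = cd_like / events    # the event stream is orders of magnitude smaller
```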
Virtual reality is also an exciting field where the structure and behaviour of real or imaginary worlds are modelled and animated. With special devices like head-mounted displays, a human observer can be immersed in a virtual environment so deeply that he/she feels like really being inside the simulation. Most of the research so far has been in developing 3D interaction devices and highly believable visual effects. Recently, however, sounds in virtual worlds have also gained interest (Begault 1994). As the animation of virtual objects is often physically based in order to achieve the most lifelike motion, it is natural to use similar principles for sound generation as well. Striking similarities can be seen in the methods used: Pentland and Williams (1989) utilised modal analysis to represent visible deformations and vibrations of colliding objects, and Terzopoulos et al. (1987) applied the masses-and-springs model for the same purpose. Gaver (1994) produced sound effects for user interfaces (so-called auditory icons) based on simulated collisions, when an object breaks into pieces, for example. Concurrently, the synchronisation of animation and sound effects is an area of active research. Takala and Hahn (1992) have outlined a pipelined process architecture for rendering sounds in a fashion analogous to the way photorealistic images are rendered from 3D geometric models. Although high-quality rendering of sound and images cannot be performed in real time, it is still useful in the production of animated films. With substantial simplifications an interactive system can be built, where the user can hear the effects of his/her actions on the virtual objects, feeling also the direction and distance of the sound event (Astheimer 1993). Not so far in the future we may expect to experience a virtual concert hall where virtual musicians play virtual instruments, either by themselves or led by a human conductor.

Vesa Välimäki and Tapio Takala

REFERENCES

Adrien, J. M. 1991. The missing link: modal synthesis. In G. De Poli, A. Piccialli and C. Roads (eds.) Representations of Musical Signals, pp. 267–97. Cambridge, MA: MIT Press.

Arfib, D. 1979. Digital synthesis of complex spectra by means of multiplication of nonlinear distorted sine waves. Journal of the Audio Engineering Society 27(10).

Astheimer, P. 1993. What you see is what you hear – acoustics applied in virtual worlds. Proc. IEEE Symp. on Research Frontiers in Virtual Reality, pp. 100–107. San Jose, CA.

Beauchamp, J. 1979. Brass-tone synthesis by spectrum evolution matching with nonlinear functions. Computer Music Journal 3(2): 35–43. Reprinted in Roads, C., and Strawn, J. (eds.). 1985. Foundations of Computer Music, pp. 95–113. Cambridge, MA: MIT Press.

Begault, D. R. 1994. 3-D Sound for Multimedia and Virtual Reality. Cambridge, MA: Academic Press.

Cadoz, C., Luciani, A., and Florens, J. 1984. Responsive input devices and sound synthesis by simulation of instrumental mechanisms: the CORDIS system. Computer Music Journal 8(3): 60–73. Reprinted in Roads, C. (ed.). 1989. The Music Machine, pp. 495–508. Cambridge, MA: MIT Press.

Chaigne, A. 1992. On the use of finite differences for musical synthesis – application to plucked string instruments. Journal d'Acoustique 5: 181–211.

Chaigne, A., Askenfelt, A., and Jansson, E. V. 1990. Temporal synthesis of string instrument tones. Speech Transmission Laboratory Quarterly Progress and Status Report (STL-QPSR) 4: 81–100. Stockholm, Sweden: Royal Institute of Technology (KTH).

Chowning, J. M. 1973. The synthesis of complex audio spectra by means of frequency modulation. Journal of the Audio Engineering Society 21(7): 526–34. Reprinted in Roads, C., and Strawn, J. (eds.). 1985. Foundations of Computer Music, pp. 6–29. Cambridge, MA: MIT Press.

Cook, P. R. 1988. Implementation of single reed instruments with arbitrary bore shapes using digital waveguide filters. Technical Report No. STAN-M-50. Dept of Music, CCRMA, Stanford University, Stanford, CA.

Cook, P. R. 1991. TBone: an interactive waveguide brass instrument synthesis workbench for the NeXT machine. Proc. 1991 Int. Computer Music Conf. (ICMC'91), pp. 297–9. Montreal, Canada.

Cook, P. R. 1992. A meta-wind-instrument physical model, and a meta-controller for real-time performance control. Proc. 1992 Int. Computer Music Conf. (ICMC'92), pp. 273–6. San Jose, California.

Florens, J.-L., and Cadoz, C. 1991. The physical model: modeling and simulating the instrumental universe. In G. De Poli, A. Piccialli and C. Roads (eds.) Representations of Musical Signals, pp. 227–68. Cambridge, MA: MIT Press.

Gaver, W. W. 1994. Using and creating auditory icons. In G. Kramer (ed.) Auditory Displays. SFI Studies in Sciences of Complexity, Proc. Vol. XVIII, pp. 417–46. Reading, MA: Addison-Wesley.

Hiller, L., and Ruiz, P. 1971. Synthesizing musical sounds by solving the wave equation for vibrating objects: parts I and II. Journal of the Audio Engineering Society 19(6): 462–70 and 19(7): 542–50.

Huopaniemi, J., Karjalainen, M., Välimäki, V., and Huotilainen, T. 1994. Virtual instruments in virtual rooms – a real-time binaural room simulation environment for physical models of musical instruments. Proc. 1994 Int. Computer Music Conf. (ICMC'94), pp. 455–62. Aarhus, Denmark.

Jaffe, D., and Smith, J. O. 1983. Extensions of the Karplus–Strong plucked string algorithm. Computer Music Journal 7(2): 56–69. Reprinted in Roads, C. (ed.). 1989. The Music Machine, pp. 481–94. Cambridge, MA: MIT Press.

Jánosy, Z., Karjalainen, M., and Välimäki, V. 1994. Intelligent synthesis control with applications to a physical model of the acoustic guitar. Proc. 1994 Int. Computer Music Conf. (ICMC'94), pp. 402–6. Aarhus, Denmark.

Karjalainen, M., Huopaniemi, J., and Välimäki, V. 1995. Direction-dependent physical modeling of musical instruments. Proc. 15th Int. Congr. on Acoustics (ICA'95), Vol. 3, pp. 451–4. Trondheim, Norway.

Karjalainen, M., and Laine, U. K. 1991. A model for real-time sound synthesis of guitar on a floating-point signal processor. Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP'91), pp. 3653–6. Toronto, Canada.

Karjalainen, M., Laine, U. K., Laakso, T. I., and Välimäki, V. 1991. Transmission-line modelling and real-time synthesis of string and wind instruments. Proc. 1991 Int. Computer Music Conf. (ICMC'91), pp. 293–6. Montreal, Canada.

Karjalainen, M., and Välimäki, V. 1993. Model-based analysis/synthesis of the acoustic guitar. Proc. Stockholm Music Acoustics Conf. (SMAC'93), pp. 443–7. Stockholm, Sweden.

Karjalainen, M., Välimäki, V., and Jánosy, Z. 1993. Towards high-quality sound synthesis of the guitar and string instruments. Proc. 1993 Int. Computer Music Conf. (ICMC'93), pp. 56–63. Tokyo, Japan.

Karplus, K., and Strong, A. 1983. Digital synthesis of plucked string and drum timbres. Computer Music Journal 7(2): 43–55. Reprinted in Roads, C. (ed.). 1989. The Music Machine. Cambridge, MA: MIT Press.

Keefe, D. H. 1982. Theory of the single woodwind tone hole. Journal of the Acoustical Society of America 72(3): 676–87.

Le Brun, M. 1979. Digital waveshaping synthesis. Journal of the Audio Engineering Society 27(4): 250–66.

McAulay, R. J., and Quatieri, T. F. 1986. Speech analysis/synthesis based on a sinusoidal representation. IEEE Trans. on Acoustics, Speech and Signal Processing 34: 744–54.

McIntyre, M. E., Schumacher, R. T., and Woodhouse, J. 1983. On the oscillations of musical instruments. Journal of the Acoustical Society of America 74(5): 1325–45.

Pentland, A., and Williams, J. 1989. Good vibrations: modal dynamics for graphics and animation. Proc. SIGGRAPH'89. Computer Graphics 23(3): 215–22.

Rocchesso, D., and Turra, F. 1993. A real time clarinet model on the Mars workstation. Proc. X Colloquio di Informatica Musicale, pp. 210–13. Milan, Italy.

Rodet, X. 1984. Time domain formant-wave-function synthesis. Computer Music Journal 8(3): 15–31.

Rodet, X., and Depalle, P. 1992. A new additive synthesis method using inverse Fourier transform and spectral envelopes. Proc. 1992 Int. Computer Music Conf. (ICMC'92), pp. 410–11. San Jose, California.

Savioja, L., Rinne, T. J., and Takala, T. 1994. Simulation of room acoustics with a 3-D finite difference mesh. Proc. 1994 Int. Computer Music Conf. (ICMC'94), pp. 463–6. Aarhus, Denmark.

Serra, X. J., and Smith, J. O. 1991. Spectral modeling synthesis: a sound analysis/synthesis system based on a deterministic plus stochastic decomposition. Computer Music Journal 14(4): 12–24.

Smith, J. O. 1985. A new approach to digital reverberation using closed waveguide networks. Proc. 1985 Int. Computer Music Conf. (ICMC'85), pp. 47–53. Vancouver, Canada.

Smith, J. O. 1986. Efficient simulation of the reed-bore and bow-string mechanisms. Proc. 1986 Int. Computer Music Conf. (ICMC'86), pp. 275–80. The Hague, The Netherlands.

Smith, J. O. 1987. Music applications of digital waveguides. Technical Report No. STAN-M-39. Dept of Music, CCRMA, Stanford University, Stanford, CA.

Smith, J. O. 1991. Viewpoints on the history of digital synthesis. Proc. 1991 Int. Computer Music Conf. (ICMC'91), pp. 1–10. Montreal, Canada.

Smith, J. O. 1992. Physical modeling using digital waveguides. Computer Music Journal 16(4): 74–87.

Smith, J. O. 1993. Efficient synthesis of stringed musical instruments. Proc. 1993 Int. Computer Music Conf. (ICMC'93), pp. 64–71. Tokyo, Japan.

Sullivan, C. S. 1990. Extending the Karplus–Strong algorithm to synthesize electric guitar timbres with distortion and feedback. Computer Music Journal 14(3): 26–37.

Takala, T., and Hahn, J. 1992. Sound rendering. Proc. SIGGRAPH'92. Computer Graphics 26(2): 211–20.

Terzopoulos, D., Platt, J., Barr, A., and Fleischer, K. 1987. Elastically deformable models. Proc. SIGGRAPH'87. Computer Graphics 21(3): 205–14.

Van Duyne, S. A., and Smith, J. O. 1993. Physical modeling with the 2-D digital waveguide mesh. Proc. 1993 Int. Computer Music Conf. (ICMC'93), pp. 40–47. Tokyo, Japan.

Välimäki, V., Huopaniemi, J., Karjalainen, M., and Jánosy, Z. 1995. Physical modeling of plucked string instruments with application to real-time sound synthesis. AES 98th Convention. Paris, France. A revised version will be published in the Journal of the Audio Engineering Society.

Välimäki, V., and Karjalainen, M. 1994. Digital waveguide modeling of wind instrument bores constructed of truncated cones. Proc. 1994 Int. Computer Music Conf. (ICMC'94), pp. 423–30. Aarhus, Denmark.

Välimäki, V., Karjalainen, M., Jánosy, Z., and Laine, U. K. 1992. A real-time DSP implementation of a flute model. Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP'92), Vol. 2, pp. 249–52. San Francisco, California.

Välimäki, V., Karjalainen, M., and Laakso, T. I. 1993. Modeling of woodwind bores with finger holes. Proc. 1993 Int. Computer Music Conf. (ICMC'93), pp. 32–9. Tokyo, Japan.

Välimäki, V., Laakso, T. I., Karjalainen, M., and Laine, U. K. 1992. A new computational model for the clarinet. Proc. 1992 Int. Computer Music Conf. (ICMC'92), Addendum. San Jose, California.
... For the creation of the virtual musical instruments and their use during the performance, the physical modeling method is going to be used as it provides not only more realistic but also more expressive synthetic sound (Välimäki, Takala 1996;Aramaki et al. 2001;Rabenstein, Trautmann 2001). The usual methods of sound synthesis (FM, additive, subtractive, AM, PD, Granular: Miranda 2002) try to reproduce the spectral content of the acoustic signal produced by a musical instrument, but their parameters are not related to the instrument's physical parameters. ...
... The first is responsible for the audio reconstitution of the produced sound in real-time, and we call it Acoustic Virtual Musical Instrument (AVMI) and the second one is the Visual Representation, which is usually a realistic three-dimensional representation. The most promising simulation method of AMIs is based on physical modeling algorithms that solve the system of equations that corresponds to the acoustics of the real musical instrument (Välimäki V, Takala 1996). A digitally simulated AMI has to produce a sound as similar as possible to the sound that the corresponding musical instrument makes and, moreover, to enable the ability to interact with it through an external physical apparatus. ...
Full-text available
A significant number of Ancient Musical Instruments (AMIs) are exhibited in archaeological museums all over the world. Organized sound (music and songs) was the prominent factor in the process of both formulating and addressing intellectual activity and artistic creation. Thus, the way AMIs sound is a key element of study for many scientific fields such as anthropology, archaeology, and archaeomusicology. Most of the time, the excavated instruments are not in good condition and rather fragile to move around (in order to perform studio recordings or exhibit them). Building replicas was the only way to study their performance. Unfortunately, replicas are not trivial to build and, once built, not modifiable. On the other hand, digitally simulated instruments are easier to build and modify (e.g., in terms of geometry, material, etc.), which is a rather important feature in order to study them. Moreover, the audio stimulus and the digital interaction with an AMI through a Graphical User Interface would give more engagement and knowledge to the museum’s visitor. In this work, we show the simulation methods of wind (classes: Aulos, Plagiaulos, Syrinx, and Salpinx) and string (classes: Phorminx, Chelys, Barbitos, Kithara, and Trigonon) Greek AMIs and the relevant built-applications useful to scientists and broader audience. We here propose a user-friendly, adaptable, and expandable digital tool which reproduces the sound of the above classes of AMIs and will: a) allow the museum scientists to create specific Auditory Virtual Musical Instruments and b) enrich the experience of a museum visitor (either in situ or on line) through a digital sound reconstruction and a 3D visual representation of AMIs, allowing real-time interaction and even music creation.
... State-of-the-art physical modeling (PM) techniques [8]- [10] have been put into practice by commercial companies (Native Instruments 1 , Modartt 2 , Applied Acoustic Systems 3 , etc.) to realistically sonify various popular instruments. Distinct simulation techniques of single-reed wind instruments have been proposed [11]- [15], leading to the development of commercial pieces of software (e.g., Swam Clarinets 4 ). ...
Full-text available
We present a simulation method for the auralization of the ancient Greek double-reed wind instrument Aulos. The implementation is based on Digital Signal Processing and physical modeling techniques for the instrument’s two parts: the excitation mechanism and the acoustic resonator with toneholes. Single-reeded instruments are in-depth studied firstly because their excitation mechanism is the one used in a great amount of modern wind-reed instruments and secondly because the physics governing the phenomena is less complicated than the double-reeded instruments. We here provide a detailed model of a system comprised of a double-reed linked to an acoustic resonator with toneholes to sonify Aulos. We validate our results by comparing our method’s synthesized signal with recordings from a replica of Aulos of Poseidonia built in our lab. The comparison showed that the fundamental frequencies and the first three odd harmonics of the signals differ 6, 5, 3, and 2 cents on average, respectively, which is below the Just Noticeable Difference threshold.
... To date indeed, thanks to the increase in computing resources, there exists a huge number of research trends and open issues in the field of Sound and Music Computing (SMC), and valuable overviews can be found in literature [9], where SMC research is divided into three main areas: sound, music, and interaction. The sound synthesis can be considered belonging to the first of the areas, and many of the existing techniques are aimed at simulating the wave equation solutions in a simplified manner [10]. ...
The present work deals with the synthesis of sounds produced by brass instruments through direct physical modelling. The purpose is the development of an integrated methodology for evaluating the response of a wind instrument while taking into account the properties of the surrounding environment. The frequency response of the resonator and the performing environment is identified by means of a Boundary Integral Equation approach. The formulation produces the matrix transfer function between the inflow at the input section of the instrument bore and the signal evaluated at an arbitrary location, and can account for the response of any boundary and object present in the surroundings. The reflection function obtained from the above model is coupled to a simplified valve model used to represent the behaviour of the excitation mechanism. The algorithm has proved accurate and efficient in offline calculation, and the observed performance discloses the possibility of real-time implementation.
... The trend of turning everything into a digital form has also impacted the musical domain, and in particular that of devices used in the production of electronic music [29], an endeavor that is commonly referred to as "virtual analog". Examples within this category include virtual analog synthesizers [22], virtual analog filters [30], virtual musical instruments [19,32], reverb emulations [31], and guitar amplifier models [21]. ...
... Those of the first category are based on the physical and acoustic events involved in the sound production of a given instrument [2]. Through the analysis of these instruments it is possible to develop a system of equations that simulates the instrument in a realistic way. ...
Conference Paper
Full-text available
The structure of a digital musical instrument (DMI) can be split into three parts: interface, mapping, and synthesizer. For DMIs in which sound synthesis is done via software, the interaction interface serves to capture the performer's gestures, which can be mapped with various techniques to different sounds. In this work, we use videogame controllers as an interface for musical interaction. Owing to their strong presence in popular culture and their ease of access, even people who are not in the habit of playing electronic games have probably interacted with this kind of interface at some point. Thus, gestures such as pressing a sequence of buttons, pressing them simultaneously, or sliding the fingers across the controller can be mapped to musical creation. This work aims at a strategy in which several gestures captured by the interface can influence one or several parameters of the sound synthesis, a mapping denominated many-to-many. Button combinations used to perform game actions that are common in fighting games, such as Street Fighter, were mapped to the synthesizer to create a piece of music. Experiments show that this mapping is capable of influencing the musical expression of a DMI, bringing it closer to an acoustic instrument.
... Waveguide synthesis has been used since the second half of the 1980s to model a wide range of musical instruments [24][25][26][27]. The main advantages of this technique are its simplicity and efficiency, while it still sounds adequately realistic. ...
Full-text available
Two concepts are presented, extended, and unified in this paper: mobile device augmentation towards musical instruments design and the concept of hybrid instruments. The first consists of using mobile devices at the heart of novel musical instruments. Smartphones and tablets are augmented with passive and active elements that can take part in the production of sound (e.g., resonators, exciter, etc.), add new affordances to the device, or change its global aesthetics and shape. Hybrid instruments combine physical/acoustical and “physically informed” virtual/digital elements. Recent progress in physical modeling of musical instruments and digital fabrication is exploited to treat instrument parts in a multidimensional way, allowing any physical element to be substituted with a virtual one and vice versa (as long as it is physically possible). A wide range of tools to design mobile hybrid instruments is introduced and evaluated. Aesthetic and design considerations when making such instruments are also presented through a series of examples.
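The digital waveguide technique mentioned in the excerpt above can be illustrated with a minimal plucked-string loop in the Karplus-Strong style: a delay line models wave propagation along the string, and a two-point average in the feedback path acts as the loss filter. All names and parameter values below are illustrative choices, not code from any cited work.

```python
import numpy as np

def waveguide_string(freq, duration, sample_rate=44100, decay=0.996):
    """Minimal plucked-string digital waveguide (Karplus-Strong form)."""
    n = int(sample_rate * duration)
    delay_len = int(sample_rate / freq)          # round-trip delay in samples
    line = np.random.uniform(-1, 1, delay_len)   # noise burst = pluck excitation
    out = np.empty(n)
    for i in range(n):
        out[i] = line[0]
        # averaging low-pass filter in the loop damps the high partials faster
        new_sample = decay * 0.5 * (line[0] + line[1])
        line = np.roll(line, -1)
        line[-1] = new_sample
    return out

tone = waveguide_string(220.0, 1.0)
```

The delay-line length sets the pitch, and the loop filter sets the decay, which is why the technique is both cheap and naturally string-like in sound.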
... Researchers working with interactive audio and haptic feedback have called their instruments VRMIs (see, e.g., Leonard et al. 2013); however, visual feedback was absent. In this article we distinguish, on the one hand, between virtual musical instruments (VMIs), defined as software simulations or extensions of existing musical instruments with a focus on sonic emulation, for example, by physical modeling synthesis (Välimäki and Takala 1996) and, on the other hand, virtual reality musical instruments (VRMIs), where the instruments also include a simulated visual component delivered using either a head-mounted display (HMD) or other forms of immersive visualization systems such as a CAVE (Cruz-Neira, Sandin, and De Fanti 1993). We do not consider those instruments in which 3-D visual objects are projected using 2-D projection systems, such as the environment for designing virtual musical instruments with 3-D geometry proposed by Axel Mulder (1998) and the tabletop based systems proposed by Sergi Jordà (2003). ...
The rapid development and availability of low-cost technologies have created a wide interest in virtual reality. In the field of computer music, the term “virtual musical instruments” has been used for a long time to describe software simulations, extensions of existing musical instruments, and ways to control them with new interfaces for musical expression. Virtual reality musical instruments (VRMIs) that include a simulated visual component delivered via a head-mounted display or other forms of immersive visualization have not yet received much attention. In this article, we present a field overview of VRMIs from the viewpoint of the performer. We propose nine design guidelines, describe evaluation methods, analyze case studies, and consider future challenges.
Full-text available
This article reflects on the categories of analysis that are used for works of different poetics and styles. We recall that taxonomy was the first category of analysis to be used. It remains widely employed, for example in traditional harmonic analysis. The second category of analysis is the functional one, employed for example in Riemannian harmonic analysis. This category reinvents itself in the strategy of reverse engineering, such as that used in counting the twelve-tone series. The third category is hermeneutics, which also involves heuristic strategies. It has been the basis of several recent analytical theories, such as Topic Theory and Narrativity. Finally, we recognize as a fourth category the strategy of modelling, applied to works whose analysis resists the previous categories. We propose that the analysis of timbre is particularly suited to the modelling strategy, recognizing that the technique of physical modelling by digital means offers a promising perspective in that direction.
Conference Paper
This paper describes the development of TheStringPhone, a physical-modeling-based polyphonic digital musical instrument that uses the human voice as input excitation. The core parts of the instrument include digital filters, waveguide sections, and feedback delay networks for reverberation. We describe the components of the instrument and the results of an informal evaluation with different musicians.
Conference Paper
Full-text available
This paper deals with waveguide simulation of acoustic tube systems that are constructed of conical tube sections. Digital reflection filters needed for modeling the scattering that occurs at a junction of conical tubes are presented. It is known that the reflection filter can be unstable in certain cases. However, the overall system is stable if it corresponds to a physically realizable system. Furthermore, a fractional delay waveguide model (FDWM), where the length of each conical tube section can be accurately adjusted, is introduced. The methods described in this paper are directly applicable to the waveguide synthesis of wind instruments.
Conference Paper
Full-text available
The paper points out the limitations of MIDI control of acoustic guitar sound synthesis, and presents ideas and solutions for overcoming them.
Conference Paper
Full-text available
The sound quality of real-time synthesis based on physical models has so far been inferior to that of sampling techniques. In this paper we introduce new principles to make model-based sound synthesis of the guitar and other plucked string instruments more attractive from the viewpoint of sound quality. A major improvement is achieved by estimating the model parameters and the excitation signal from the sound of an acoustic instrument. It is shown that the impulse response of the body is included in this excitation. More complex string behavior, including nonlinearities in some instruments, is briefly studied. Furthermore, different aspects of controlling the real-time synthesis model are discussed. High-quality real-time synthesis is shown to be feasible using a single digital signal processor.
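The excitation-plus-string-model structure described in this abstract can be sketched as a single delay loop driven by an arbitrary excitation signal. In the actual method the excitation would be inverse-filtered from a recording (and would thus carry the body's impulse response); here a decaying noise burst merely stands in for it, and all parameter values and names are illustrative assumptions.

```python
import numpy as np

def string_model(excitation, freq, sample_rate=44100, g=0.995, a=0.5):
    """Single delay-loop string driven by an external excitation signal.

    g is the loop gain, a the coefficient of a one-pole low-pass loop
    filter (both illustrative values, not from the paper).
    """
    delay_len = int(sample_rate / freq)
    line = np.zeros(delay_len)
    y_prev = 0.0
    out = np.empty(len(excitation))
    for i, x in enumerate(excitation):
        s = line[0]
        # one-pole low-pass loop filter: y[n] = (1-a)*s[n] + a*y[n-1]
        y_prev = (1.0 - a) * s + a * y_prev
        fb = g * y_prev + x          # inject the excitation into the loop
        line = np.roll(line, -1)
        line[-1] = fb
        out[i] = s
    return out

# placeholder excitation: a short decaying noise burst standing in for
# the inverse-filtered recording of a real pluck
rng = np.random.default_rng(0)
exc = np.zeros(44100)
exc[:400] = rng.uniform(-1, 1, 400) * np.linspace(1, 0, 400)
tone = string_model(exc, 196.0)
```

Because the body response rides along in the excitation, the loop itself can stay simple, which is what makes the approach cheap enough for a single DSP chip.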
A new application of the well-known process of frequency modulation is shown to result in a surprising control of audio spectra. The technique provides a means of great simplicity to control the spectral components and their evolution in time. Such dynamic spectra are diverse in their subjective impressions and include sounds both known and unknown.
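Chowning's technique boils down to one formula: a sine carrier whose instantaneous phase is modulated by another sine, y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t)). A minimal sketch with illustrative parameter values:

```python
import numpy as np

def fm_tone(fc, fm, index, duration, sample_rate=44100):
    """Simple FM synthesis: carrier fc, modulator fm, modulation index I.

    Sidebands appear at fc +/- k*fm with amplitudes given by Bessel
    functions J_k(I), so the index controls spectral richness.
    """
    t = np.arange(int(sample_rate * duration)) / sample_rate
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

# a simple integer fc/fm ratio (here 1:1) yields a harmonic spectrum;
# putting an envelope on `index` would give the dynamic spectra the
# abstract refers to
tone = fm_tone(440.0, 440.0, index=2.0, duration=0.5)
```

The appeal noted in the abstract is exactly this economy: one operator pair and one index sweep replace banks of oscillators or filters.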
Formant-Wave-Function (FOF) synthesis is a method for directly calculating the amplitude of the waveform of a signal as a function of time. Many signals can be modeled as a pair: excitation function and parallel filter. In the FOF method this pair is replaced by a single formula describing, in a more or less approximate way, the output of the filter. Two types of advantages have motivated the first uses of the FOF technique: on the one hand, in certain cases, the FOF formula can be simplified to a point where calculations are fast and easy ([1], [2], [4], [5]). On the other hand, the FOF method allows modeling of signals without the need to separate "a priori" the excitation function and the filter ([3], [53]).
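A FOF grain of the kind described above can be sketched as a sinusoid at the formant frequency shaped by a smooth onset and an exponential decay, approximating the impulse response of one formant filter; retriggering such grains at the fundamental period and summing them yields a vowel-like tone. The parameter values below are illustrative assumptions, not those of the paper.

```python
import numpy as np

def fof_grain(formant_freq, bandwidth, attack, duration, sample_rate=44100):
    """One FOF grain: damped sinusoid with a raised-cosine onset."""
    t = np.arange(int(sample_rate * duration)) / sample_rate
    decay = np.exp(-np.pi * bandwidth * t)       # bandwidth sets the decay rate
    env = np.where(t < attack,
                   0.5 * (1.0 - np.cos(np.pi * t / attack)),  # smooth onset
                   1.0)
    return env * decay * np.sin(2 * np.pi * formant_freq * t)

# vowel-like tone: grains retriggered at the fundamental period and summed
f0, sr = 110.0, 44100
grain = fof_grain(600.0, 80.0, 0.003, 0.05, sr)
out = np.zeros(sr // 2)
for start in range(0, len(out) - len(grain), int(sr / f0)):
    out[start:start + len(grain)] += grain
```

Because each grain is an explicit time-domain formula, no separate excitation signal or filter state has to be maintained, which is the advantage the abstract highlights.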