Article

Digital Synthesis of Complex Spectra by means of Multiplication of Non-linear Distorted Sine Waves

Authors: Daniel Arfib

Abstract

A technique for the synthesis of complex dynamic spectra is described. The first step in the synthesis method is to distort sine waves with nonlinear transfer functions. The resulting spectra depend upon the input amplitudes and the nature of the distortions. By multiplying one such distorted source by another, a spectrum is obtained that is the convolution of the spectra of the distorted sine waves. By varying the amplitudes of the input sine waves one can produce complex spectral evolutions. Harmonic as well as inharmonic spectra can be produced, and control over the formant structure is provided. With one distorted and one pure wave, results similar to those of J. M. Chowning's frequency modulation technique are produced. With several distorted signals very complex spectral evolutions are possible. Various implementations using a computer or special-purpose digital device are discussed.
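As a concrete illustration of the abstract's central claim (a numpy sketch, not code from the paper): shaping two sine waves with Chebyshev polynomials, whose identity T_k(cos t) = cos(kt) makes each distorted spectrum an exact single harmonic, and then multiplying them produces components at the sum and difference frequencies, i.e. the convolution of the two spectra.

```python
import numpy as np

N = 4096
n = np.arange(N)

# Chebyshev polynomials as waveshapers: T_k(cos t) = cos(k*t), so shaping a
# unit-amplitude cosine with T_k yields exactly the k-th harmonic.
T3 = lambda x: 4*x**3 - 3*x
T2 = lambda x: 2*x**2 - 1

a = T3(np.cos(2*np.pi*40*n/N))   # pure harmonic at bin 3*40 = 120
b = T2(np.cos(2*np.pi*7*n/N))    # pure harmonic at bin 2*7  = 14

# Multiplying the two distorted sources convolves their spectra:
# cos(bin 120) * cos(bin 14) -> components at 120-14 = 106 and 120+14 = 134.
spec = np.abs(np.fft.rfft(a*b))
peaks = np.flatnonzero(spec > N/8)
print(peaks)   # [106 134]
```

With richer shaping functions each source contributes many harmonics, and the product spectrum is correspondingly the full convolution of the two line spectra.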

... This implementation is therefore particularly efficient. Le Brun [2] and Arfib [3] recently studied the case of a polynomial shaping function with a sinusoidal input signal. Reinhard [4] extended these results to an input sum of two sinusoidal signals of different frequencies. ...
... Therefore a > 1 is required. The function will be F(x) = 1/(x - a) (3) and its Fourier coefficients result in the expression (see Appendix) ...
... The polynomial distortion is easily interpreted by expanding the polynomial into a sum of Chebyshev polynomials [2], [3]. If ...
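The expansion mentioned in this snippet can be reproduced with numpy's polynomial utilities; for instance, x^3 decomposes as (3/4)T1 + (1/4)T3, so shaping a unit-amplitude cosine with x^3 yields a fundamental and a third harmonic with exactly those weights. A minimal check:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Power-series coefficients of f(x) = x^3, lowest order first.
cheb = C.poly2cheb([0, 0, 0, 1])
print(cheb)   # [0.  0.75  0.  0.25]  i.e.  x^3 = (3/4)*T1(x) + (1/4)*T3(x)

# Driving this shaper with cos(t) therefore yields 0.75*cos(t) + 0.25*cos(3t):
# the output harmonic amplitudes are read off directly from the Chebyshev weights.
```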
Article
Full-text available
A nonlinear sound synthesis technique that uses a sinusoidal input signal and a rational shaping function is described. Complex spectrum evolutions are easily obtainable by varying different parameters. The global spectrum shape is essentially defined by only two parameters, those controlling the bandwidth and the formant position. Multiplication by a carrier allows harmonic and inharmonic spectra to be obtained.
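As a hedged numerical sketch of how a rational shaping function gives a controllable spectral envelope: assuming the shaper 1/(a - x) driven by a cosine (a specific choice for illustration, not necessarily the exact function of the cited paper), the harmonic amplitudes decay geometrically with ratio rho = a - sqrt(a^2 - 1), so the single parameter a sets the bandwidth.

```python
import numpy as np

N, a = 4096, 1.5
t = 2*np.pi*np.arange(N)/N
y = 1.0/(a - np.cos(t))            # rational waveshaper driven by one cosine

h = np.abs(np.fft.rfft(y))/N       # harmonic magnitudes (h[0] is DC)
rho = a - np.sqrt(a*a - 1.0)       # predicted decay ratio between harmonics
print(h[2]/h[1], h[3]/h[2], rho)   # all three agree: the spectrum is geometric
```

Moving a toward 1 pushes rho toward 1 and widens the spectrum; multiplying y by a carrier then shifts this envelope to a formant position, matching the two-parameter control described in the abstract.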
... However, it is possible to give a practical realization of the reconstruction filter by an impulse response that approximates the sinc function. Whenever the condition (8) is violated, the periodic replicas of the spectrum have components that overlap with the base band. This phenomenon is called aliasing or foldover and is avoided by forcing the continuous-time original signal to be bandlimited to the Nyquist frequency. ...
... In the continuous-time case, the system is stable if all the poles are on the left of the imaginary axis or, equivalently, if the strip of convergence (see appendix A.8.1) ranges from a negative real number to infinity. In the discrete-time case, the system is stable if all the poles are within the unit circle or, equivalently, the ring of convergence (see appendix A.8.3) has the inner radius of magnitude less than one and the outer radius extending to infinity. ...
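The discrete-time stability criterion quoted above can be checked mechanically; this small helper (an illustration, not from the cited text) tests whether all poles of a causal IIR denominator lie strictly inside the unit circle:

```python
import numpy as np

def is_stable(a):
    """True if the causal IIR filter with denominator a = [1, a1, ..., aM]
    has all poles strictly inside the unit circle."""
    return bool(np.all(np.abs(np.roots(a)) < 1.0))

print(is_stable([1.0, -1.8, 0.81]))  # double pole at z = 0.9  -> True
print(is_stable([1.0, -1.7, 0.6]))   # poles at z = 1.2, 0.5   -> False
```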
... Often, the continuous-time impulse response is derived from a decomposition of the transfer function of a system into simple fractions. Namely, the transfer function of a continuous-time system can be decomposed ...
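The decomposition into simple fractions mentioned here is easy to illustrate: for H(s) = 1/((s+1)(s+2)) the residues give h(t) = e^(-t) - e^(-2t). A minimal numerical confirmation (the example system is chosen for illustration):

```python
import numpy as np

# H(s) = 1/((s+1)(s+2)) = 1/(s+1) - 1/(s+2): residues r_i at poles p_i.
r = np.array([1.0, -1.0])
p = np.array([-1.0, -2.0])

t = np.linspace(0.0, 5.0, 11)
h_sum = (r[:, None]*np.exp(p[:, None]*t)).sum(axis=0)  # h(t) = sum_i r_i e^{p_i t}
h_ref = np.exp(-t) - np.exp(-2*t)                      # closed-form impulse response
print(np.allclose(h_sum, h_ref))   # True
```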
Article
Full-text available
... Being able to adjust all the parameters (gains, filter parameters, transfer function of the waveshaper, etc.) is crucial to fine-tune this stage using WebAudio nodes. We strongly advise the player to watch this YouTube video (footnote 11), which shows the differences in sound and dynamics with and without this loop (Power Amp on/off) in our simulation. It sounds and plays very well (we made several user evaluations [1,2] that showed that guitarists, even professional ones, liked the way the dynamics of a real amp was simulated). ...
... The most straightforward method for obtaining signal distortion with digital devices is to apply an instantaneous nonlinear transformation, using a so-called "transfer function" from the input signal to the output variable. This type of timbre alteration is coined waveshaping [11,12]. (Footnote 3: WebAssembly is a W3C standard: a portable binary-code format for executable programs, firstly to be used on the Web, but also in native environments.) ...
... https://jsbin.com/zotaver/edit ; Aiken, "The last word on Biasing": https://www.aikenamps.com/the-last-word-on-biasing ; WebAudio implementation demo of the Power Amp stage: https://www.youtube.com/watch?v=-NdMdJQx2Bw ; Proceedings of the 2nd International Faust Conference, Maison des Sciences de l'Homme Paris Nord, Saint-Denis, France, December 1-2, 2020 ...
Conference Paper
Full-text available
In this paper, we detail our ongoing browser-based recreations of famous tube guitar amplifiers and describe the JavaScript implementations we have been developing using the WebAudio API. The tricky part of such amplifiers is the power stage (Power Amp) which contains a parametric negative feedback loop. We show the limits of the high-level WebAudio API layer, and how FAUST allows us to re-implement the Power Amp part more faithfully. Finally we also compare FAUST vs JavaScript development, and mention future optimizations.
... The first documented use of waveshaping in the digital domain can be traced back to 1969, when Jean-Claude Risset emulated the sound of a clarinet by distorting a sinusoid with a clipping function [2]. Waveshaping techniques were extensively researched within the context of computer music in the 1970s, with several authors exploring the use of Chebyshev polynomials in particular, as an accurate and computationally cheap alternative to additive synthesis [1, 3-5]. The underlying principles behind waveshaping synthesis are closely related to other well-known synthesis techniques, such as frequency modulation (FM) and phase distortion (PD) synthesis [6,7]. ...
... A major challenge in VA modeling of nonlinear circuits, and digital waveshaping in general, is aliasing suppression. Early research on waveshaping synthesis addressed this issue by using low-order polynomial transfer functions, which not only allowed full parametric control of the produced spectrum but also ensured that the output waveform was bandlimited [4]. In VA modeling, high oversampling factors are usually necessary to prevent harmonics introduced by nonlinearities from reflecting into the baseband as aliases [13]. ...
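The aliasing mechanism described above can be reproduced in a few lines: a cubic waveshaper applied to a 9 kHz sine at a 48 kHz rate creates a third harmonic at 27 kHz, beyond Nyquist, which folds down to 21 kHz (frequencies chosen for illustration):

```python
import numpy as np

N, fs = 4096, 48000.0
k = 768                              # f0 = k*fs/N = 9 kHz, on an exact FFT bin
x = np.sin(2*np.pi*k*np.arange(N)/N)

# x^3 = (3 sin t - sin 3t)/4: the 3rd harmonic sits at 27 kHz > fs/2,
# so without oversampling it folds down to 48 - 27 = 21 kHz.
y = x**3
spec = np.abs(np.fft.rfft(y))
top = np.sort(np.argsort(spec)[-2:])   # the two strongest components
print(top*fs/N)                        # [ 9000. 21000.]
```

Running the same nonlinearity at 8x the rate would place the 27 kHz harmonic below the raised Nyquist limit, so it could be filtered off before decimation instead of folding into the audio band.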
... where f(·) is the transfer function of the system and t is time. In the synthesis literature, the term "transfer function" is commonly used to denote the waveshaping function [4]. It should not be confused with the s- and z-domain transfer functions used in linear system analysis. ...
Article
Full-text available
Wavefolders are a particular class of nonlinear waveshaping circuits, and a staple of the “West Coast” tradition of analog sound synthesis. In this paper, we present analyses of two popular wavefolding circuits—the Lockhart and Serge wavefolders—and show that they achieve a very similar audio effect. We digitally model the input–output relationship of both circuits using the Lambert-W function, and examine their time- and frequency-domain behavior. To ameliorate the issue of aliasing distortion introduced by the nonlinear nature of wavefolding, we propose the use of the first-order antiderivative method. This method allows us to implement the proposed digital models in real-time without having to resort to high oversampling factors. The practical synthesis usage of both circuits is discussed by considering the case of multiple wavefolder stages arranged in series.
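The first-order antiderivative method mentioned in this abstract replaces f(x[n]) by the divided difference of its antiderivative, (F(x[n]) - F(x[n-1]))/(x[n] - x[n-1]), which averages out frequency content above Nyquist. The sketch below applies it to a hard clipper rather than the paper's Lambert-W wavefolder, purely to keep the antiderivative short:

```python
import numpy as np

def clip(x):
    return np.clip(x, -1.0, 1.0)

def clip_ad1(x):
    """First antiderivative of the hard clipper."""
    return np.where(np.abs(x) <= 1.0, 0.5*x*x, np.abs(x) - 0.5)

def adaa_clip(x, eps=1e-9):
    """First-order antiderivative antialiasing of the hard clipper."""
    y = np.empty_like(x)
    y[0], x1 = clip(x[0]), x[0]
    for n in range(1, len(x)):
        d = x[n] - x1
        if abs(d) < eps:                       # ill-conditioned: use the midpoint
            y[n] = clip(0.5*(x[n] + x1))
        else:                                  # divided difference of antiderivative
            y[n] = (clip_ad1(x[n]) - clip_ad1(x1))/d
        x1 = x[n]
    return y

print(adaa_clip(np.linspace(-2.0, 2.0, 5)))   # [-1.  -1.  -0.5  0.5  1. ]
```

Within the linear region the method reduces to the two-sample average of the input (a half-sample delay), which is the small price paid for the alias suppression.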
... Previous research on VA modeling of synthesizer circuits has concentrated on VCFs [6,7,8,9,10], oscillators [11,12,13], and effects processors [4,14,15]. Of related interest to this study is the pioneering work done during the 1970s on digital waveshaping synthesis [16,17,18]. This type of synthesis (much like West Coast synthesis) exploited the use of nonlinear waveshaping, e.g., via Chebyshev polynomials, to create harmonically-rich sounds from sinusoidal waveforms. ...
... Control parameters IA and IB were kept constant between stages. The DC blocker (17) was used between each stage. As shown in these plots, the cascaded configuration no longer operates as originally intended. ...
Conference Paper
Full-text available
The Serge Triple Waveshaper (TWS) is a synthesizer module designed in 1973 by Serge Tcherepnin, founder of Serge Modular Music Systems. It contains three identical waveshaping circuits that can be used to convert sawtooth waveforms into sine waves. However, its sonic capabilities extend well beyond this particular application. Each processing section in the Serge TWS is built around what is known as a Norton amplifier. These devices, unlike traditional operational amplifiers, operate on a current differencing principle and are featured in a handful of iconic musical circuits. This work provides an overview of Norton amplifiers within the context of virtual analog modeling and presents a digital model of the Serge TWS based on an analysis of the original circuit. Results obtained show the proposed model closely emulates the salient features of the original device and can be used to generate the complex waveforms that characterize “West Coast” synthesis.
... Terms like overdrive, distortion, fuzz and buzz are used to describe similar effects of distorting the waveform of audio signals. The easiest memoryless way to design distortion-type effects is by waveshaping [Schaefer 1970], [Arfib 1979], [LeBrun 1979], [De Poli 1984], [Fernandez-Cid, Quiros 2001]. In chapter 7 we will present several transfer characteristics found in the relevant literature. ...
... The most common way to distort a signal is by waveshaping [Schaefer 1970], [Arfib 1979], shown in figure 7.38 with its signal processing block. ...
Thesis
Full-text available
This thesis proposes the use of physical modelling techniques for the design of digital audio processing algorithms intended for music creation. The basic idea on which it rests is the presentation of a new and innovative way of controlling audio processing algorithms based on the interaction paradigm termed instrumental interaction. A physical modelling formalism was sought that directly enables the search for new timbres through audio processing techniques; the present research therefore has a clearly musical orientation. The work provides new results and raises new questions concerning audio processing for music creation and the design of digital audio effects. A novelty is the introduction of instrumental gestures for the control of audio processing algorithms. The same idea was also applied to the frequency modulation (FM) sound synthesis algorithm.
... Another implementation (one that I used most often) takes advantage of the fact that functional iteration synthesis can be seen as a kind of generalized waveshaping synthesis. The operation of waveshaping (Arfib, 1979) is the same as the first iterate of mapping with Chebyshev polynomials. The first iteration of the sine map is like using a sinusoidal transfer function as the waveshaper. ...
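The observation in this snippet, that the first iterate of the sine map is ordinary waveshaping with a sinusoidal transfer function, can be sketched as follows (parameter values are illustrative, not taken from the text):

```python
import numpy as np

def sine_map_iterates(freq, k, r=3.0, N=1024, fs=8000.0):
    """Drive the k-th iterate of the sine map x -> sin(r*x) with a sinusoid."""
    x = np.sin(2*np.pi*freq*np.arange(N)/fs)
    for _ in range(k):
        x = np.sin(r*x)        # each pass is one application of the waveshaper
    return x

x0 = sine_map_iterates(500.0, 0)   # plain sinusoid
x1 = sine_map_iterates(500.0, 1)   # first iterate: sinusoidal-transfer waveshaping
# Higher k yields progressively richer and, for large enough r, chaotic spectra.
```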
Article
Full-text available
This paper overviews an approach to digital sound synthesis based on iterated nonlinear functions, and describes its use in the creation of sounds of textural type. Due to the nonlinear dynamics of the iterated process, time-changing sonorities are synthesized, reminiscent of environmental sound events and effects of “acoustic turbulence.” This opens the way to the modeling of perceptual attributes of complex auditory images. The experiments documented here are drawn from the author's computer music composing. They may extend to the creation of synthetic, but credible, auditory scenes in multimedia applications, virtual reality and film soundtracks. However, the paper mainly emphasizes a systemic view of sound synthesis. By arguing that special attention should be paid to the dynamics internal to the sound generating mechanism, it raises the issue of the ecological relevance of sound synthesis techniques to an approach of perceptual modeling.
... The work presented here falls within the field of sound synthesis, which saw a certain variety of inventions in its first decades: wavetable and unit-generator synthesis (MUSIC V), frequency modulation [9], nonlinear distortion [2], and granular synthesis [13], to cite only the main ones. Physical modelling synthesis [1,3,15], although it can indeed be presented as a synthesis method, corresponds to a profound paradigm shift. ...
Article
ABSTRACT: Sound synthesis by particle-based physical modelling, developed at the ICA laboratory and at ACROE with the CORDIS-ANIMA language and the GENESIS software for sound and music creation, now stands as a general paradigm capable of forming the core of a complete environment for music creation, from the creation of sound to macro-temporal and macro-structural composition. The "inverse problem" in this context arises during one possible phase of the creative process: given a (simple or complex) sound result set as a target, which physical model (in all that characterizes it) should be brought into play to obtain it? More generally, the aim is to determine methods for defining, as completely as possible, a generating process from a set of knowledge about what it must produce. This article formalizes this inverse problem and presents its first practical resolutions.
... Other related methods have proved to be efficient for signal synthesis, such as the waveshaping technique [Arfib, 1979] [Lebrun, 1979]. I shall briefly recall the mathematics of this method since it will be used to model the deterministic part of a source signal in chapter 6. ...
Thesis
Full-text available
Sound modeling consists in designing synthesis models to reproduce and manipulate natural sounds. The aim of this work is to define sound models taking into account physical aspects linked to the sound source and their perceptive influence. For this purpose, a combination of physical and signal models has been used. The document starts with a presentation of the most important synthesis methods. Further on, one searches for a correspondence between the synthesis parameters and the data obtained through the analysis. The non-stationary nature of sound signals necessitates the consideration and the adaptation of analysis methods like time-frequency representations. The parameters resulting from the analysis can directly feed an additive synthesis model. An application to flute sounds corresponding to this kind of modeling is presented. Models simulating the wave propagation in the medium are further designed to give more importance to the physical characteristics of the sound generating system. Stretched strings and tubes were here considered. By comparing the solutions of the movement equations and the response of the so-called waveguide system, one constructs an estimation method for synthesis parameters. Dispersive and dissipative effects due to the medium in which the waves propagate are then taken into account. For sustained sounds the source and the resonator have been separated by deconvolution. By using an adaptive filtering method, the source signal is decomposed in two contributions: a deterministic component and a stochastic component. The modeling of the deterministic part, whose behavior generally is non-linear, necessitates the use of global synthesis methods like waveshaping, and perceptive criteria such as the Tristimulus criterion. The stochastic component is modeled taking into account the probability density function and the power spectral density of the process. An example of real-time control of a flute model is presented.
A flute equipped with sensors is used as an interface to control the proposed model. Possibilities of intimate sound manipulations obtained by acting on the parameters of the model are discussed.
... They are, however, not adapted to precise signal control, since slight parameter changes induce radical signal transformations. Other related methods such as waveshaping techniques (Arfib 1979; Le Brun 1979) have also been developed. ...
Book
Full-text available
Different trends and perspectives on sound synthesis control issues within a cognitive neuroscience framework are addressed in this article. Two approaches for sound synthesis, based on the modelling of physical sources and on the modelling of perceptual effects involving the identification of invariant sound morphologies (linked to sound semiotics), are presented. Depending on the chosen approach, we assume that the resulting synthesis models can fall under either one of the theoretical frameworks inspired by the representational-computational or enactive paradigms. In particular, a change of viewpoint on the epistemological position of the end-user from a third to a first person inherently involves different conceptualizations of the interaction between the listener and the sounding object. This differentiation also influences the design of the control strategy enabling an expert or an intuitive sound manipulation. Finally, as a perspective to this survey, explicit and implicit brain-computer interfaces (BCI) are described with respect to the previous theoretical frameworks, and a semiotic-based BCI aiming at increasing the intuitiveness of synthesis control processes is envisaged. These interfaces may open up new applications adapted to either handicapped or healthy subjects.
... In music research, the spectral envelope is often created by a non-linear function, such as frequency modulation (FM) [Chowning 1973], and a great number of similar techniques [Arfib 1978], [le Brun 1979], [Mitsuhashi 1982], [de Poli 1984], which, while generating complex spectra with low processor cost, generally lacked both analysis techniques and intuitive control. Many attempts have been made to match the parameters of a processor-effective algorithm, such as FM, to the parameters of an acoustic sound. ...
... The simplest digital implementations of guitar distortion use a static nonlinearity, borrowing from classical waveshaping synthesis techniques (Arfib, 1979; Le Brun, 1979). The static nonlinearity is usually a lookup table, or a polynomial (e.g., spline fit) of an arbitrary function that saturates and clips. ...
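A minimal version of the lookup-table approach described in this snippet (the tanh curve and the table size are arbitrary choices for illustration):

```python
import numpy as np

# Tabulate an arbitrary saturating curve once, then waveshape by table lookup.
grid = np.linspace(-1.0, 1.0, 4097)
table = np.tanh(3.0*grid)                  # any clipping/saturating function

def waveshape(x):
    """Static nonlinearity via a linearly interpolated lookup table."""
    return np.interp(np.clip(x, -1.0, 1.0), grid, table)

x = 0.8*np.sin(2*np.pi*220*np.arange(1024)/44100)
y = waveshape(x)                           # distorted signal
```

The table is computed once, so the per-sample cost is a clip plus one interpolated lookup, regardless of how expensive the underlying curve is to evaluate.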
... One may nevertheless cite the nonlinear distortion synthesis technique, also called waveshaping synthesis [Arfib, 1979], [LeBrun, 1979], [Roads, 1979]. In this type of synthesis, an initial signal is processed by a filter with a nonlinear transfer function, which makes it possible to produce a great variety of timbres, depending on variations of the few parameters of this transfer function. ...
Article
In this article we describe our ongoing research and development efforts towards integrating the control of sound spatialisation in computer-aided composition. Most commonly, the process of sound spatialisation is separated from the world of symbolic computation. We propose a model in which spatial sound rendering is regarded as a subset of sound synthesis, and spatial parameters are treated as abstract musical materials within a global compositional framework. The library OMPrisma is presented, which implements a generic system for the control of spatial sound synthesis in the computer-aided composition environment OpenMusic.
... A storm, for example, may need to wax and wane through rage and calm as the story unfolds. A range of techniques from physical modelling (Cook, 1997; Smith, 1992), acoustic modelling (Arfib, 1979; Horner, Beauchamp, & Haken, 1993; Serra, 1997; Wyse, 2004), and sample-based techniques can be used to provide flexible, interactive, and when appropriate, realistic sounds under the real-time control of a storyteller. ...
Article
Full-text available
The traditional practice of oral storytelling has particular characteristics that make it amenable to extension with interactive electroacoustic sound. Recent developments in mobile device and sound generation technologies also lend themselves to the particular practices of the traditional art form. This paper establishes a context for interactive sound design in a domain that has been little explored, in order to create an agenda for future research. The goal is to identify the opportunities and constraints for sound particularly suited to live storytelling, and to identify criteria for evaluating interaction designs. The storytelling domain addressed includes not only particular instances of telling, but also the variability of stories between tellings and tellers, as well as the mechanisms by which stories are passed between tellers. The outcome of the research will be a computer-based platform providing storytellers with the ability to create auditory scenes, sonic elements, and vocal transformations that are controllable in real time in order to support the telling, retelling, and sharing of stories.
... By changing the scaling, this function can be used to alter the number of higher harmonics. This approach is similar to waveshaping synthesis (Arfib 1979; Le Brun 1979). ...
... The use of nonlinear distortion to generate complex sounds has been widely studied within the context of digital synthesis. Well-known methods include the use of nonlinear waveshaping functions, such as Chebyshev polynomials, to expand the spectrum of simple sinusoids [6][7][8][9], and frequency modulation (FM) synthesis [10]. Other methods include modified FM synthesis [11], bitwise logical modulation and vector phaseshaping synthesis [12,13]. ...
Conference Paper
Full-text available
An antialiased digital model of the wavefolding circuit inside the Buchla 259 Complex Waveform Generator is presented. Wave-folding is a type of nonlinear waveshaping used to generate complex harmonically-rich sounds from simple periodic waveforms. Unlike other analog wavefolder designs, Buchla's design features five op-amp-based folding stages arranged in parallel alongside a direct signal path. The nonlinear behavior of the system is accurately modeled in the digital domain using memoryless mappings of the input-output voltage relationships inside the circuit. We pay special attention to suppressing the aliasing introduced by the non-linear frequency-expanding behavior of the wavefolder. For this, we propose using the bandlimited ramp (BLAMP) method with eight times oversampling. Results obtained are validated against SPICE simulations and a highly oversampled digital model. The proposed virtual analog wavefolder retains the salient features of the original circuit and is applicable to digital sound synthesis.
... Figure 4: Waveshaper proposed by Araya and Suyama [1]. Static waveshaping is the simplest method of obtaining nonlinear distortion, and is considered a classic digital sound synthesis technique [2], [20]. For any waveshaper, X is the set containing the samples to be processed, which can be extracted from the digitized signal of the musical instrument, x_n is the value of the n-th sample, with x_n ∈ X, and f(x) is the output of the waveshaping function. ...
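The waveshaper attributed to Araya and Suyama in this snippet is commonly given as f(x) = (3x/2)(1 - x^2/3), a cubic soft clipper that maps [-1, 1] onto itself; a sketch under that assumption:

```python
import numpy as np

def araya_suyama(x):
    """Cubic soft clipper attributed to Araya & Suyama; expects x in [-1, 1]."""
    return 1.5*x*(1.0 - x*x/3.0)

print(araya_suyama(np.array([-1.0, 0.0, 0.5, 1.0])))
# The endpoints map to -1 and 1, and the slope at 0 is 1.5 (a mild input gain).
```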
Article
Full-text available
This work reviews and simulates some of the methods for the computational emulation of electric guitar distortion effects and of tube amplifiers that have characterized the timbres of this instrument over the last 50 years. Recently, with the spread of digital signal processing, these distortions have been reproduced in embedded software or as plug-ins in studio software. Two approaches are basically used to simulate distortion: black box and white box. In the first, static waveshapers simplify the system through nonlinear equations that approximate the device's behavior. Some of the equations found in the literature are simulated in this work, together with the respective time- and frequency-domain responses of the nonlinear systems. In the white-box approach, the circuit parameters are taken into account and the modelling can be carried out with Wave Digital Filters or with systems of ordinary differential equations in state-space representation, solved by numerical methods. White-box simulations give more accurate results but demand greater computational resources, so a compromise between accuracy and efficiency is needed for real-time simulation.
... In this case the sound is produced by an algorithm obtained using nonlinear distortion synthesis. Also known as "waveshaping synthesis", its principle rests on modulating a sinusoidal signal, but, in this case, by a nonlinear function acting as an index [Arfib 1979], [Beauchamp 1979]. ...
Thesis
An "inverse problem", in its general sense, consists of "inverting" the cause-and-effect relationship. The point is not to produce a "cause" phenomenon from an "effect" phenomenon, but rather to attempt to define a "cause" phenomenon of which an observed effect would be the consequence. In the context of the CORDIS-ANIMA physical modelling and simulation formalism, and more particularly within the sound creation and music composition interface that implements it, GENESIS, created by the ACROE-ICA laboratory, a problem of this nature can be identified: given a sound phenomenon, what physical model should be built to obtain it? This question is fundamental to the creative process engaged by the use of such tools. Indeed, being able to describe and design the process that generates a previously defined sound or musical event is a necessity inherent to the act of musical creation. Conversely, having the means to analyse and decompose the chain of production of the sound phenomenon makes it possible to envisage, through representation, direct processing, and composition of the elements of this decomposition, the production of very rich, novel, expressive phenomena that are intimately coherent with the natural sounds on which perceptual and cognitive experience is built. To address this problem, we had to formulate and study two of its fundamental underlying aspects. The first concerns the very description of the final result, the sound phenomenon. Since this description can be of several kinds and is often difficult in objective and quantitative terms, our approach first consisted in reducing the problem to the notions of spectral content, or "modal structure", defined by a signal-type phenomenological approach.
The second concerns the functional and parametric nature of the models built within the CORDIS-ANIMA paradigm. Being, in essence, a metaphor of the instrumental context, every model must be conceived as the interaction of an "instrument/instrumentalist" pair. From these specifications we were able to define ONE inverse problem, whose resolution required the development of tools for interpreting phenomenological data into parametric data. This thesis work finally led to the implementation of these new tools within the GENESIS creation software itself, as well as in its accompanying didactic environment. The resulting models meet criteria of coherence and clarity, and are primarily intended to be reintegrated into the creative process. They are not an end in themselves, but a support offered to the user to complete his approach. In conclusion, we detail directions that could be followed to extend or possibly reformulate this problem.
... On the contrary, using a different kind of approach, one can transform the generated signal. The most important ways of doing that are filtering (subtractive synthesis), waveshaping [2,3] and modulation (multiplicative synthesis and frequency modulation [4]). The last one is particularly efficient, since the variation of a few parameters influences the general characteristics of the produced spectrum, controlling both amplitude and harmonic relationships among the various partials. ...
Article
Full-text available
The evolution of microelectronics and computer science allows, today, the realization of digital systems which produce sound at low cost. In this paper a synthesis technique particularly efficient in the realization of sound is presented. It combines the characteristics of non-linear synthesis techniques with those which build the waveform in time. Implementation modalities, especially regarding envelope generation, are discussed and criteria to choose the sound parameters are given.
... In most cases, the nonlinear behavior is correlated with the excitation, and we assume this to be the case here. To model these nonlinearities, we used a global synthesis method, namely the waveshaping method (LeBrun 1979; Arfib 1979), because it provides a useful means to generate complex spectra from easy calculations by performing only a small number of operations. This method consists of distorting a sinusoidal function with an amplitude function I(t) (called the index of distortion) with a nonlinear function g. ...
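The index-of-distortion scheme described in this snippet, y(t) = g(I(t)·sin(wt)), can be sketched directly (the Chebyshev-style g and the linear index ramp are illustrative choices, not from the cited text):

```python
import numpy as np

def waveshape_indexed(g, I, f0=200.0, fs=8000.0):
    """Distort a sinusoid scaled by a time-varying index of distortion I[n]."""
    n = np.arange(len(I))
    return g(I*np.sin(2*np.pi*f0*n/fs))

g = lambda x: 4*x**3 - 3*x           # Chebyshev T3 as the nonlinear function
I = np.linspace(0.0, 1.0, 8000)      # ramping index: the spectrum evolves in time
y = waveshape_indexed(g, I)
# At I = 0 the output is silent; for small I the fundamental dominates
# (g(x) ~ -3x); at I = 1, T3(sin t) = -sin(3t) is a pure 3rd harmonic.
```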
Article
Full-text available
An abstract is not available.
... The tracker seems to have problems appropriately resolving rapid movements of the target, which translates into either (a) a flat response in parts where the rest of the trackers show clear maxima and minima (see for instance at time 0.05 s) or (b) simply their absence. The sudden local flatness in the signal produces nonlinearities (distortions) that ultimately show up in the spectrum on the right as harmonics [66]. They appear, for example, as peaks at 2f_1 = 180 Hz or 2f_2 = 860 Hz. ...
Thesis
Full-text available
Vibration analysis, also known as modal analysis, is the field of measuring and analysing the dynamic response of structures and/or fluids during vibration excitation. Traditional modal analysis techniques are either expensive, require a complex set-up, or both. Lower-cost, non-contact measuring devices like high-speed cameras in combination with computer vision techniques have been shown to be a valid alternative approach for measuring vibrations in structures. In this work, a number of both classic and state-of-the-art computer vision tracking algorithms are compared and their ability to track the oscillatory movement that characterises vibrations is tested on high-speed videos under different experimental conditions. It is shown that vibrations covering a few pixels in a video can be easily resolved by most trackers, but quantisation problems due to limited spatial resolution start to appear when dealing with vibration amplitudes close to the pixel. In such cases, it seems that adding a fairly large amount of dither to the frames can help to mitigate those problems. At the sub-pixel level most of the algorithms considered start to fail, with the exception of the Median Flow, an optical flow-based tracker that consistently shows its robustness and high spatial resolution performance. An interactive tool implementing this tracker has been created for the purposes of modal analysis. The tool can load a video and perform tracking on specific user-selected regions, showing the outcome in both time and frequency domains. It also permits the creation of frequency color maps whose goal is twofold: to reveal which parts of a video are more suitable for modal analysis and also which ones vibrate more energetically. A few real-life examples are presented in order to show the capabilities of the tool and demonstrate its performance.
... Such waveforms can be created from theoretical bases, or else generated from recordings of real sounds [11][12][13][14][15]. The feed-forward techniques are instead based on the a posteriori alteration of existing waveforms produced by oscillators: the input waveforms may have a high level of complexity and subsequently be simplified, or may be simple and then combined with each other or filtered [16][17][18][19]. It must be emphasised that the waveform cannot fully characterise the timbre of a complex sound; its time envelope turns out to be of paramount relevance in capturing the sound's properties (differentiating percussive from non-percussive sounds, with all the intermediate gradations). ...
Article
The present work deals with the synthesis of sounds produced by brass instruments through direct physical modelling. The purpose is the development of an integrated methodology for evaluating the response of a wind instrument while taking into account the properties of the surrounding environment. The identification of the frequency response of the resonator and the performing environment is obtained by means of a Boundary Integral Equation approach. The formulation produces the matrix transfer function between the inflow at the input section of the instrument bore and the signal evaluated at an arbitrary location, and can account for the response of any boundary and object present in the surroundings. The reflection function obtained from the above model is coupled to a simplified valve model used to represent the behaviour of the excitation mechanism. The algorithm has been demonstrated to be accurate and efficient in offline calculation, and the observed performance suggests the possibility of real-time implementations.
Article
Considering the process of artistic creation to be deeply linked with technology, we propose a conceptual framework which gives meaning to the concept of artistic creation tools in the context of the computer. Starting from an initial, technology-free situation, we introduce the notion of the musical instrument as the first appearance of technology in music. Then, introducing step by step the important technological mutations, we characterize the creative process that is supported by each stage and the transaction or 'trade-off' which accompanies each change. In light of this analysis we discuss interactive multisensory simulation of physical objects, including gestural real-time interactions, as we have been developing for years in our laboratory. An important aim of this article is to dispel some harmful confusion which comes from too systematically classifying certain situations as instrumental under the pretext that they appeal to gesture and real-time sound production or processing. We present, through the concepts of 'supra-instrumental gesture and interaction', several situations that, in non-real time (but also possibly in real time), and even in the total absence of actual gesture (but also achievable with actual gestures), are more gestural and more instrumental than certain gesticulations effected with sophisticated input devices and real-time digital sound processing.
Article
Full-text available
Music V is one of the first programs dedicated to sound creation. During the seventies, computers were rather slow, and the possibility of performing real-time sound synthesis on the machines available at that time was only a dream. However, many ideas were already being developed, based on concepts such as curves, musical intention, mapping, and data reduction, which the latest computers are now able to handle. Thanks to the latest developments in computer music software and hardware, it is now possible to recreate these sounds in real time and to interpret them. The idea of interpreting computer music (including sounds stored in archives, where appropriate) via gestural control devices gives rise in this article to some reflections on the meaning of "interpretation".
Article
It may seem surprising to find a presentation on the computer and music at a scientific meeting dedicated to new developments in the field of time-frequency methods. But music is both a demanding and a rewarding field: it has benefited from science and technology, but it has also stimulated several scientific and technical developments. In 1957, Max Mathews pioneered digital recording and synthesis of sound at Bell Laboratories: his primary interest was the development of novel musical instruments. The exploration of the virtually unlimited resources of synthesis and processing has involved research that has completely transformed our understanding of musical sound and how it is perceived. It is not surprising that the wavelet transform was first applied to sound signals in a computer music team, namely our "Equipe d'Informatique musicale"; to implement this application efficiently, Richard Kronland-Martinet took advantage of the SYTER audio processor, developed specially for music.
Article
We describe the development and implementation of a family of non-linear filters in which the non-linear term is given a variable recursive delay, in the context of a search for what we term an arithmetical instrument: an instrument whose design is founded on purely numerical models. Such filters are inherently unstable; we describe how they may be controlled, not only by the careful selection and constraining of parameters, but also by appropriate and idiomatic performance technique. We draw comparisons between these filters and the behaviour of acoustic instruments, which typically exhibit what we call excitable regions of sonic activity.
Article
Full-text available
This article surveys the utilization of digital techniques for musical sound production. Some of these are the digital equivalents of techniques employed in analog synthesizers and in other fields of electrical engineering. Other techniques have been developed specifically for digital music devices and are peculiar to them. The fundamentals of the main digital synthesis techniques are introduced. Mathematical developments have been kept to a minimum in the exposition and can be found in the papers listed in the references. To simplify the discussion, whenever possible, the techniques are presented with reference to continuous signals. 50 refs.
Conference Paper
Full-text available
This paper introduces the Vector Phaseshaping (VPS) synthesis technique, which extends the classic Phase Distortion method by providing flexible means to distort the phase of a sinusoidal oscillator. This is achieved by describing the phase distortion function using one or more breakpoint vectors, which are then manipulated in two dimensions to produce waveshape modulation at control and audio rates. The synthesis parameters and their effects are explained, and the spectral description of the method is derived. Certain synthesis parameter combinations result in audible aliasing, which can be reduced with a novel aliasing suppression algorithm described in the paper. The extension is capable of producing a variety of interesting harmonic and inharmonic spectra, including, for instance, formant peaks, while the two-dimensional form of the control parameters is expressive and well suited for interactive applications.
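The single-breakpoint case can be sketched as follows. The function name `vps_osc` and its piecewise-linear bending of the phase through a breakpoint (d, v) are my reading of the classic phase-distortion scheme the paper extends, not the paper's exact formulation:

```python
import numpy as np

def vps_osc(f0, d, v, fs=8000, n=8000):
    """One-breakpoint phase-distortion oscillator (sketch).

    The normalized phase ramp phi in [0, 1) is bent through the
    breakpoint (d, v): the first fraction d of the cycle is mapped
    linearly onto [0, v], the remainder onto [v, 1]. The bent phase
    then reads a cosine; moving (d, v) in two dimensions modulates
    the waveshape.
    """
    phi = (f0 * np.arange(n) / fs) % 1.0
    bent = np.where(phi < d,
                    v * phi / d,
                    v + (1 - v) * (phi - d) / (1 - d))
    return np.cos(2 * np.pi * bent)

# d = v = 0.5 leaves the phase undistorted, giving a plain cosine;
# other settings bend the phase and enrich the spectrum.
y = vps_osc(f0=100, d=0.5, v=0.5)
```

Sweeping d at control rate while keeping v fixed reproduces the familiar phase-distortion timbre sweep; the paper's contribution is treating (d, v) as a freely movable vector and adding more breakpoints.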
Article
Full-text available
The wavelet transform is a recent method of signal analysis and synthesis. It decomposes an arbitrary function into a two-parameter family of elementary wavelets obtained by shifts in the time variable and by dilations (or compressions) that act on both the time and frequency variables. We can perform a number of sound modifications by altering the wavelet transform coefficients. Such possibilities include slowing down or speeding up the sound without pitch transposition, time-varying filtering, and cross-synthesis between two sounds by resynthesis with the modulus information of one sound and the phase information of another.
Article
A common approach in the development of digital filters is to begin with an existing analog filter and produce an equivalent computer program to realize it. This may involve, at the extreme, the detailed analysis of circuit behavior, or it may stem from a higher-level approach that looks at block diagrams and s-domain transfer functions. In this article, we first take the latter approach to develop a set of linear filters from the well-known state variable filter. From this we obtain a first result, which is a linear digital implementation of the Steiner design, comprising separate inputs for different frequency responses and a single output summing the responses. Turning back to the state variable design, we show that to develop a nonlinear version, an analog circuit realization can be used to identify positions in which to insert nonlinear waveshapers. This gives us our second result, a nonlinear digital state variable filter. From this analog-derived design, we then propose modifications that go beyond the original filter, developing as a final result a structure that could be classed as a hybrid of filter and digital waveshaper. As part of this process, we ask the question of whether an approach that takes inspiration from the analog world, while being decoupled from it, may be more profitable in the long run than an obsession with detailed circuit modeling.
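The digital state variable filter the article builds on can be illustrated with a common Chamberlin-style structure. This is a generic textbook form with an optional tanh saturator in the integrator feedback; the coefficient formula and saturator placement are assumptions, not the article's exact topology:

```python
import math

def svf(x, f, q, fs=48000, saturate=False):
    """Chamberlin-style digital state variable filter (sketch).

    Returns (lowpass, bandpass, highpass) output lists computed from
    the two integrator states. With saturate=True a tanh soft clipper
    is inserted after each integrator, in the spirit of the nonlinear
    variant the article develops."""
    g = 2 * math.sin(math.pi * f / fs)   # frequency coefficient
    low = band = 0.0
    lp, bp, hp = [], [], []
    for s in x:
        high = s - low - q * band        # highpass node
        band += g * high                 # first integrator -> bandpass
        low += g * band                  # second integrator -> lowpass
        if saturate:
            band = math.tanh(band)
            low = math.tanh(low)
        lp.append(low); bp.append(band); hp.append(high)
    return lp, bp, hp

# A DC input settles to 1 at the lowpass output and 0 at the highpass output.
lp, bp, hp = svf([1.0] * 4000, f=1000, q=1.0)
```

The single structure yielding all three responses from separate taps is exactly the property that makes the state variable design a convenient starting point for the Steiner-style multi-input version the article derives.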
Article
This is the second part of a two-part paper that presents a procedural approach to derive nonlinear filters from schematics of audio circuits for the purpose of digitally emulating musical effects circuits in real-time. This work presents the results of applying this physics-based technique to two audio preamplifier circuits. The approach extends a thread of research that uses variable transformation and offline solution of the global nonlinear system. The solution is approximated with multidimensional linear interpolation during runtime to avoid uncertainties in convergence. The methods are evaluated here experimentally against a reference SPICE circuit simulation. The circuits studied here are the bipolar junction transistor (BJT) common emitter amplifier, and the triode preamplifier. The results suggest the use of function approximation to represent the solved system nonlinearity of the K-method and invite future work along these lines.
Article
Full-text available
Replicating musical instruments is a classic problem in computer music. A systematic collection of instrument designs for each of the main synthesis methods has long been the El Dorado of the computer music community. Here is what James Moorer, the pioneering computer music researcher at Stanford University and later director of the audio project at Lucasfilm, had to say about it (Roads 1982):
Chapter
In memory of Jean-Claude Risset, we revisit two of his contributions to sound synthesis, namely waveshaping and feedback modulation synthesis, as starting points for connecting a plethora of oscillatory synthesis methods through iterated phase functions, motivated by the theory of circle maps, which describes any iterated function from the circle to itself. Circle maps have played an important role in developing the theory of dynamical systems with respect to such phenomena as mode-locking, parametric study of stability, and transitions to chaotic regimes. This formulation brings a wide range of oscillatory methods under one functional description and clarifies their relationships, for example showing that sine circle maps and feedback FM are near-identical synthesis methods.
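The sine circle map at the heart of this formulation iterates a phase on the unit circle; reading that phase through a sine gives a feedback-FM-style oscillator. A minimal sketch (the parameter names `omega` and `k` follow the standard circle-map convention, not necessarily the chapter's notation):

```python
import math

def circle_map_osc(omega, k, n=1000, theta0=0.0):
    """Oscillator driven by the sine circle map

        theta[n+1] = theta[n] + omega + (k / (2*pi)) * sin(2*pi*theta[n])   (mod 1)

    omega sets the base frequency as a fraction of the sample rate,
    k the amount of phase feedback. k = 0 gives a plain sine; raising
    k moves through mode-locked and eventually chaotic regimes."""
    theta = theta0
    out = []
    for _ in range(n):
        out.append(math.sin(2 * math.pi * theta))
        theta = (theta + omega
                 + (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)) % 1.0
    return out

y = circle_map_osc(omega=0.1, k=0.0)   # k = 0: a sine at one tenth of the sample rate
```

Replacing the sin term with a feedback of the oscillator's own output recovers feedback FM, which is why the two methods coincide under this description.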
Conference Paper
Full-text available
Nonlinear systems identification is used to synthesize black-box models of nonlinear audio effects and as such is a widespread topic of interest within the audio industry. As a variety of implementation algorithms provide a myriad of approaches, questions arise whether there are major functional differences between methods and implementations. This paper presents a novel method for the black-box measurement of distortion characteristic curves and an analysis of the popular “lookup table” implementation of nonlinear effects. Pros and cons of the techniques are examined from a signal processing perspective and the basic limitations and efficiencies of the approaches are discussed.
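A lookup-table nonlinearity of the kind the paper analyses can be sketched in a few lines. The helper names `make_table` and `apply_table` are mine; table size and linear interpolation are the two efficiency/accuracy trade-offs under study:

```python
import numpy as np

def make_table(curve, size=4096):
    """Sample a distortion characteristic curve into a table over [-1, 1]."""
    grid = np.linspace(-1.0, 1.0, size)
    return grid, curve(grid)

def apply_table(x, grid, table):
    """Apply the tabulated nonlinearity with linear interpolation,
    clipping the input to the table's domain. This is the common
    'lookup table' implementation of a memoryless distortion."""
    return np.interp(np.clip(x, -1.0, 1.0), grid, table)

# For a smooth curve like tanh, a 4096-point table with linear
# interpolation reproduces the exact nonlinearity almost perfectly.
grid, table = make_table(np.tanh)
x = np.linspace(-1, 1, 1000)
err = np.max(np.abs(apply_table(x, grid, table) - np.tanh(x)))
```

The interpolation error scales with the square of the table spacing, so halving the table size roughly quadruples the worst-case error, one of the basic limitations the paper examines.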
Chapter
The ASCII character set (American Standard Code for Information Interchange) is used for encoding text. ASCII is defined only for seven bits, i.e. from 0 to 127 ($7F). The special characters above that, on the right half of the table, vary considerably.
Conference Paper
Full-text available
Nonlinear techniques are used increasingly in musical sound synthesis. Waveshaping allows one to produce periodic sounds whose spectra depend on the input amplitude but not on the input frequency, as the process is memoryless. This variation can be achieved only by acting on the control parameters. When an automatic dependence is required, a nonlinear transformation with memory has to be employed. An efficient method for realizing it is presented in this work. The output signal is composed of the sum of several polynomial waveshapers, each applied to progressively delayed values of the input sinusoid. The resulting static and dynamic spectra are analyzed in depth. Design criteria and three different implementations are discussed from both a theoretical and a practical point of view.
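The polynomial waveshaping building block this method starts from rests on the Chebyshev identity T_n(cos θ) = cos nθ: driving the n-th Chebyshev polynomial with a full-scale cosine yields exactly the n-th harmonic. A minimal sketch (all numbers illustrative):

```python
import numpy as np

n = 2048
x = np.cos(2 * np.pi * 5 * np.arange(n) / n)   # 5 cycles per analysis frame

# Chebyshev polynomial T_3; shaping a unit cosine with it produces
# cos(3*theta), i.e. purely the 3rd harmonic. Polynomial shapers are
# therefore designed as weighted sums of Chebyshev polynomials.
T3 = lambda u: 4 * u**3 - 3 * u
y = T3(x)

spectrum = np.abs(np.fft.rfft(y))
print(int(np.argmax(spectrum)))                # 15, i.e. 3 times the input bin 5
```

Applying such shapers to progressively delayed copies of the input, as the paper proposes, is what introduces the frequency dependence that a single memoryless shaper cannot provide.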
Article
In the digital modeling of analog synthesizer filters using standard digital elements, a saturation element is usually included in the structure. This paper discusses some of the options available for soft-clipping saturation elements. It then presents three different filter configurations that include saturation. A review of perceptual techniques for assessing distortion is made in order to select a suitable evaluation criterion; timbral warmth was found to be the most relevant. Experiments are carried out to apply this criterion to the proposed structures. The conclusion suggests which structure might be most interesting musically.
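A representative soft-clipping element of the kind surveyed is the hyperbolic tangent; it is used here only as one common option, not as the paper's specific recommendation:

```python
import math

def soft_clip(x, drive=1.0):
    """tanh soft clipper: near-linear for small inputs, saturating
    smoothly toward +/-1 for large ones. `drive` scales the input
    and thus controls how early saturation sets in."""
    return math.tanh(drive * x)

# Small signals pass nearly unchanged; large ones are limited to +/-1.
print(round(soft_clip(0.1), 3), round(soft_clip(10.0), 3))   # 0.1 1.0
```

The smoothness of the transfer curve (continuous derivatives, unlike a hard clip) is what keeps the added harmonics low-order, which relates directly to the perceived "warmth" the paper uses as its criterion.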
Chapter
Contents: Introduction; Sound-feature extraction; Mapping sound features to control parameters; Examples of adaptive DAFX; Conclusions; References.
Article
A new approach to audio synthesis is proposed, defining a kind of computational unit that is simple to implement in hardware and easily controlled by means of a small set of parameters. The approach is flexible enough to achieve a wide range of different synthesis techniques and also allows dynamic spectra. As a further advantage, the proposed computational unit avoids the use of a digital multiplier and is suitable for future integration on a single integrated circuit. The method can be used for the following audio processing tasks: synthesis of arbitrary time functions; frequency and phase modulation (FM and PM, respectively); formant synthesis; ring modulation; and granular synthesis. The technique used for the realization is described: it revolves around two sinusoidal oscillators whose phases are correlated. Their computation avoids the use of multiplication, making it very efficient. Moreover, it is shown how proper correlation combines the sinusoidal oscillators of the pair to form complex sounds. Functions are described for controlling the phases of the two oscillators using linear interpolation and nonuniformly spaced breakpoint functions. It is also shown how the proposed unit can implement the various synthesis techniques, and finally the hardware implementation is described.
Article
Full-text available
The coexistence of different formats for cinema (24 frames/s) and video (25 frames/s) involves speeding up or slowing down the soundtrack when converting from one format to another. This causes a temporal modification of the sound signal, and therefore a spectral modification with a change in timbre. Audiovisual post-production studios have to compensate for this effect by an appropriate sound transformation. The aim of this work is to propose to the audiovisual industry a system which allows the timbre modification caused by a change in playback rate to be counteracted. This system consists of a processing algorithm and a machine on which it is implemented. The algorithm is designed to respect sound quality and multichannel compatibility constraints. The machine, named HARMO, is designed for this purpose by the company GENESIS. It is based on digital signal processors and has to respect real-time constraints. The commercial aspect of the project is linked to economic and timing constraints. A state of the art based on a quasi-exhaustive bibliography leads to an original classification of existing time-stretching and pitch-shifting methods. Well-known time-domain and frequency-domain methods are studied, and time-frequency methods are introduced. This classification allows the creation of several innovative methods: two time-frequency methods using an analysis technique adapted to the human ear; two coupled methods using the advantages of both time- and frequency-domain methods; and a method which proposes an improvement of time-domain methods. The algorithms are evaluated using a bank of test sounds specially designed to highlight characteristic artifacts. The time-domain approach is selected and optimized using criteria based on normalized autocorrelation and the detection of transients. This algorithm is integrated into software designed for multichannel real-time operation and implemented on the HARMO hardware.
Article
A method of sound synthesis, called linear sweep synthesis, is presented; it is well known as chirp filtering in many fields such as radar, optics, and Fourier transform theory. The basic idea is explained using the example of a sinusoidal wave, with the process shown in both the time domain and the frequency domain. Figures are obtained showing that the shapes are very similar, except that the spectrum is curved because of the logarithmic plot. The musical application of this process lies in the possibility of designing any spectral shape F(w) and realizing it through an envelope function r(t). Some properties of this method, called linear sweep, or linear sweep FM, are described, its mathematical foundations are furnished, and examples of spectral shaping and Moebius sounds are given.
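The chirp signal underlying linear sweep synthesis is a sinusoid whose instantaneous frequency rises linearly. A minimal sketch (the sample rate, sweep range, and duration are illustrative, not values from the article):

```python
import numpy as np

fs, dur = 8000, 1.0
t = np.arange(int(fs * dur)) / fs
f0, f1 = 100.0, 900.0

# Linear chirp: integrating the linearly rising instantaneous
# frequency f(t) = f0 + (f1 - f0) * t / dur gives the phase below.
phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / dur * t**2)
chirp = np.sin(phase)

# Check: the phase derivative recovers the instantaneous frequency,
# rising from about f0 at the start to about f1 at the end.
inst_f = np.diff(phase) / (2 * np.pi) * fs
```

Shaping such a sweep with an amplitude envelope r(t) deposits energy at the frequencies the sweep traverses while r(t) is large, which is the mechanism by which the envelope function realizes a designed spectral shape F(w).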
Article
The links between mathematics and music are ancient and profound. The numerology of musical intervals is an important part of the theory of music; it has also played a significant scientific role. Musical notation seems to have inspired the use of Cartesian coordinates. But the intervention of numbers within the human senses should not be taken for granted. In Antiquity, while the Pythagorean conception viewed harmony as ruled by numbers, Aristoxenus objected that the justification of music lay in the ear of the listener rather than in some mathematical reason. Indeed, the mathematical rendering of a score can yield a mechanical and unmusical performance. With the advent of the computer, it has become possible to produce sounds by calculating numbers. In 1957, Max Mathews could record sounds as strings of numbers, and also synthesize musical sounds with the help of a computer calculating numbers that specify sound waves. Beyond composing with sounds, synthesis permits composing the sound itself, opening new resources for musicians. Digital sound has been popularized by compact discs, synthesizers, and samplers, and also by the activity of institutions such as IRCAM. Mathematics is the pervasive tool of this new craft of musical sound, which makes it possible to imitate acoustic instruments; to demonstrate auditory illusions and paradoxes; to create original textures and novel sound material; and to set up new situations for real-time musical performance, thanks to the MIDI protocol for the numerical description of musical events. However, one must remember Aristoxenus' lesson and take into account the specificities of perception.
Article
In working with standard digital sound synthesis algorithms, certain methods permit one to derive from a particular timbre clearly related but distinct variants of that timbre. These methods involve the invention of new signal-generating techniques that can yield sounds similar to those already in use, and new signal-processing procedures that permit consistent transformations and alterations of sounds. Some of the methods involve additive synthesis, including one that requires a spectral analysis of a sound and the recomposition (registral redistribution) of its components. What follows is a fairly detailed presentation of these algorithms with enough 'how to' information to permit interested composers to begin immediately experimenting with them in order to find the kinds of transformations and derivations suitable for their own purposes.