Fourier Acoustics. Sound Radiation and Nearfield Acoustical Holography
Abstract
Intended for use as both a textbook and a reference, "Fourier Acoustics" develops the theory of sound radiation uniquely from the viewpoint of Fourier Analysis. This powerful perspective of sound radiation provides the reader with a comprehensive and practical understanding which will enable him or her to diagnose and solve sound and vibration problems in the 21st century. As a result of this perspective, "Fourier Acoustics" is able to present thoroughly and simply, for the first time in book form, the theory of nearfield acoustical holography, an important technique which has revolutionised the measurement of sound. Relying little on material outside the book, "Fourier Acoustics" will be invaluable as a graduate-level text as well as a reference for researchers in academia and industry. It covers the physics of wave propagation and sound vibration in homogeneous media; acoustics, such as the radiation of sound and radiation from vibrating surfaces; inverse problems, such as the theory of nearfield acoustical holography; and the mathematics of specialized functions, such as spherical harmonics.
... Nor can they be used directly as inputs to the NAH algorithms, [2][3][4] because all the methods for NAH implementation are proposed based on the assumption that the sensor array is placed in a source-free region.1 To realize the visualization of direct radiation from a source in the presence of a reflecting boundary, methods for half-space acoustic field reconstruction based on different NAH implementations are proposed. By replacing the free-space Green's function with a half-space Green's function in the formulation of the equivalent source method (ESM), the reconstruction of the acoustic quantities on the source surface in a half space bounded by an impedance plane is realized. ...
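The image-source construction behind such half-space Green's functions can be sketched numerically. The snippet below is a minimal illustration, not taken from the cited papers: a free-field monopole plus its image mirrored across the plane z = 0, weighted by a constant reflection coefficient R (R = +1 models a rigid plane, R = −1 a pressure-release plane; a general impedance plane needs additional correction terms).

```python
# Minimal sketch (not from the cited papers): half-space Green's function built from a
# free-field monopole plus its image across the plane z = 0, weighted by a constant
# reflection coefficient R.
import numpy as np

def greens_free(r, k):
    """Free-space Green's function e^{ikr} / (4 pi r)."""
    return np.exp(1j * k * r) / (4.0 * np.pi * r)

def greens_halfspace(x, x_src, k, R=-1.0):
    """Half-space Green's function: source at x_src (z > 0), image mirrored in z = 0."""
    x, x_src = np.asarray(x, float), np.asarray(x_src, float)
    x_img = x_src * np.array([1.0, 1.0, -1.0])        # image source across z = 0
    return greens_free(np.linalg.norm(x - x_src), k) + R * greens_free(np.linalg.norm(x - x_img), k)

# With R = -1 the pressure vanishes on the boundary z = 0, as required for a
# pressure-release surface:
k = 2 * np.pi * 1000 / 1500.0                         # 1 kHz in water (c ~ 1500 m/s)
print(abs(greens_halfspace([0.3, 0.1, 0.0], [0.0, 0.0, 0.2], k, R=-1.0)))  # ~ 0
```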
... From Eq. (3), it was found that the incident spherical wave due to a monopole was reflected by the pressure-release boundary with a constant reflection coefficient −1. Since each term of the free-space spherical wave functions represented the acoustic radiation from a multipole or a set of multipoles, 1,5,12 which was constructed from distributions of monopoles, one can formulate the half-space spherical wave functions as ...
... where $\psi_j(\mathbf{x}, \mathbf{x}_{O1}; \omega)$ and $\psi_j(\mathbf{x}, \mathbf{x}_{O2}; \omega)$ were the j-th term of the free-space spherical wave functions, 11,12 representing the acoustic radiation from a multipole or a set of multipoles located at $\mathbf{x}_{O1}$, and the corresponding image sources located at $\mathbf{x}_{O2}$, respectively. The free-space spherical wave functions $\psi_j$ were solutions to the Helmholtz equation, expressible in spherical coordinates as 11

$\psi_j(\mathbf{x}; \omega) \equiv \psi_{nl}(r, \theta, \phi; \omega) = h_n^{(1)}(kr)\, Y_n^l(\theta, \phi),$   (5)

where $h_n^{(1)}(kr)$ were the spherical Hankel functions of the first kind, and $Y_n^l(\theta, \phi)$ were the spherical harmonics. The indices n, l and j in Eq. (5) were related by $j = n^2 + n + l + 1$, with n ranging from 0 to N and l from −n to n. ...
Near-field acoustical holography is a powerful tool for reconstructing the three-dimensional acoustic field radiated from a vibrating structure located in free space, but it is not applicable when the source is in a half space bounded by a reflecting boundary. This paper develops a method based on half-space spherical wave function expansion for reconstructing the acoustic field radiated directly from a source located near a pressure-release boundary. First, the series of half-space spherical wave basis functions satisfying the pressure-release boundary condition is formulated. Then the acoustic field in a half space is modeled using an expansion in this basis. The expansion coefficients are determined by solving an overdetermined linear system of equations, obtained by matching this expansion to the measured half-space acoustic pressures. The pressures radiated directly from the source can finally be reconstructed using the free-space spherical wave function expansion with the obtained expansion coefficients. Numerical simulation examples of a vibrating plate located in water near a pressure-release boundary are demonstrated to validate the proposed method. The effects of various parameters, such as the acoustic frequency, the distance between the source and boundary, and the orientation of the source surface, on the reconstruction accuracy are examined.
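The coefficient-estimation step described here can be sketched as an ordinary least-squares fit. In the hedged example below, the matrix names, sizes and noise level are illustrative assumptions, not values from the paper; the columns of Psi stand for half-space basis functions evaluated at the measurement points.

```python
# Hedged sketch of fitting expansion coefficients from an overdetermined system.
import numpy as np

rng = np.random.default_rng(0)
n_mics, n_terms = 64, 16                                    # more equations than unknowns
Psi = rng.standard_normal((n_mics, n_terms)) + 1j * rng.standard_normal((n_mics, n_terms))
c_true = rng.standard_normal(n_terms) + 1j * rng.standard_normal(n_terms)
p_meas = Psi @ c_true + 1e-3 * rng.standard_normal(n_mics)  # noisy "measurements"

c_hat, *_ = np.linalg.lstsq(Psi, p_meas, rcond=None)        # least-squares coefficients
print(np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true))  # small relative error
```

In practice such inverse steps are usually regularized (e.g. truncated SVD or Tikhonov), since the basis matrix can be ill-conditioned for realistic array geometries.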
... Therefore, a spatial sampling finer than the common 'sensor spacing of half a wavelength' rule is required to sample evanescent components while avoiding spatial aliasing. [5,6] In general, sampling sound fields at mid and high frequencies requires the use of dense arrays, which can be problematic. Dense arrays with a small transducer inter-spacing introduce scattering and diffraction effects that cannot be neglected (i.e., they are not acoustically transparent), effectively biasing the measurements. ...
... Acoustic holography makes it possible to fully characterize an acoustic field over an entire 3D domain by measuring the pressure on a single plane. [5] Let us consider the pressure field p(x, y, z) in the half space z > 0 where no sources are present. ...
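The excerpt above describes the planar (Fourier) holography setting. A minimal sketch of the associated k-space propagator is given below; the assumptions (sources confined to z < z0, a uniform sampling grid, no regularization) and the function and variable names are ours, not from the cited work.

```python
# Illustrative k-space propagation of a measured pressure plane p(x, y, z0) to z0 + dz.
import numpy as np

def propagate_plane(p_z0, dx, dz, k):
    """Propagate a 2-D pressure snapshot by dz >= 0, away from the sources."""
    ny, nx = p_z0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz = np.sqrt((k ** 2 - KX ** 2 - KY ** 2).astype(complex))  # imaginary kz -> evanescent decay
    return np.fft.ifft2(np.fft.fft2(p_z0) * np.exp(1j * kz * dz))
```

Back-propagation toward the source (dz < 0) amplifies the evanescent components, which is why NAH reconstructions require regularization.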
... The spatial sampling of acoustic fields using point-wise measurements (i.e., microphone arrays) is well understood. [5] As measurements based on the acousto-optic interaction are line integrals of the pressure (as indicated by Eq. 1), different sampling requirements apply. In this study we derive sampling requirements for a uniform scan of the sound field using parallel laser beams. ...
Near-field acoustic holography (NAH) is a very powerful and widely used technique for the study of complex acoustic radiators. NAH makes it possible to quickly understand how a complex source radiates into the medium. The technique is particularly suitable at low frequencies. At high frequencies, a dense transducer interspacing is required, and the measurement microphones can disturb the studied sound field when their size is comparable to the acoustic wavelength. In this study we examine the use of acousto-optic sensing in NAH. Acousto-optic sensing uses light beams as the sensing element, making it possible to acquire remote and non-invasive measurements without introducing extraneous objects in the vicinity of the source. The pressure, particle velocity and intensity fields, as well as the sound power radiated by a complex source, are determined from measurements in the near field with an optical interferometer. The presented results demonstrate the potential of optical sensing to non-intrusively characterize sound fields, particularly at high frequencies.
... The theory can be expressed in a mathematical model, the Kirchhoff-Helmholtz integral equation, which indicates that the sound field can be synthesised by secondary sources on the boundary. Consider the wave equation in two-dimensional space [46,47] where the sound pressure p(x, t) as a function of position x = (x, y) and time t satisfies ...
... We can obtain the wave equation in the frequency domain by applying a temporal Fourier transform to (2.1). This well-known equation is named the homogeneous Helmholtz equation [46,47]. ...
... where n denotes the unit normal vector. Then, the Green's theorem can be expressed as [46], ...
This study discusses two-dimensional spatial control using loudspeaker arrays. Spatial control is a technique for the spatial processing of sound waves in a confined space. Sound field reproduction, directivity control, and spatial active noise control are
applications of spatial control. In particular, sound field reproduction, which replicates the sound experience in space, is extremely important for 3D audio systems. Further development of this technique would allow 3D television, 3D communication systems, and virtual live concerts to be realised.
A variety of methods for spatial control using loudspeakers have been proposed. Basic array shapes, e.g., linear or circular loudspeaker arrays, have been the special focus of these studies. The optimised methods are applied to specific array geometries to maximise performance. However, it is difficult to apply these methods to general audio systems, whose performance differs from place to place. In this study, analytical methods for controlling complex array geometries are discussed.
The focus is on the acoustical properties of the loudspeaker array, which is the basic component of a spatial control system. This study suggests a novel approach to enhancing the spatial control technique: proposing new array geometries that have analytical solutions, in order to diversify the control. Although a multitude of array geometries could be taken into consideration, this study focuses on two array models as a first attempt at this approach. The conventional circular array is modified to obtain the two models: the multiple rigid circular arrays model, which combines several conventional arrays into one model, and the elliptical array model, which can be considered a flattened conventional array. Analytical control methods are proposed for both array models, either by treating them as centre-shifted arrays with multiple scattering effects, or by introducing a novel spatial control theory based on the elliptical coordinate system. The results reveal the validity of the proposed methods. The proposed arrays outperform conventional arrays in several situations, demonstrating the potential of the approach of proposing analytically controllable arrays. In addition, the influence of several array features on spatial control is investigated, giving further prospects for designing a high-performance array geometry.
... general solution of the HHE for the pressure inside of Λ is given through [Wil99] ...
... Williams denotes Eq. (2.4) as the solution to the interior domain problem. The exterior problem refers to scenarios in which all sound sources are located inside a cylindrical region V with radius r V around the origin and infinite length (see Figure 2.2), so that the solution [Wil99] ...
... The above solutions of the HHE have applications in the field of compact transducer arrays [TK06,PBA15], underwater acoustics and sonar technology [Wil99,KAH04,KA08]; but they are also used to simplify sound field models for analysis and synthesis applications by reducing the spatial dimensionality to two, describing a sound field that is invariant along the z-axis [Pol00, TK06, AS08, KRWB10, FNW12, AR14]. In that case, the integral in Equations (2.4) and (2.5), respectively, simplifies considerably. ...
This Ph.D. thesis concerns advances in acoustic transducer array technology for improved sound field analysis and control performance. Four principal investigations are presented, which address specific performance limitations of microphone arrays and loudspeaker arrays. The basic model, on which these investigations are founded, is the general solution of the Helmholtz equation in cylindrical coordinates. The individual acoustical investigations draw on the analysis of the eigenvalues and eigenfunctions of the forward operator, providing information on the robustness of the inverse solutions against non-uniqueness, ill-conditioning and spatial aliasing. A circular microphone array design based on tangentially aligned pressure gradient sensors is studied. The theoretical analysis is complemented by a simulation study, comparing the new design to conventional arrays built from pressure sensors. It is shown that the proposed design can provide an improved performance at low frequencies, while performing worse at high frequencies due to spatial aliasing. The effects of the latter can be compensated if the Direction-of-Arrival (DOA) of the incoming waves is known. A novel DOA estimation method for sound fields measured with circular microphone arrays is proposed to address this. Using analytical expressions to model the sound fields of point sources and plane waves, it is studied for which sound fields the method is applicable and how robust it is against model imperfections. The estimation accuracy for different numbers of sources and different levels of background noise is investigated in a simulation study and the method is tested against real data, obtained through acoustic measurements. The estimation results achieved in simulations and with experimental data compare well. The general solution to the Helmholtz equation is then applied as a model for acoustic radiation in wedge-shaped spaces. This investigation aims to improve the performance of loudspeaker arrays in restricted propagation spaces, e.g. rooms. By introducing boundary conditions to the general model, different sets of basis functions are implemented in the solution and it is shown that the model enables Nearfield Acoustical Holography (NAH). Using the same propagation model, a technique for sound field control with arrays in wedge spaces is developed. The inverse problem is solved by means of a mode-matching approach, leading to an expression for the driving signals based on a target beam pattern. Both simulations and experiments with a hemi-cylindrical loudspeaker array prototype confirm the applicability of the model for both NAH and beamforming with loudspeaker arrays in wedge spaces. Different beam patterns are considered and the model is tested through simulations and experiments. The implications of the findings, how they are linked and what future developments they may lead to are discussed.
... Unless specified, we adopt the spherical conventions used in [Wil99] where θ is measured from the polar axis z DRIRs are typically measured with 2-D or 3-D microphone arrays and aim at preserving the spatial properties of the room impulse response. In this work, we'll focus on Spherical Microphone Arrays (SMA). ...
... The spherical harmonics (i.e. the angular solutions of the Helmholtz wave equation), are defined as [Wil99] Y m n (Ω) = ...
... The associated Legendre function P m n (cosθ) represents standing spherical waves in θ, the term e imφ represents travelling spherical waves in φ [Raf04]. You can find a more detailed explanation of SHT in [Wil99]. ...
For several decades, reverberation processors were mainly based on feedback delay networks (FDN) that
provide an efficient way to control the distribution of early reflections and the statistical properties of room
reverberation. Recent research has shown an increasing interest in convolutive reverberation processing,
which allows for reproducing the acoustics of a venue or concert hall in recordings.
The spatial acoustic information is typically captured with microphone arrays and extracted into Directional
Room Impulse Responses (DRIRs). Convolving monophonic audio signals with DRIRs allows for creating
playback signals characterised by a certain acoustic impression. Playback signals can be processed for different
professional and consumer audio formats, such as 5.1 Surround, Vector Base Amplitude Panning (VBAP),
Higher-Order Ambisonics (HOA) or headphone-based binaural synthesis using Head-Related Transfer Functions
(HRTFs). Some examples can be found in [Dur05] and [AAG+13].
Using optimized algorithms the computational load can be limited so that rendering the signals in real time
becomes feasible. In previous research IRCAM’s EAC research group has developed perceptual control algorithms
for convolution based room simulators, which have been derived from the Spat~ signal processing
model (see [CSNW13], [Jot99]).
In this work the spherical wave spectral representation and processing of DRIRs measured with spherical
microphone arrays is described. Further, we present a novel method for denoising measured DRIRs, which
is based on [CSNW13]. Computational optimizations for real-time implementations will be discussed.
... Following [Wil99], the upper frequency limit for spatial aliasing $f_{u,\mathrm{alias}}$ decreases with increasing array radius a (overall depending on the maximum order of modal decomposition N) according to the relation N > ka, ...
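As a worked numerical example of this rule (the values of N, a and c below are chosen purely for illustration, not taken from the cited work), combining N > ka with k = 2πf/c gives

$$ f_{u,\mathrm{alias}} \approx \frac{N c}{2\pi a}, \qquad N = 4,\; a = 5\ \mathrm{cm},\; c = 343\ \mathrm{m/s} \;\Rightarrow\; f_{u,\mathrm{alias}} \approx \frac{4 \cdot 343}{2\pi \cdot 0.05} \approx 4.4\ \mathrm{kHz}. $$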
... Therefore it needs to be solved for the geometry. The Helmholtz equation reads [Wil99, Zot09]: ...
... The Helmholtz equation (4.3) is solved using a product ansatz [Wil99] ...
This master thesis describes the development and construction of a novel loudspeaker
array that creates highly directional sound radiation on a spherical section. This section is
flanked by sound hard boundaries which confine radiation to the spherical section. Limiting
possible radiation to a spherical section strongly increases the spatial resolution, while
the number of loudspeakers is the same as with a full-sphere array. The beam pattern is
composed of harmonic functions especially derived for this spherical section. A suitable
combination of these harmonics results in beamforming. The ambition is to aim these
sound beams at the wall to excite directed reflection paths. In analogy to video
projection, the point of acoustic impact should be perceived as the origin of a spherical wave.
For this purpose, the wall reflection should preferably be diffuse. The finite-length boundary’s
influence on beam steering will be investigated. Is it possible to use this sonic projection
instrument as a means of 2D spatialisation if the wall reflection is only diffuse?
... Spherical harmonic representations are commonly used in spatial sound field capture, processing, and reproduction [1,2,3,4]. The angular/directional dependency of the individual modes is described by the spherical harmonics or the Legendre functions, whereas the radial dependency is described by the spherical Bessel/Hankel functions. ...
... Let us assume a plane wave propagating in the direction $\mathbf{n}_{pw} = (1, \theta_{pw}, \phi_{pw})$, with $\theta_{pw}$ and $\phi_{pw}$ respectively denoting the colatitude and azimuth angle. The spherical harmonic expansion of the plane wave reads [1, Eq. (6.175)]

$e^{-i\frac{\omega}{c} r \cos\Theta} = \sum_{n=0}^{\infty} (2n+1)\, i^{-n}\, j_n\!\left(\frac{\omega}{c} r\right) P_n(\cos\Theta)$   (1)

in the frequency domain,1 and [11, Eq. (11)] ...
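A quick numerical check of this expansion is straightforward; in the sketch below the truncation order and the test values of kr and cos Θ are arbitrary illustrative choices.

```python
# Verify the truncated plane-wave expansion against e^{-i (omega/c) r cos(Theta)}.
import numpy as np
from scipy.special import spherical_jn, eval_legendre

kr, cos_theta, N = 3.0, 0.4, 30                     # kr = (omega/c) r, truncation order
n = np.arange(N + 1)
series = np.sum((2 * n + 1) * (-1j) ** n * spherical_jn(n, kr) * eval_legendre(n, cos_theta))
print(abs(series - np.exp(-1j * kr * cos_theta)))   # ~ machine precision for moderate kr
```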
The spatial encoding, manipulation and decoding of a sound field in the spherical harmonics domain requires a discrete-time realization of the radial dependent parts, called radial filters. For plane waves, the spectra of the radial basis functions are described by the spherical Bessel functions and the impulse responses by the Legendre polynomials. Although the radial filters can be designed efficiently by sampling the time-domain radial functions, the resulting spectrum typically suffers from aliasing. In this paper, we present a radial filter design method where the aliasing is reduced by means of spectral pre-emphasis. The antiderivatives of the radial functions are derived in closed form by exploiting the Rodrigues’ formula. This leads to a spectral shaping of the radial functions that is inversely proportional to the frequency. Since the energy lying beyond the Nyquist limit is attenuated, the pre-emphasized signal can be sampled with reduced aliasing. The original magnitude spectrum is then restored by applying a differentiation to the sampled signal. In this study, a first-order IIR filter is used as a digital differentiator. The aliasing reduction achieved by the proposed method is demonstrated for different radii and antiderivative orders.
... and by the "EU Union NextGenera-tionEU/PRTR". The authors acknowledge also the Artemisa computer resources funded by the EU ERDF and Comunitat Valenciana, and the technical support of IFIC (CSIC-UV). [12,13] in order to exploit their characteristics in different applications [14][15][16][17] including source localization [18][19][20]. In this context, the sound field decomposition in terms of spherical harmonics (SH) has been widely adopted since it enables the decoupling of frequency-dependent and direction-dependent components of the acoustic field. ...
... The measured sound pressure P corresponding to a continuous sound field on a sphere of radius R can be decomposed using the SH basis functions as [13] ...
Acoustic signal processing in the spherical harmonics domain (SHD) is an active research area that exploits the signals acquired by higher order microphone arrays. A very important task is that concerning the localization of active sound sources. In this paper, we propose a simple yet effective method to localize prominent acoustic sources in adverse acoustic scenarios. By using a proper normalization and arrangement of the estimated spherical harmonic coefficients, we exploit low-rank approximations to estimate the far field modal directional pattern of the dominant source at each time-frame. The experiments confirm the validity of the proposed approach, with superior performance compared to other recent SHD-based approaches.
... A well-known special case of the Kirchhoff-Helmholtz integral is the Rayleigh integral. It is derived for an infinite half space bounded by a plane and is often referred to in its first or second form, depending on whether the air particle velocity or the pressure, respectively, is assumed to be null at the boundary, or, in other words, whether the sources are assumed to be monopolar or dipolar (Williams, 1999). ...
... Higher order Ambisonics While the potential of WFS is enormous, the WFS audio scene format is strongly dependent on the array geometry and hence difficult to map onto another array or to store efficiently. Another approach for representing an incident sound field, called higher order Ambisonics (HOA), was proposed by Gerzon (1975) and has become extremely popular in the last few decades (Williams, 1999; Zotter and Frank, 2019). Here, the Helmholtz equation is solved in a spherical coordinate system. ...
This thesis takes place within the RASPUTIN project and focuses on the development, evaluation and use of immersive acoustic virtual reality simulation tools for the purpose of helping blind individuals prepare in-situ navigations in unfamiliar reverberant environments. While several assistive tools, such as sensory substitution devices, can provide spatial information during navigation, an alternative approach is to devise a real-time room acoustic simulation and auralization engine for use by blind individuals at home to enable them to virtually navigate in unfamiliar environments under controlled circumstances, hence building mental representations of these spaces prior to in-situ navigation. In this thesis, I tackle three aspects of this subject. The first part focuses on efficient simulations and auralizations of coupled volumes, which occur in many buildings of interest for navigation preparation (e.g. city halls, hospitals, or museums) and whose simulation and auralization can be challenging. The second part focuses on the individualization of head related transfer functions, which is a necessary step in providing individualized and convincing auditory experiences. Finally, the last part investigates some aspects of the space cognition following use of different learning paradigms, such as tactile maps.
... This approach has a clear advantage especially for near-field measurement, because the dimension of the microphone is generally much smaller than that of the sound sources, so the scattering effect can be significantly suppressed. This near-field measurement also makes it possible to apply acoustical holography [11][12][13] so that the source model can be reconstructed. By using this reconstructed source model, the HRTF farther than the measurement plane (hologram plane) can be obtained. ...
... The microphone array was constructed with 4 cm spacing in the vertical direction, and the measurement was conducted by turning the dummy head in 5° steps. The transfer matrix between the equivalent sources and measurement positions is estimated by the spherical radiation function [11,13]. Figure 3(a) shows the system to measure the near-field radiation from the dummy-head source system at the measurement point nearest to the ear. ...
Various approaches based on direct measurement have been used to obtain the precise head-related transfer function (HRTF). However, several problems persist, such as scattering from the sound source and distance effects, which are related to the measurement configuration. In this work, the modified dummy head having two sound sources located at the ear positions is applied reciprocally as the sound source to overcome these problems. By using near-field acoustic holography, the sound sources located in the ear holes are reconstructed from the data measured on the near-field hologram. One can predict the acoustic transfer function between the ear holes and the field positions at any point using the recovered source information described in spherical harmonics. The obtained transfer function is valid for predicting the binaural response from any sound source position. This configuration makes it possible to measure the near-field response with a relatively small scattering effect because of the low Helmholtz number of the microphones. The far-field reconstruction of the HRTF is also possible with the near-field measurement data using acoustical holography. The proposed method is demonstrated with the modified B&K HATS system, in which the microphones in the ear holes are replaced by pipe sources connected to the loudspeaker.
... In other words, the field pressure is expanded into plane waves, and the reconstruction procedure is to obtain the coefficients of the plane waves based on the measured pressure. Although different from the k-space decomposition, the concept of Fourier transformation is inherent to the 3D cylindrical and spherical NAH problems, as discussed in depth in Ref. [4]. ...
... Most of the methods explicitly require the transfer operator T(y, x) between desired acoustic quantities f(y) and measured physical quantities p(x). They built a linear system f(y) = inv(T(y, x)) p(x), in which inv(⋆) represents an inverse operator, by either a general numerical method (BEM-based NAH) [8][9][10][11][12][13], or specific basis spaces such as a general Fourier basis (Fourier-based NAH) [1][2][3][4], simplified monopoles, dipoles (ESM and WSA) [14-16, 18, 20, 21], and fundamental solutions (HELS) [24][25][26][27]. The reconstruction procedure is therefore to solve the linear system to obtain the physical quantities on the boundary, such as pressure or normal velocity in BEM-based NAH, the source strength of the equivalent sources in ESM, and coefficients of basis functions in Fourier-based NAH and HELS, followed by an extrapolation process to achieve the desired acoustic quantities. ...
A mapping relationship-based near-field acoustic holography (MRS-based NAH) is a kind of innovative NAH that explores the mapping relationship between modes on the surfaces of the boundary and the hologram. Thus, reconstruction is converted to obtaining the coefficients of the participant modes on holograms. The MRS-based NAH supplies an analytical method to determine the number of adopted fundamental solutions (FS) as well as a technique to approximate a specific degree of mode on patches by a set of locally orthogonal patterns explored for three widely used holograms, namely planar, cylindrical, and spherical holograms. The NAH framework provides a new insight into the reconstruction procedure based on the FS in spherical coordinates. Reconstruction accuracy based on two types of errors, the truncation errors due to the limited number of participant modes and the inevitable measurement errors caused by uncertainties in the experiment, is available in the NAH. An approach is developed to estimate the lower and upper bounds of the relative error. It supplies a tool to predict the error for a reconstruction under the condition that the truncation error ratio and the signal-to-noise ratio are given. The condition number of the inverse operator is investigated to measure the sensitivity of the reconstruction to the input errors.
... where k = 2πf/c is the wavenumber, c is the wave propagation speed, f is the frequency, m is the order, M is the highest order, exp is the exponential function, and S is the circular wave spectrum. The circular wave spectrum can be expressed by the circular harmonics coefficient and the Bessel function [22]. To avoid spatial aliasing owing to the discretization of the continuous array in Figure 2, the angular bandwidth (M = N) and the number of microphones and loudspeakers can be determined via Equations (1) and (4). ...
... The sound pressure of the desired wave field can be matched only within a circle with a radius of $r_{\mathrm{ref}}$ (reference circle), owing to the dimensional mismatch that occurs because the 3D wave field is expressed as a 2D array. The relationship between the circular harmonics coefficients at $r_{\mathrm{ref}}$ and those at $R_{\mathrm{mic}}$ can be expressed as [22], ...
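The excerpt stops before the expression itself, so the sketch below shows only one common form of such a relation and is an assumption, not the paper's equation: for an interior, source-free 2-D field p(r, φ) = Σ_m C_m J_m(kr) e^{imφ}, circular-harmonic coefficients measured on the microphone circle R_mic map to the reference circle r_ref through a ratio of Bessel functions.

```python
# Hedged sketch of rescaling circular-harmonic coefficients between radii.
import numpy as np
from scipy.special import jv

def coeffs_at_reference(p_m_mic, k, R_mic, r_ref, orders):
    """Map coefficients measured at R_mic to the reference radius r_ref."""
    return p_m_mic * jv(orders, k * r_ref) / jv(orders, k * R_mic)  # ill-conditioned near Bessel zeros

k = 2 * np.pi * 200 / 343.0                 # 200 Hz tone, c = 343 m/s
orders = np.arange(-2, 3)
print(coeffs_at_reference(np.ones(5), k, R_mic=0.5, r_ref=0.3, orders=orders))
```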
In this paper, we propose horizontal active noise control (ANC) using two-dimensional wave field information alone. By reducing the control space to a horizontal plane, the number of microphones and speakers was considerably reduced compared with ANC systems using three-dimensional wave field information. The radii of the reference, microphone, and loudspeaker array were determined based on the wave field reproduction error. Accordingly, the simulation and experimental results of the proposed ANC system were presented based on the use of five microphones and loudspeakers using conventional ANC algorithms. Overall, an average noise reduction of 20 dB was observed inside the microphone array with a radius of 0.5 m for tonal noise at 200 Hz. This performance is acceptable with a drastically reduced number of microphones and speakers. The findings of this study, along with further research conducted in a reverberant room, represent a significant contribution to global ANC commercialization.
... For SMAs, analytical descriptions of the geometry and sensor directivities may be used to derive E, and more information can be found in e.g. [6], [45]- [47]. However, for irregular geometries, such as the array employed for this present study, a general approach is required. ...
... As an additional control condition, a tetrahedral array of cardioid-pattern sensors with a radius of 2 cm, as commonly employed for ambisonic recording in practice, was also used to obtain simulated recordings and encoded into fifth-order SH using the proposed method (tetra par o5). Note that this tetrahedral array was simulated based on analytical descriptors [45], [46] for the same V = 841 directions, in order to have parity with the grid used to simulate the array in question. This condition was intended to reveal any improvements of the proposed method when using an array type that is commercially and widely available, and often employed for capturing first-order linearly encoded recordings (tetra lin o1). ...
This article proposes a parametric signal-dependent method for the task of encoding microphone array signals into Ambisonic signals. The proposed method is presented and evaluated in the context of encoding a simulated seven-sensor microphone array, which is mounted on an augmented reality headset device. Given the inherent flexibility of the Ambisonics format, and its popularity within the context of such devices, this array configuration represents a potential future use case for Ambisonic recording. However, due to its irregular geometry and non-uniform sensor placement, conventional signal-independent Ambisonic encoding is particularly limited. The primary aims of the proposed method are to obtain Ambisonic signals over a wider frequency bandwidth, and at a higher spatial resolution, than would otherwise be possible through conventional signal-independent encoding. The proposed method is based on a multi-source sound-field model and employs spatial filtering to divide the captured sound-field into its individual source and directional ambient components, which are subsequently encoded into the Ambisonics format at an arbitrary order. It is demonstrated through both objective and perceptual evaluations that the proposed parametric method outperforms conventional signal-independent encoding in the majority of cases.
... Expressing the Helmholtz equation in spherical coordinates and separating its variables by means of a product ansatz results in three different ordinary differential equations, with general solutions for the radial, and the two angular components (Williams, 1999;Zotter, 2009a). The angular solutions in and can be compactly summarised by a set of real-valued orthonormal basis functions. ...
... The angular solutions in θ and φ can be compactly summarised by a set of real-valued orthonormal basis functions. These spherical harmonics (SHs) of order n and degree m are defined as (Meyer & Elko, 2016)

$Y_n^m(\theta, \phi) = \sqrt{(2n+1)\,\frac{(n-|m|)!}{(n+|m|)!}}\; P_{n,|m|}(\cos\theta)\cdot\begin{cases}\sqrt{2}\,\sin(|m|\phi), & \text{for } m<0,\\ 1, & \text{for } m=0,\\ \sqrt{2}\,\cos(|m|\phi), & \text{for } m>0,\end{cases}$   (3.14)

normalised as per N3D normalisation, with (·)! denoting the factorial, and $P_{n,|m|}$ representing the associated Legendre functions of the first kind, without Condon-Shortley phase (Williams, 1999; Abramovitz & Stegun, 1964). ...
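An illustrative implementation of the real-valued spherical harmonics quoted above (N3D normalisation, no Condon-Shortley phase) is given below; the function and variable names are our own, not from the thesis.

```python
# Real-valued spherical harmonics following the quoted definition (Eq. 3.14).
import numpy as np
from scipy.special import lpmv, factorial

def real_sh_n3d(n, m, theta, phi):
    """Real spherical harmonic of order n, degree m; theta measured from the polar axis."""
    am = abs(m)
    # scipy's lpmv includes the Condon-Shortley phase (-1)^m, so it is removed here.
    legendre = (-1.0) ** am * lpmv(am, n, np.cos(theta))
    norm = np.sqrt((2 * n + 1) * factorial(n - am) / factorial(n + am))
    if m < 0:
        return norm * legendre * np.sqrt(2.0) * np.sin(am * phi)
    if m == 0:
        return norm * legendre
    return norm * legendre * np.sqrt(2.0) * np.cos(am * phi)

print(real_sh_n3d(0, 0, 0.3, 1.2))        # Y_0^0 = 1 under N3D normalisation
print(real_sh_n3d(1, 1, np.pi / 2, 0.0))  # sqrt(3)*sin(theta)*cos(phi) = 1.732...
```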
Hearing loss (HL) has multifaceted negative consequences for individuals of all age groups. Despite individual fitting based on clinical assessment, consequent usage of hearing aids (HAs) as a remedy is often discouraged due to unsatisfactory HA performance. Consequently, the methodological complexity in the development of HA algorithms has been increased by employing virtual acoustic environments which enable the simulation of indoor scenarios with plausible room acoustics. Inspired by the research question of how to make such environments accessible to HA users while maintaining complete signal control, a novel concept addressing combined perception via HAs and residual hearing is proposed. The specific system implementations employ a master HA and research HAs for aided signal provision, and loudspeaker-based spatial audio methods for external sound field reproduction. Systematic objective evaluations led to recommendations of configurations for reliable system operation, accounting for perceptual aspects. The results from perceptual evaluations involving adults with normal hearing revealed that the characteristics of the used research HAs primarily affect sound localisation performance, while allowing comparable egocentric auditory distance estimates as observed when using loudspeaker-based reproduction. To demonstrate the applicability of the system, school-age children with HL fitted with research HAs were tested for speech-in-noise perception in a virtual classroom and achieved comparable speech reception thresholds as a comparison group using commercial HAs, which supports the validity of the HA simulation. The inability to perform spatial unmasking of speech compared to their peers with normal hearing implies that reverberation times of 0.4 s already have extensive disruptive effects on spatial processing in children with HL. Collectively, the results from evaluation and application indicate that the proposed systems satisfy core criteria towards their use in HA research.
... The pressure at the point (x, y, z), whose position relative to the origin is characterised by the difference between the vector r and the vector r′, is denoted p(x, y, z), with $c_0$ the acoustic velocity at ambient temperature and $k_0$ the wavenumber. Considering the far-field condition $|\mathbf{r} - \mathbf{r}'| \gg \sqrt{S_d}$, using $k_0 c_0 = \omega$ and considering a rigid surface, the Rayleigh integral reduces to [106]

$p(r) = -\,j\omega\rho_0 v S_d\,\dfrac{e^{jk_0 r}}{2\pi r}$   (2.26)

with $r = |\mathbf{r} - \mathbf{r}'|$ the distance between the source and the evaluation point. This makes the acceleration α = jωv appear and confirms that the radiated acoustic pressure is proportional to the acceleration, as described in Section 1.2. ...
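For orientation, the far-field piston formula can be evaluated with a few representative numbers. The membrane area, displacement and distance below are assumptions chosen for illustration only, not parameters from the thesis.

```python
# Illustrative far-field estimate: |p| = omega * rho0 * v * S_d / (2 * pi * r).
import numpy as np

rho0 = 1.21                      # air density, kg/m^3
f = 1000.0                       # frequency, Hz
omega = 2 * np.pi * f
S_d = 1e-4                       # assumed radiating area: 1 cm^2
x_peak = 100e-6                  # assumed peak membrane displacement: 100 um
v = omega * x_peak               # peak membrane velocity
r = 0.1                          # evaluation distance: 10 cm

p_peak = omega * rho0 * v * S_d / (2 * np.pi * r)
spl = 20 * np.log10(p_peak / np.sqrt(2) / 20e-6)   # rms level re 20 uPa
print(f"{spl:.1f} dB SPL")       # ~ 89 dB for these assumed values
```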
In pursuance of making loudspeakers compatible with very large scale integration (VLSI) processes, research has been carried out on the development of MEMS loudspeakers, starting nearly as early as the first MEMS devices, and interest has recently been renewed by the thinning of portable multimedia devices. Despite interesting performances for in-ear loudspeakers, where high resonance frequencies and small displacements can generate significant sound pressure levels, no solution has been found to advantageously replace non-MEMS loudspeakers in free-field applications. To widen the frequency range of MEMS loudspeakers, an innovative geometry based on an assembly of two wafers is proposed, in order to separate the electro-mechanical transducer from the mechano-acoustic one, with the aim of reaching a better optimization target. In this thesis, the working principle of the loudspeaker, as well as the models used for the design and the optimization, are presented and detailed. The innovative custom manufacturing process with which the loudspeaker is manufactured in the CEA Leti clean rooms is then detailed, and the measured frequency response is compared with the simulated ones. The loudspeaker is able to radiate pressures as high as 110 dBSPL at 1 kHz, the resonance frequency of the loudspeaker, which is above the state of the art for MEMS loudspeakers of such dimensions. By using digital signal processing, as well as a dedicated packaging with specific acoustical couplings, this loudspeaker could replace the secondary loudspeaker of smartphones.
... The total pressure at a position on the sphere of a given radius due to all plane waves can be calculated by Equation (1) (Williams, 1999; Rafaely, 2004; Nolan et al. 2018), using the angular spectrum, which is composed of complex numbers; the magnitude of the angular spectrum represents that of the sound waves arriving at the array from an arbitrary direction. Equation (1) also involves the imaginary unit √−1, an exponential term representing a plane wave with a given wavenumber vector and microphone position vector, and the spherical integral over Ω: ...
Spacecraft are exposed to acoustic loads during flights to space by launch vehicles. The endurance of spacecraft structures is verified by a ground acoustic test specified in the test standard documents of the aerospace community. These standard documents require the execution of ground acoustic tests in a reverberant chamber on the assumption that it can generate a diffuse sound field, which is normally defined as “completely isotropic and equal probability of energy flow in all directions”. However, in the space development industry and related studies, the directions of incident sounds which cause non-diffuseness have not been identified by experimental comparison with the theoretical values of an ideal diffuse sound field (e.g. reverberation times, homogeneity of sound pressures and spatial correlation functions). Therefore, the conventional standard documents do not clearly specify quantitative requirements regarding the directional properties of sound fields inside reverberant chambers. To specify the quantitative requirement on the directional properties of sounds, this paper focused on calculating the degree of isotropy of a sound field by the direct measurement of the directional properties of sound waves as angular spectra. In this paper, the method to obtain the angular spectrum, based on the theory of the expansion of the plane wave into spherical harmonics using a spherical microphone array, was first combined with the existing formula of the isotropy indicator, which allowed us to determine the frequency range in which the combined method can be applied to sound fields. The method to determine the applicable frequency range was demonstrated, and the process of obtaining the angular spectrum and isotropy indicator was verified by numerical simulation. Finally, the combined method was applied to the sound field in the reverberant chamber of the JAXA 1600 m³ acoustic test facility.
... In addition, if the test panel is not homogeneous and contains internal structure, even the normal-incidence results obtained with a hydrophone placed close to the test panel may give different results depending on the hydrophone position. These latter issues (outlined in (iii) and (iv) above) can be addressed by nearfield scanning using a hydrophone to sample the complex sound pressure field interacting with the test sample, and then decomposing the sound field into its plane-wave components [Humphrey 1986, Williams 1999]. Results are presented here showing these techniques applied to measurements in laboratory test tanks to determine the reflection and transmission performance of a test sample with regular periodic structure. ...
The properties of the materials used in underwater acoustics are important for applications such as acoustic windows, reflectors and baffles, acoustic barriers or screens, decoupling materials, and anechoic coatings. To characterise the performance of such materials at frequencies above 1 kHz, measurements are typically undertaken on samples of the material in the form of finite sized panels. Such measurements suffer from uncertainty due to the finite size of the panel (leading to contaminating signals from edge diffraction), and the difficulty in simulating the ideal plane-wave insonification. This paper describes work at the UK National Physical Laboratory to minimise these effects by use of: (i) a parametric array as a sound source that provides a directional beam and short broadband pulses; and (ii) nearfield scanning using a hydrophone to sample the complex sound pressure field interacting with the test sample, decomposing the sound field into its plane-wave components. Results are presented of these techniques applied to measurements in laboratory test tanks at frequencies between a few kilohertz and a few hundred kilohertz to determine the reflection and transmission performance of a range of test samples, including panels consisting of homogeneous polymers and materials with regular periodic structure.
... In the angular spectrum method, a plane wave decomposition of a source plane (x, y) is used [139,140]: ...
Acoustical tweezers based on focused acoustical vortices open up tremendous perspectives for the in vitro and in vivo remote and selective manipulation of millimetric down to micrometric objects, with combined selectivity and applied forces out of reach with any other contactless manipulation technique. The first demonstration of 3D particle trapping and manipulation with acoustical vortices was achieved in 2016 with an array of transducers driven by programmable electronics. More recently it has been proposed to use holographic acoustical tweezers based on Archimedes-Fermat spiraling interdigitated transducers (S-IDTs) to design miniaturized acoustical tweezers compatible with a standard microscopy environment. In this PhD, we have explored the possibilities offered by these kinds of acoustical tweezers to address the following unsolved issues: 1) Manipulate selectively and organize human cells with large forces (200 pN) without pre-tagging and without affecting the cells' viability. 2) Create ultra-high frequency tweezers (250 MHz) with high spatial selectivity, able to trap and position individual 4 micron microparticles with nanonewton forces. 3) Manipulate microparticles in 3D in a free environment and translate them axially without motion of the transducer. These goals have been achieved by developing (i) a new numerical code based on the combination of Finite Element simulation of the source and Angular Spectrum propagation of the wave and (ii) appropriate microfabrication procedures, which helped us design and fabricate tweezers with the desired capabilities. This work opens perspectives in microbiology for studying cell interactions and their response to mechanical solicitation, but also for acoustic force spectroscopy.
... Well studied in the literature [14,19], the Spherical Related Transfer Function (SRTF) stands for an analytical inclusion of diffraction for the specific case of a monopole scattered by a rigid sphere. Following recommendation [13], spherical harmonics are truncated at the 40th order. ...
As an inverse problem, sound source localization in three dimensions relies on two distinct cornerstones. One is the physical model chosen to describe the acoustic propagation of the sources to identify and the other is the algorithmic process used to derive information from measured acoustic data. Mainly focusing on the first point, an Equivalent Source Method (ESM) aiming at the simulation of realistic Frequency Response Functions (FRF) is proposed in this paper. The underlying idea is to substitute the acoustic behaviour of a radiating object by a set of acoustic monopoles calibrated with respect to the boundary condition on its skin. Such a method allows one to perform 3D Conventional Beamforming (CBF) with FRFs that take into account the acoustic environment and the influence of the structure. Misleading sound source localization outcomes due to ground reflections or diffraction are therefore prevented. As a first step, the ESM process is validated thanks to the Spherical Related Transfer Function, which provides a rigorous analytical framework for FRF comparison. ESM boils down to an inverse problem in itself upstream of CBF, and various ways of solving it are assessed. With a view to presenting an industrial application, FRF are computed on a car mesh to carry out 3D CBF with the experimental pressure scattered by an omnidirectional source placed near the rear-view mirror, measured by a 160-microphone top array and two 100-microphone side arrays in the Daimler automotive wind tunnel. Finally, a strategy to include the contribution of wind tunnel convective effects at low Mach number is investigated. To this end, a geometric routine based on Amiet's model is coupled with the ESM boundary condition step and assessed on wind tunnel measurements.
... The sound pressure $p_{\mathrm{ref}}$ at the reference position due to a continuous radiating surface can be derived from the Kirchhoff-Helmholtz Integral Equation using Green's functions [12] ...
Increasing pressure from the consumer market challenges automotive OEMs to meet customer expectations for NVH performance in ever more efficient ways. The rapid development pace of new vehicles results in short development cycles, where vehicle testing is kept to a minimum and preferably focused on individual sub-components. Complex elements, such as electric powertrains, are often designed to be compatible with multiple vehicle models, so as to maximize their usage while effectively reducing their development cost. As a result, a thorough characterization and understanding of these primary components are key to preventing potential issues. From an acoustic perspective, this scenario can be simplified by characterizing the vibro-acoustic emission of the source and the propagation paths toward the car passengers separately. A detailed characterization of the sound field around the engine will not only help predict the sound pressure perceived inside the cabin but also understand the impact of individual elements on the noise emitted. In this paper, we present a novel technique using multiple 3D sound intensity probes and a three-dimensional tracking system to measure the sound radiated by an electric powertrain during Wide-Open Throttle (WOT) on a dyno test cell. The 3D radiation data is then combined with acoustic transfer functions measured reciprocally on a full vehicle. This approach enables the synthesis of the individual contributions of the Electric Drive Unit (EDU), junction box, and inverter for a particular vehicle model but it can also be extended to other vehicle implementations just by measuring a new set of acoustic transfer functions. Experimental results are compared with full-vehicle measurements performed on a roller test bench. Results demonstrate the effectiveness of the proposed approach, enabling the prediction of key acoustic features induced by the powertrain inside the cabin while identifying the main areas responsible for the noise emission on critical excitation bands. Furthermore, the usage of order and frequency filtering on the 3D sound visualization maps was proven to be very useful for troubleshooting purposes. In conclusion, the proposed methodology can be used to improve the refinement process of an electric powertrain in an easy and intuitive way, enabling it to identify areas of high noise radiation and predict potential problems of an electric powertrain mounted into different vehicle implementations.
... Ultrasound imaging systems with phased array transducers traditionally rely on focusing and apodization strategies to achieve high spatial resolution in the transverse direction [6,7] and in this context a key consideration is the beamwidth as a function of depth, along with the minimization of sidelobes [8,9]. A useful concept in designing beampatterns is the Fourier transform relation that applies to the field on a source plane and at some farfield or focal depth [10,11]. This, combined with related approximations, enables the specification and selection of beams from practical systems [12]. ...
The free space solution to the wave equation in spherical coordinates is well known as a separable product of functions. Re-examination of these functions, particularly the sums of spherical Bessel and harmonic functions, reveals behaviors which can produce a range of useful beampatterns from radially symmetric sources. These functions can be modified by several key parameters which can be adjusted to produce a wide-ranging family of beampatterns, from the axicon Bessel beam to a variety of unique axial and lateral forms. We demonstrate that several special properties of the simple sum over integer orders of spherical Bessel functions, and then the sum of their product with spherical harmonic functions specifying the free space solution, lead to a family of useful beampatterns and a unique framework for designing them. Examples from a simulation of a pure tone 5 MHz ultrasound configuration demonstrate strong central axis concentration, and the ability to modulate or localize the axial intensity with simple adjustment of the integer orders and other key parameters related to the weights and arguments of the spherical Bessel functions.
... Source localization of speech signals using a Spherical Microphone Array (SMA) has a wide range of applications such as speaker separation, speech enhancement, ambisonics, videoconferencing, etc. [1]-[8]. The SMA is used for Direction of Arrival (DoA) estimation applications as there is no spatial ambiguity in estimating azimuth and elevation angles in the Spherical Harmonic (SH) domain [3]-[5]. ...
Wideband acoustic source localization is a challenging task especially in the presence of noise. Generally wideband source localization is done by averaging the Direction of Arrival (DOA) estimates obtained over multiple frequencies or narrow subbands by the method of frequency smoothing. Localization of multiple wideband sources which are correlated is even more challenging. In this work acoustic source localization under these challenging conditions is addressed. A sparse reconstruction framework is developed for wideband source localization in the Spherical Harmonics (SH) domain. The proposed framework jointly computes both the azimuth and elevation. In contrast to earlier methods of source localization in SH domain this work utilizes the expression for the SH coefficients of the amplitude density of the plane wave, to develop the sparse reconstruction framework. An expression for Cramer Rao Lower Bound (CRLB) on the DOA using this framework is also derived for the wideband sources. This CRLB is shown to easily generalize for narrowband sources as well. The performance of the proposed method is compared with MUltiple Signal Classification in SH domain (MUSIC-SH) and Steered Response Power in SH domain (SRP-SH). The performance is evaluated for correlated narrowband and wideband sources through simulations, conducting experiments in an anechoic chamber and under reverberant conditions. Using the proposed framework, it is shown that correlated narrowband and wideband sources can be resolved reasonably well when compared to conventional methods of acoustic source localization in SH domain.
... where p t is the total sound pressure, p i is the incident sound pressure and correspondingly, p s is the scattered component due to the sphere's presence. These are further expanded by the spherical harmonics series as described by [14], [15] ...
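A hedged sketch of this classical rigid-sphere expansion follows (see, e.g., Williams or Rafaely; phase and sign conventions vary between references, and the helper names below are ours): the n-th radial term combines the incident j_n(ka) with a scattered spherical-Hankel term whose amplitude enforces zero radial velocity on the sphere.

```python
# Radial (mode-strength) term for a plane wave scattered by a rigid sphere of radius a.
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def spherical_hn2(n, x, derivative=False):
    """Spherical Hankel function of the second kind (e^{+i omega t} convention assumed)."""
    return spherical_jn(n, x, derivative) - 1j * spherical_yn(n, x, derivative)

def radial_term_rigid_sphere(n, ka):
    """b_n(ka) up to the 4*pi*i^n plane-wave factor."""
    return spherical_jn(n, ka) - (spherical_jn(n, ka, derivative=True)
                                  / spherical_hn2(n, ka, derivative=True)) * spherical_hn2(n, ka)

ka = 2.0
print([abs(radial_term_rigid_sphere(n, ka)) for n in range(4)])
```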
Spherical microphone arrays enable the directional decomposition of a sound field over all propagation angles and are therefore a valuable analysis tool in enclosures. At the same time, deep neural networks have shown promising performance in acoustic source localisation tasks, especially in challenging acoustic scenarios. Combining neural network source localisation with spherical array signals can prove extremely beneficial in existing applications. However, classical neural networks typically exploit one- or two-dimensional data correlations. This can cause the network estimation to be prone to errors in wave propagation direction and limit the array to specific directions, since the network does not guarantee rotational invariance. In this study, we examine a spherical graph neural network architecture for direction of arrival estimation trained on both simulated and measured sound field features. The spherical graph neural network can capture spherical correlations and can ensure that they remain approximately invariant under rotations. The networks' performance is preliminarily investigated under anechoic conditions with the respective measured spherical array impulse responses.
... Initially, pure-tone sound fields of frequency ω are considered. Assuming that no sound sources are inside Ω, arbitrary acoustic fields can be expressed as a linear combination of propagating and evanescent plane waves [29]. In this study only propagating waves are included in the expansion. ...
Sound field reconstruction aims at estimating acoustic fields over space from a limited number of measurements, typically in connection to the analysis, control, and reproduction of the measured fields. While the existing literature has focused on developing suitable acoustic models and estimation algorithms, less attention has been paid to the positioning of measurements. The present study investigates the optimal sensor placement for reconstructing sound fields over space. The reconstruction problem involves estimating a set of parameters used to describe the sound field. In this study, an optimization problem is formulated, in which the sensor positions that minimize the Bayesian Cramer-Rao bounds for the estimated parameters are chosen from a collection of candidates, while a constraint on the number of active sensors is imposed. An approximate solution to this problem is found via convex relaxation. Numerical results show that tight bounds can be obtained, thus the optimized positions can effectively reduce the mean-squared-error of the estimated parameters. Additionally, a bound on the best achievable performance by a selection of sensors can be calculated, making it possible to assess the optimality of the approximate, convex-relaxed solution. Experimental results indicate lower errors when the positions are optimized as compared to other selection methods.
... Another strategy to approximately solve Eq. (2) is to represent the sound field by spherical wavefunction expansion [23,14], which is referred to as mode matching [17]. The driving signal is obtained so that the expansion coefficients of the synthesized sound field coincide with those of the desired sound field. ...
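In mode matching, the driving signals are chosen so that the expansion coefficients of the synthesized field match those of the desired field up to a truncation order N; a hedged sketch of this condition (notation illustrative) is

\[
\sum_{l=1}^{L} d_l(\omega)\, c_{nm,l}(\omega) = \tilde{c}_{nm}(\omega), \qquad 0 \le n \le N, \ \ |m| \le n,
\]

where c_{nm,l} are the expansion coefficients of the field produced by the l-th loudspeaker, \tilde{c}_{nm} those of the desired field, and the resulting linear system is typically solved for the driving signals d_l in a (regularized) least-squares sense.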
A sound field reproduction method called weighted pressure matching is proposed. Sound field reproduction is aimed at synthesizing the desired sound field using multiple loudspeakers inside a target region. Optimization-based methods are derived from the minimization of errors between synthesized and desired sound fields, which enable the use of an arbitrary array geometry in contrast with integral-equation-based methods. Pressure matching is widely used in the optimization-based sound field reproduction methods because of its simplicity of implementation. Its cost function is defined as the synthesis errors at multiple control points inside the target region; then, the driving signals of the loudspeakers are obtained by solving a least-squares problem. However, in pressure matching, the region between the control points is not taken into consideration. We define the cost function as the regional integration of the synthesis error over the target region. On the basis of the kernel interpolation of the sound field, this cost function is represented as the weighted square error of the synthesized pressures at the control points. Experimental results indicate that the proposed weighted pressure matching outperforms conventional pressure matching.
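As an illustration of the least-squares structure shared by pressure matching and its weighted variant, the following sketch solves for loudspeaker driving signals at a single frequency. The transfer matrix G, desired pressures p_des, and weight matrix W are placeholders, and the Tikhonov regularization is an assumption added for conditioning rather than part of the cited formulation.

```python
import numpy as np

def weighted_pressure_matching(G, p_des, W=None, reg=1e-3):
    """Least-squares driving signals for one frequency bin.

    G     : (M, L) complex transfer functions, loudspeakers -> control points
    p_des : (M,)   complex desired pressures at the control points
    W     : (M, M) positive (semi-)definite weighting matrix; identity
            recovers conventional pressure matching
    reg   : Tikhonov regularization parameter (assumed, for conditioning)
    """
    M, L = G.shape
    if W is None:
        W = np.eye(M)
    A = G.conj().T @ W @ G + reg * np.eye(L)   # regularized normal equations
    b = G.conj().T @ W @ p_des
    return np.linalg.solve(A, b)               # (L,) complex driving signals

# Toy usage with random placeholders (5 control points, 2 loudspeakers)
rng = np.random.default_rng(0)
G = rng.standard_normal((5, 2)) + 1j * rng.standard_normal((5, 2))
p_des = rng.standard_normal(5) + 1j * rng.standard_normal(5)
d = weighted_pressure_matching(G, p_des)
print(np.abs(G @ d - p_des))  # residual synthesis error at the control points
```

Choosing W from a kernel interpolation of the field, as the abstract describes, only changes the weighting matrix; the algebraic structure of the solve is unchanged.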
... Brüel and Kjær commercialized the Fourier NAH under a new name, spatial transformation of sound field (STSF) [137,138,139]. A comprehensive treatment of Fourier NAH for Cartesian, cylindrical, and spherical coordinates can be found in the monograph by Williams [140]. ...
The objective of the European PBNv2 project (ETN GA721615, https://www.h2020-pbnv2.eu/) was to develop numerical and experimental approaches for the characterization of pass-by noise of new-generation vehicles equipped with electric or hybrid powertrains. The project revolved around three axes: noise sources, transmission pathways, and the human subjects exposed to noise. The thesis work presented here develops the "inverse Patch Transfer Function" (iPTF) method for the complete acoustic characterization of a source with complex geometry in an acoustically poorly controlled industrial context. The dissertation is composed of three main parts. The first part presents a detailed study of methods for regularizing an ill-posed problem. Several approaches (Tikhonov, Bayesian, iterative), combined with criteria for selecting optimal solutions, are presented and illustrated on a numerical application case. In particular, the sensitivity of the approaches to noise contamination and to the under-determination of the problem is discussed and explained. The second part exploits the concept of a finite virtual volume to take into account the presence of masking objects, which generally prevent the application of classical near-field acoustic holography methods. The study demonstrates that the iPTF method is very robust even in the presence of an object masking a large part of the source (a rectangular plate in the example treated) at a short distance. An experimental campaign applied to an electric motor in operation demonstrated the potential of the approach. Finally, the last part develops the concept of "blind separation". This approach makes it possible to decompose the reconstructed fields into contributions from different uncorrelated vibrational sources, and thus to improve the understanding of the phenomena at the origin of the radiated noise. A first validation of this approach on a numerical experiment is proposed.
... Sound fields can be described by sound intensity, which is a vector field that represents the flow of sound through a surface per unit area [31]; this is an intensity measurement at a point in space that results in a vector including magnitude and direction information on the sound field. It is expressed in the coordinate system as ...
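The fragment is cut off before the expression itself; for completeness, the time-averaged intensity of a time-harmonic field is commonly written as (a standard textbook form, not necessarily the exact equation of the cited paper)

\[
\mathbf{I}(\mathbf{x}) = \tfrac{1}{2}\,\operatorname{Re}\{ p(\mathbf{x})\, \mathbf{v}^{*}(\mathbf{x}) \},
\]

where p is the complex sound pressure, v the complex particle velocity vector, and the Cartesian components (I_x, I_y, I_z) carry the directional information mentioned above.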
Over the past decade, neural networks have been widely used for direction-of-arrival (DoA) estimation owing to their high accuracy in noisy and reverberant environments. Classes of single-model classifiers generally correspond to discretized DoA candidate angles, which, in the case of three-dimensional estimation, are bounded by the grid derived from uniform sampling over the unit sphere. Motivated by this, we propose an ensemble learning approach for classification tasks to improve estimation accuracy, as the ensemble criterion outperforms the individual criterion. First, the individual networks that make up the ensemble differ slightly by grid rotation according to the Euler rotation theorem to complement discrete directional information. Score fusion was performed in the spherical harmonic domain for a more stable ensemble classification, because the grid rotation also involves an angular mismatch. Moreover, to achieve a more accurate DoA estimation, interpolation over the fused scores was performed. Performance analysis and a comparison of state-of-the-art parametric and deep learning-based methods in several acoustic situations were conducted to determine the accuracy of the ensemble and to analyze the gradual angular error reduction as a function of noise and reverberation levels as more networks were added.
... To simulate multi-channel recordings in large-scale forests, an efficient forest IR simulation algorithm was employed that models acoustic scattering in the forest using single scattering cylinders (SSCs) (Kaneko and Gamper, 2021). 1 A bird vocalization directivity model was incorporated by approximating the radiation from a bird as the radiation from a point source on a small spherical baffle. The far-field approximation of this radiated field is the following (Morse and Ingard, 1986; Williams, 1999): ...
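A numerical sketch of the classical far-field directivity of a point source on a rigid sphere, which is the usual "small spherical baffle" model, is shown below. It assumes an e^{-iωt} time convention, drops overall amplitude prefactors, and is not necessarily the exact expression used in the cited study; the function and parameter names are illustrative.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def baffled_point_source_directivity(ka, theta, n_max=30):
    """Unnormalized far-field directivity magnitude of a point source on a
    rigid sphere: |sum_n (2n+1) (-1j)**n P_n(cos theta) / h_n^(1)'(ka)|.
    Classical series solution; amplitude prefactors are omitted."""
    ct = np.cos(theta)
    total = np.zeros_like(theta, dtype=complex)
    for n in range(n_max + 1):
        # derivative of the spherical Hankel function of the first kind
        hn_prime = spherical_jn(n, ka, derivative=True) \
                   + 1j * spherical_yn(n, ka, derivative=True)
        total += (2 * n + 1) * (-1j) ** n * eval_legendre(n, ct) / hn_prime
    return np.abs(total)

# Example: pattern over the polar angle for ka = 2 (source at theta = 0)
theta = np.linspace(0.0, np.pi, 181)
D = baffled_point_source_directivity(ka=2.0, theta=theta)
print(D.max() / D.min())  # front-to-back contrast for ka = 2
```

In the limit ka → 0 only the n = 0 term survives and the pattern becomes omnidirectional, while larger ka values concentrate the radiation toward the source side of the sphere, which is the qualitative behavior exploited by such directivity models.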
Acoustic wildlife monitoring systems are important tools for capturing information about animal habitation in ecosystems. Previous work has demonstrated the effectiveness of audio-based bird localization techniques. However, few studies have investigated the performance and robustness of distributed systems in large forests. Here, the performance of distributed microphone arrays for localizing birds is examined by simulating forest scenes with added reverberation, ambient noise, and measurement errors. The simulation revealed the importance of the signal-to-noise ratio and the spectral weighting in the localization algorithm. These results may guide the design of large-scale wildlife monitoring systems and suggest promising directions for further improvements.
... It is widely held in textbooks and other scientific literature [1][2][3][4][5][6][7][8] that surface vibrations can only radiate sound waves into the fluid when the vibrational wave is supersonic, i.e., when its propagation speed c_v exceeds the sound speed c_f of the fluid. In contrast, when the vibration is subsonic, i.e., slower than the sound speed, the sound wave is evanescent; it clings to the vibrating surface and does not radiate out into the fluid. ...
It is well known that vibrating surfaces generate sound waves in adjacent fluids. According to the classical radiation model, the nature of these waves depends on whether the vibration propagation speed c_v is higher than (supersonic) or lower than (subsonic) the fluid sound speed c_f. The transition between these two domains is known as coincidence. In the supersonic domain, the sound wave radiates into the fluid. In the subsonic domain, the classical model states that the wave becomes evanescent and clings to the surface. In the last 30 years, however, several articles on leaky guided waves have reported radiating waves in the subsonic domain, even though this is at odds with the classical model. In this article, we derive an enhanced model for sound radiation near and below coincidence. Unlike the classical model, this model fully respects conservation of energy by balancing the radiated power with the power lost from the guided wave underlying the vibration. The model takes into account that this power loss, and the consequent attenuation of the surface vibration, results in an inhomogeneous radiated sound wave, an effect that cannot be neglected near coincidence. We successfully validate our model against exact solutions for leaky A0 Lamb waves around coincidence. The model can also be used as a perturbation method to predict the attenuation of leaky A0 waves from the properties of free A0 waves, giving more accurate estimates than existing perturbation methods. We further investigate subsonic leaky A0 waves using the enhanced model. In doing so, we explain, for example, the peculiar reappearance or persistence of the A0 mode at lower frequencies, an effect brought to attention by previous theoretical studies.
... The RSP measurements over the antenna array, Q(x_i, k), i = 1, ..., M, can be converted to the SH domain by means of orthogonal spatial functions [48], ...
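A minimal sketch of such an order-limited spherical harmonic transform of array measurements is given below. The quadrature weights, the complex spherical harmonics, and the scipy angle convention are assumptions of this illustration, not details taken from the cited work.

```python
import numpy as np
from scipy.special import sph_harm

def sh_transform(q, theta, phi, weights, n_max):
    """Discrete spherical harmonic transform of samples q measured at
    directions (theta: polar angle, phi: azimuth) with quadrature weights.
    Returns (n_max + 1)**2 coefficients ordered as n = 0..n_max, m = -n..n."""
    coeffs = []
    for n in range(n_max + 1):
        for m in range(-n, n + 1):
            # scipy's convention: sph_harm(m, n, azimuth, polar angle)
            Y = sph_harm(m, n, phi, theta)
            coeffs.append(np.sum(weights * q * np.conj(Y)))
    return np.array(coeffs)
```

The transform yields (n_max + 1)^2 coefficients per wavenumber, so the array must provide at least that many (well-distributed) sampling directions for the projection to be meaningful.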
A spherical antenna array (SAA) is a configuration that scans almost the entire radiation sphere with constant directivity. It finds applications in spacecraft and satellite communication. Multiple signal classification (MUSIC) is a widely used multiple-source direction-of-arrival (DoA) estimation method because of its low-complexity implementation in practical applications. However, it is susceptible to noise, which consequently affects its localization accuracy. In this paper, MUSIC-based methods that operate at low signal-to-noise ratio (SNR) are developed via relative electromagnetic (EM) wave pressure measurements of an SAA. The proposed methods are relative pressure MUSIC (RP-MUSIC) and its spherical harmonics domain counterpart (SH-RP-MUSIC). The developed SH-RP-MUSIC algorithm operates in the spherical harmonics domain, thereby allowing a frequency-smoothing approach for the de-correlation of coherent source signals and enhanced localization accuracy. Both of the developed RP-MUSIC and SH-RP-MUSIC algorithms are able to estimate the number of active sources, which is required as a priori knowledge by the conventional MUSIC algorithm. Numerical experiments were used to demonstrate the adequacy of the developed algorithms. In addition, measured experimental data, the practically accepted way of examining any procedure, are employed to demonstrate the merits of the developed algorithms against the conventional MUSIC algorithm and another recent multiple-source localization method from the literature. Finally, in order to achieve DoA estimates with adequate localization accuracy at low SNR using an SAA, the SH-RP-MUSIC algorithm is the better choice.
... Resonant frequencies, in this solid case, can mostly be found using numerical simulations. In Ref. [17], however, one can find an analytical study of the frequency resonances of a pulsing sphere of radius R filled with a liquid and harmonically forced on its boundary, resulting in f_n = (n π v_s)/R, with v_s being the sound speed and n = 1, 2, 3, .... It is important to stress that these frequencies have a similar mathematical structure to the previously discussed ones, differing by multiplicative numerical factors only. ...
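Taking the quoted expression at face value, a quick numerical illustration (the values below are chosen for illustration only and are not from the cited work): for a water-filled sphere with v_s ≈ 1480 m/s and R = 5 cm,

\[
f_1 = \frac{\pi v_s}{R} \approx \frac{3.1416 \times 1480\ \mathrm{m/s}}{0.05\ \mathrm{m}} \approx 9.3 \times 10^{4}\ \mathrm{s^{-1}},
\]

with the higher resonances at integer multiples, f_n = n f_1.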
A plant biological system is exposed to external influences. In general, each plant has its own characteristics and needs, with specific interaction mechanisms adapted to its survival. Interactions between systems can be examined and modeled as energy exchanges of mechanical, chemical or electrical variables. Thus, each specific interaction can be examined by triggering the system via a specific stimulus. The objective of this work was to study a specific stimulus (mechanical stimulation) as a driver of plants and their interaction with the environment. In particular, the experimental design concerns the setting up and testing of an automatic source of mechanical stimuli at different wavelengths, generated by an electromechanical transducer, to induce a micro-interaction in plants (or in parts of them) that is hypothesized to produce a specific behavior. Four different experimental setups were developed for this work, each pursuing the same objective: the analysis of the germination process induced by stimulation with sound waves in the audible range. The introduction of sound waves as a stimulant or inhibitor of plant growth can offer significant advantages when used on a large scale in the primary sector, since these effects could replace polluting chemical treatments.
... An extended (not free) version of the book is also available (Margrave & Lamoureux, 2019). Finally, the book "Fourier Acoustics" by E.G. Williams (Williams, 1999) was essential in developing the theory for cylindrical imaging geometries. ...
... In our analysis of the acoustojet phenomenon [7], in simulations we use the rigorous partial-wave expansion method [11], which depends on the beam-shape and scattering coefficients, to obtain the scattered pressure around a solid elastic spherical particle, where both compressional and shear waves are taken into account. Simulations show [10] that for a sphere of radius 5λ with a relative refractive index of about 1.6, the acoustic jet remains below the diffraction limit over a depth of approximately a few wavelengths, with an intensity gain close to 20 dB relative to the incident intensity. ...
We predict acoustic super resonance modes with a field-intensity enhancement of several tens of thousands (order of magnitude: 10⁴–10⁵), supported by dielectric mesoscale spheres submerged in water, by means of numerical simulations. The super resonances are related to internal dispersion at specific values of both the Mie and particle material parameters, and are responsible for the generation of giant fields within the particles and near their surfaces. Taking into account the analogy between electromagnetic and acoustic waves, this phenomenon is also valid in the electromagnetic (optical) wave band.
... where h_n^{(1)}(·) is the nth-order spherical Hankel function of the first kind, Y_n^m(·) is the spherical harmonic function of order n and degree m, and (r_b, θ_b, φ_b) is the original source position in spherical coordinates [19]. k_l = 2πf_l/v is the lth wavenumber, where f_l and v are the lth frequency and the speed of sound in air, respectively. ...
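A small sketch of how one such basis term, h_n^{(1)}(k_l r) Y_n^m(θ, φ), can be evaluated numerically is given below; the scipy routines, the complex spherical harmonic convention, and the example parameter values are assumptions of this illustration.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, sph_harm

def spherical_wave_term(n, m, k, r, theta, phi):
    """One term of the outgoing spherical wave expansion:
    h_n^(1)(k r) * Y_n^m(theta, phi), with theta the polar angle and
    phi the azimuth (complex-valued spherical harmonics)."""
    h1 = spherical_jn(n, k * r) + 1j * spherical_yn(n, k * r)
    return h1 * sph_harm(m, n, phi, theta)  # scipy: sph_harm(m, n, azimuth, polar)

# Example: order n = 2, degree m = 1 at f = 1 kHz, c = 343 m/s
k = 2 * np.pi * 1000.0 / 343.0
print(spherical_wave_term(2, 1, k, r=0.5, theta=np.pi / 3, phi=np.pi / 4))
```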
We propose a method of head-related transfer function (HRTF) interpolation from sparsely measured HRTFs using an autoencoder with source position conditioning. The proposed method is drawn from an analogy between an HRTF interpolation method based on regularized linear regression (RLR) and an autoencoder. Through this analogy, we found the key feature of the RLR-based method to be that HRTFs are decomposed into source-position-dependent and source-position-independent factors. On the basis of this finding, we design the encoder and decoder so that their weights and biases are generated from source positions. Furthermore, we introduce an aggregation module that reduces the dependence of latent variables on source position for obtaining a source-position-independent representation of each subject. Numerical experiments show that the proposed method works well for unseen subjects and achieves, with only one-eighth of the measurements, an interpolation performance comparable to that of the RLR-based method.
... • The Kirchhoff-Helmholtz (KH) integral [2] formulates the exterior radiation problem in the frequency domain ...
... • The Kirchhoff-Helmholtz (KH) integral [1] models the exterior acoustic radiation of a vibrating surface ...
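For reference, a common frequency-domain form of this integral (sign conventions differ between references; this sketch assumes an exterior radiation problem with outward normal n, an e^{jωt} time convention, and the free-space Green's function) is

\[
p(\mathbf{x}) = \oint_{S}\!\left[\, G(\mathbf{x},\mathbf{y})\,\frac{\partial p(\mathbf{y})}{\partial n} - p(\mathbf{y})\,\frac{\partial G(\mathbf{x},\mathbf{y})}{\partial n}\,\right] dS(\mathbf{y}),
\qquad
G(\mathbf{x},\mathbf{y}) = \frac{e^{-jk\lvert\mathbf{x}-\mathbf{y}\rvert}}{4\pi\lvert\mathbf{x}-\mathbf{y}\rvert},
\]

for field points x exterior to the radiating surface S.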
Near-field Acoustic Holography (NAH) is a well-known problem aimed at estimating the vibrational velocity field of a structure by means of acoustic measurements. In this paper, we propose a NAH technique based on a Convolutional Neural Network (CNN). The devised CNN predicts the vibrational field on the surface of arbitrarily shaped plates (violin plates) with orthotropic material properties from a limited number of measurements. In particular, the architecture, named Super Resolution CNN (SRCNN), is able to estimate the vibrational field with a higher spatial resolution compared to the input pressure. The pressure and velocity datasets have been generated through Finite Element Method simulations. We validate the proposed method by comparing the estimates with the synthesized ground truth and with a state-of-the-art technique. Moreover, we evaluate the robustness of the devised network against noisy input data.
... The mode amplitudes can then be deduced from the particle velocity distribution using the orthogonality properties of their mode shape functions. The basis for the technique is the Fourier relationship between the radiated pressure on the TCS and the axial particle velocity distribution over the duct outlet [44], as outlined in section 2.3. Using appropriate sampling of the source distribution and of the sensors on the TCS, the discretised Rayleigh integral becomes equivalent to a Discrete Fourier Transform relationship, as shown by Kim and Nelson [43]. ...
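The DFT structure becomes apparent once the Rayleigh integral is discretized; a standard frequency-domain form of the first Rayleigh integral (sign and time conventions vary between references) is

\[
p(\mathbf{x},\omega) = \frac{j\omega\rho_0}{2\pi}\int_{S} v_n(\mathbf{y},\omega)\,\frac{e^{-jk\lvert\mathbf{x}-\mathbf{y}\rvert}}{\lvert\mathbf{x}-\mathbf{y}\rvert}\, dS(\mathbf{y}),
\]

where v_n is the normal (here axial) particle velocity over the outlet surface S; sampling both the source distribution and the observation points on regular grids turns the discretized integral into a DFT-like matrix relation.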
An inverse technique for determining the mode amplitudes generated by turbofan inlets, both for tonal and broadband noise, is proposed using pressure measurements made in the near field. The motivation for this research is to make use of the Turbulence Control Screen (TCS). The TCS offers a useful platform for locating microphones to implement a non-intrusive inverse technique, since it is often fitted to aero-engines during ground testing to remove inflow turbulence. Knowledge of the modal content is very useful for characterizing source mechanisms of broadband noise, for determining the most appropriate mode distribution model for duct liner predictions, and for sound power measurements of the radiated sound field. The near-field sound pressure radiated from a duct is modelled by directivity patterns of cut-on modes. The resulting system of equations is ill-posed, and it is shown that the conditioning of the inverse problem, which depends greatly on the positions of the microphones, is important in assessing the sensitivity of the modal solution to measurement noise and thus the modal reconstruction accuracy. An optimal array geometry for robust inversion is investigated. It is then shown that the presence of modes with eigenvalues close to a cut-off frequency results in a poorly conditioned directivity matrix. A physical interpretation of the Singular Value Decomposition (SVD) of the directivity matrix sheds light on the issues of ill-conditioning as well as on the ability of a given sensor array to detect the radiated sound field. The detection of broadband modes generated by a laboratory-scale fan inlet is performed using the optimal array geometry. This experiment provides a milestone for detecting the modal content of broadband noise produced by real fan inlet engines.
... The governing equation of the 2-dimensional plane wave [30] is ...
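The fragment stops before the equation itself; the usual statement of the two-dimensional wave equation and a plane-wave solution (a standard textbook form, not necessarily the cited paper's exact notation) is

\[
\frac{\partial^2 p}{\partial t^2} = c^2\!\left(\frac{\partial^2 p}{\partial x^2} + \frac{\partial^2 p}{\partial y^2}\right),
\qquad
p(x,y,t) = A\, e^{\,j(\omega t - k_x x - k_y y)}, \quad k_x^2 + k_y^2 = \frac{\omega^2}{c^2}.
\]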
Outlier detection in vibration signals can play an important role in addressing the issue of structural or environmental changes during vibration testing. In this study, a transformer-based model for outlier detection is proposed. Unlike previous statistical and regression outlier detection methods, the proposed model can identify the outlier location in a high-dimensional observation space using the self-attention mechanism. The location of outliers within the vibration observation is marked by a combination of a spatial label and a temporal label. The outlier detection performance of the model is verified by a numerical study of a plane wave and an experimental study of a vibrating plate. These two studies show that the proposed model has good label prediction accuracies (all above 85%) for the outlier location within the plane wave and vibrating plate observations.
... Much of the theory on modern SMA design and processing stems from the work of Meyer and Elko [31], Abhayapala and Ward [32], and Gover et al. [33], as well as Rafaely [34], [35] and Zotter [36] (among many others). Though the physical fundamentals can be traced back to the study of inverse problems in acoustical holography [37] (as well as Gerzon's [17] aforementioned approach to spatialized microphone setups), this work from the early 2000s provides the mathematical details for practical implementations from an engineering perspective, both in terms of array design and signal processing. Crucially, it makes the link between the SMA's spatial sampling configuration (i.e. the layout of the transducer positions on the sphere), the discretization of the SH transform, and the subsequent limitations of the captured sound field. ...
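As a concrete illustration of that link between the sampling layout and the captured sound field (a commonly quoted rule of thumb rather than a detail of the cited works): an order-N spherical harmonic decomposition has (N+1)^2 coefficients, so an array with M capsules can support at most the order N for which

\[
(N+1)^2 \le M, \qquad \text{e.g.}\ M = 32\ \text{capsules} \;\Rightarrow\; N = 4, \ \ \text{since } 5^2 = 25 \le 32 < 36 = 6^2.
\]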
The use of spatial room impulse responses (SRIR) for the reproduction of three-dimensional reverberation effects through multi-channel convolution over immersive surround-sound loudspeaker systems has become commonplace within the last few years, thanks in large part to the commercial availability of various spherical microphone arrays (SMA) as well as a constant increase in computing power. This use has in turn created a demand for analysis and treatment techniques that are not only capable of ensuring the faithful reproduction of the measured reverberation effect, but can also be used to control various modifications of the SRIR in a more “creative” approach, as is often encountered in the production of immersive musical performances and installations.
Within this context, the principal objective of the current thesis is the definition of a complete space-time-frequency framework for the analysis, treatment, and manipulation of SRIRs. The analysis tools should lead to an in-depth model allowing for measurements to first be treated with respect to their inherent limitations (measurement conditions, background noise, etc.), as well as offering the ability to modify different characteristics of the final reverberation effect described by the SRIR. These characteristics can be either completely objective, even physical, or otherwise informed by knowledge of human auditory perception with regard to room acoustics.
To this end, each of the three layers (analysis, treatment, and manipulation) is thoroughly described both theoretically and in terms of its practical implementation. The first two (analysis and treatment) are then rigorously evaluated through simulated validation tests, while illustrative "proof of concept" examples of the various manipulation methods subsequently serve to demonstrate the potential capabilities of the framework on real-world SMA SRIR measurements. Ultimately, these examples also help to open the discussion as to the many directions the work completed in this thesis could then be extended.
... Our work, however, is motivated by understanding how leaky Lamb and Rayleigh waves moving at a speed c_v can radiate energy into an adjacent medium in the subsonic domain, where c_v < c_0. While several works demonstrate this phenomenon [4][5][6][7], the literature has widely regarded it as impossible; many references state that this can only occur in the supersonic domain, where c_v > c_0 [8][9][10][11][12][13]. ...
A vibrating surface in contact with a solid material will generate P- and S-waves in the solid. When the surface vibration is spatially attenuated, we must take into account that the generated waves are always inhomogeneous. In an isotropic elastic solid, such inhomogeneous waves are attenuated perpendicularly to their direction of propagation. When the surface vibration's phase speed is lower than the P- and/or S-waves' speed of sound, the inhomogeneity affects the radiation of P- and S-waves in a major but relatively poorly understood way. For a better understanding, finding the total radiated intensity of the two inhomogeneous waves is key. Our work takes a step towards such an understanding by deriving analytical expressions for the velocity, strain, stress, and intensity fields of arbitrarily inhomogeneous P- and S-waves. Furthermore, we investigate whether the total radiated intensity can be found as the sum of the intensities of the individual P- and S-waves. We find that this is only possible when the surface vibration is unattenuated; for attenuated vibrations, the total radiated intensity should be calculated numerically.
This paper proposes a method to reduce the computational cost of the spectral division method that synthesizes moving sources. The proposed method consists of two approximations: that of the secondary source driving function and that of the trajectory of the moving sources. Combining these two approximations simplifies the integral calculations that traditionally appear in the driving functions, replacing them with a correction of the frequency magnitude and phase of the source signals. Numerical simulations and subjective experiments show that the computational cost can be reduced by a factor of 50–100 compared to the conventional method without significantly affecting the synthesized sound field and the sense of localization.
This document illustrates how to process the signals from the microphones of a rigid-sphere higher-order ambisonic microphone array so that they are encoded with N3D normalization and ACN channel order and thereby can be used with the standard ambisonic software tools such as SPARTA and the IEM Plugin Suite. A MATLAB script is provided.
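Although the referenced MATLAB script is not reproduced here, the ACN/N3D conventions it targets are well defined; a minimal Python sketch of the channel index and the SN3D-to-N3D scaling (assuming the standard AmbiX definitions) is given below.

```python
import math

def acn_index(n, m):
    """Ambisonic Channel Number for spherical harmonic order n and degree m."""
    return n * n + n + m

def sn3d_to_n3d(n):
    """Gain converting an SN3D-normalized channel of order n to N3D."""
    return math.sqrt(2 * n + 1)

# Example: channel order and normalization gains up to second order
for n in range(3):
    for m in range(-n, n + 1):
        print(f"ACN {acn_index(n, m):2d}  (n={n}, m={m:+d})  "
              f"SN3D->N3D gain = {sn3d_to_n3d(n):.3f}")
```

Conversions from other conventions (e.g., FuMa ordering or maxN normalization) would require additional, convention-specific reordering and gain factors beyond this sketch.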
Sound field analysis and reconstruction has been a topic of intense research in the last decades owing to its multiple applications in spatial audio processing tasks. In this context, the identification of the direct and reverberant sound field components is a problem of great interest, for which several solutions exploiting spherical harmonics representations have already been proposed. However, the available techniques demand a large number of high-order microphones (HOMs) and high computational power in order to fulfill the necessary spatial sampling requirements, which can only be relaxed using prior information obtained through acoustic measurements. Inspired by compressed sensing approaches, this paper proposes an alternative sparse formulation for estimating the exterior and interior sound field components in the spherical harmonics domain that makes it possible to reduce hardware requirements without the need for additional acoustic measurements. The results show that a considerable reduction in the number of HOMs can be achieved while improving the estimation of the sound field components.
The effective modulation of acoustic fields is the most important property of acoustic metasurfaces. The realization of full-space wavefront control can significantly enhance the functionality of metasurfaces; however, the existing solutions to this problem are limited by the coupled modulations of the transmitted and reflected wavefronts. In this study, we demonstrate the possibility of controlling transmitted and reflected acoustic wavefronts in a decoupled manner with a passive structure. Simulated analyses of the parameter dependences of the transmission and reflection phases reveal that these phases can be combined arbitrarily within a range of structural parameters. Meanwhile, tunable designs increase the flexibility and simplicity of the modulation of acoustic waves. Using such a tunable structure, a transmission-reflection-integrated (TRI) metasurface is designed. By applying a single TRI metasurface, multiple independent functions are simultaneously realized in the transmitted and reflected regions, which is further confirmed by pancratic multifocal focusing (performed both experimentally and theoretically) and holographic imaging simulations. The simulated, calculated, and experimental data obtained demonstrate efficient wavefront control and excellent functional-integration performance of the TRI metasurface. In this paper, we propose a decoupled method for the simultaneous manipulation of reflected and transmitted acoustic waves, which can enhance the spatial utilization and functionality of acoustic devices.
Binaural rendering aims to immerse the listener in a virtual acoustic scene, making it an essential method for spatial audio reproduction in virtual or augmented reality (VR/AR) applications. The growing interest and research in VR/AR solutions yielded many different methods for the binaural rendering of virtual acoustic realities, yet all of them share the fundamental idea that the auditory experience of any sound field can be reproduced by reconstructing its sound pressure at the listener's eardrums. This thesis addresses various state-of-the-art methods for 3 or 6 degrees of freedom (DoF) binaural rendering, technical approaches applied in the context of headphone-based virtual acoustic realities, and recent technical and psychoacoustic research questions in the field of binaural technology. The publications collected in this dissertation focus on technical or perceptual concepts and methods for efficient binaural rendering, which has become increasingly important in research and development due to the rising popularity of mobile consumer VR/AR devices and applications. The thesis is organized into five research topics: Head-Related Transfer Function Processing and Interpolation, Parametric Spatial Audio, Auditory Distance Perception of Nearby Sound Sources, Binaural Rendering of Spherical Microphone Array Data, and Voice Directivity. The results of the studies included in this dissertation extend the current state of research in the respective research topic, answer specific psychoacoustic research questions and thereby yield a better understanding of basic spatial hearing processes, and provide concepts, methods, and design parameters for the future implementation of technically and perceptually efficient binaural rendering.