This study investigates the effects of acoustic and visual stimuli on subjective preferences for different seating positions in an Italian-style theater. The sound and visual fields of ten positions in the theater were simulated in two laboratories, one equipped with a large (7 m × 3 m) screen with headphones for the sound source, and the other equipped with a 42-inch LCD monitor with 6 pairs of stereo loudspeakers. The sound signal was an anechoic soprano vocal music piece accompanied by a keyboard, convolved with binaural impulse responses at the ten seating positions. The visual stimuli were based on a CAD (computer-aided design) model of the theater, which included images of the singer, stage sets, and an audience. Subjective judgments by paired comparison tests were conducted in the two laboratories with different participant groups. By comparing the results of experimental trials using 1) visual stimuli only, 2) acoustic stimuli only, and 3) both acoustic and visual stimuli, the effects of the acoustic and visual stimuli on seat preference could be examined. The results of the experiments with the two different stimulus presentation systems were similar despite the different participant groups. A subjective scale analysis showed a significant correlation with the sound level of the soprano signal as well as with the clarity C80 for the stage (vocal) source.
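For reference, the clarity index cited above is conventionally defined from the impulse response with an 80 ms early-time limit (standard room-acoustic definition, not restated in the abstract):

\[ C_{80} = 10 \log_{10} \frac{\int_0^{80\,\mathrm{ms}} p^2(t)\,\mathrm{d}t}{\int_{80\,\mathrm{ms}}^{\infty} p^2(t)\,\mathrm{d}t} \quad \mathrm{[dB]} \]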
A system for speech synthesis by rule is described which uses demisyllables (DSs) as phonetic units. The problem of concatenation is discussed in detail; the pertinent stage converts a string of phonetic symbols into a stream of speech parameter frames. For German, about 1650 DSs are required to permit synthesizing a very large vocabulary. Synthesis is controlled by 18 rules which are used for splitting up the phonetic string into DSs, for selecting the DSs in such a way that the inventory size is minimized, and - last but not least - for concatenation. The quality and intelligibility of the synthetic signal are very good; in a subjective test the median word intelligibility dropped from 96.6% for an LPC vocoder to 92.1% for the DS synthesis, and the quality difference between the DS synthesis and ordinary vocoded speech was judged very small.
Low frequency emission by means of the nonlinear interaction of phase conjugate ultrasound beams is studied experimentally and theoretically. The phase conjugation effect is used for the automatic overlap of interacting focused beams of frequencies ω1/2π = 11 MHz and ω2/2π = 10 MHz. Both the interaction of beams propagating in a homogeneous liquid and the interaction of beams scattered by an object immersed in the liquid are considered. While variation of the object position for resonance interaction of co-directional waves does not affect the phase of the emitted wave of low frequency Ω₋ = Δω = ω1 − ω2, an anomalous phase shift proportional to Ω₊ ≅ ω1 + ω2 ≫ Δω is obtained for non-resonance interaction of contra-propagating waves. The application of this anomalous phase shift to "super resolution" in object positioning by means of nonlinear low frequency emission is discussed.
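A plausible way to read the scaling claimed for contra-propagating pumps (an interpretation, not a statement from the paper): the phase of the difference-frequency wave then varies with an object displacement d roughly as

\[ \Delta\varphi \simeq (k_1 + k_2)\,d = \frac{\omega_1 + \omega_2}{c}\,d, \]

i.e. on the scale of the ultrasonic wavelength rather than of the much longer low-frequency wavelength, which would be the basis of the "super resolution" in object positioning.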
Pseudorandom sequences whose periods are relatively prime are introduced to measure reverberation decay curves in a room. One pseudorandom sequence is driven by a clock d times faster than the other, and its polarity is switched according to the other sequence. After radiating the switched signal from a loudspeaker, the cross-correlation between the switching sequence and the d-times decimated envelope sequence of the microphone output is calculated. The decay curve obtained by integrating the cross-correlation according to the formula given by Schroeder equals the ensemble average of the envelope of decay curves driven by white noise. It is shown that the decimation serves not only to decrease the amount of computation but also to increase the signal-to-noise ratio of the measured decay curves.
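The Schroeder integration step referred to above is standard; a minimal sketch (not the authors' decimated cross-correlation scheme, only the backward integration that turns a squared response into a decay curve) might look like:

```python
import numpy as np

def schroeder_decay_db(h, eps=1e-12):
    """Backward (Schroeder) integration of a squared impulse response or
    cross-correlation envelope; returns the normalized decay curve in dB."""
    h = np.asarray(h, dtype=float)
    energy = np.cumsum((h ** 2)[::-1])[::-1]   # energy remaining from t to the end
    return 10.0 * np.log10(energy / (energy[0] + eps) + eps)
```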
The investigation was carried out to satisfy the need for reliable information on resilient materials used for reducing the transmission of structure-borne noise. Two machines were designed for the dynamic testing of twin samples of material under known compressive loads, and their development and performance are described. Results on typical materials have been obtained over a range of frequencies from 1 cycle in 10 minutes to values in the lower audio register. The lower limiting value of dynamic stiffness has been called the "incremental stiffness," and the method of measuring it is described. The effects of wave motion in soft rubber have been investigated for two small samples using a high-frequency apparatus; and at certain frequencies, depending on the dimensions of the specimen, the transmitted force for a given oscillatory deformation is shown to be about seven times that at low frequencies. By using simple beam arrangements, measurements of both creep and incremental stiffness have been made on three types of material, over a period of about 1000 hours of continuous loading. Results are presented showing that most creep occurs during the first 100 hours. The techniques described enable useful information on the performance of resilient materials, as used for the reduction of vibration, to be obtained.
The axial propagation of sound waves in a model consisting of parallel fibers is calculated. The viscous forces and the thermal conduction are taken into account. This leads to viscous waves and to thermal waves besides the usual acoustic compression wave. The potential function for the total field near a fiber is treated as the superposition of the radiated field from the fiber itself and of the scattered fields from all the other fibers. The explicit field equations for a regular square fiber arrangement are derived and the influence of the order of symmetry of the arrangement is discussed. This leads to simplifications in the field equations and to field equations for the case of a homogeneous fiber distribution.
This paper presents an analytical formulation, more compact than the one usually used, for correcting the diffraction associated with the second harmonic of an acoustic wave. This new formulation, resulting from an approximation of the correction applied to the fundamental, makes it possible to obtain solutions for the second harmonic of the average acoustic pressure that are simple, yet sufficiently precise for measuring the nonlinearity parameter B/A with a finite amplitude method. Comparison with other expressions, which require numerical integration, shows that the solutions are precise in the near field.
Measurements were carried out with four subjects to determine the laws describing the dependence of the perceived acoustic roughness and fluctuation strength on the parameters of an amplitude modulated pure tone (modulation factor, level, modulation and carrier frequency). It is shown that fluctuation strength and roughness are proportional to the square of the modulation factor and double for any 20 dB increase in level. Furthermore, the influence of modulation and carrier frequency yields interesting indications regarding the frequency selection in temporally fluctuating stimuli.
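One compact way to write the two reported dependences (an interpretation of the stated results, not a formula from the paper), with m the modulation factor and L the level:

\[ R,\; F \;\propto\; m^{2}\cdot 2^{\,L/(20\,\mathrm{dB})} . \]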
A characteristic of woodwind instruments is the cutoff frequency of their
tone-hole lattice. Benade proposed a practical definition using the measurement
of the input impedance, for which at least two frequency bands appear. The
first one is a stop band, while the second one is a pass band. The value of
this frequency, which is a global quantity, depends on the whole geometry of
the instrument, but is rather independent of the fingering. This seems to
justify the consideration of a woodwind with several open holes as a periodic
lattice. However, the holes on a clarinet are very irregular. The paper
investigates the question of the acoustical regularity: an acoustically regular
lattice of tone holes is defined as a lattice built with T-shaped cells of
equal eigenfrequencies. Then the paper discusses the possibility of division of
a real lattice into cells of equal eigenfrequencies. It is shown that it is not
straightforward but possible, explaining the apparent paradox of Benade's
theory. When considering the open holes from the input of the instrument to its
output, the spacings between holes are enlarged together with their radii: this
explains the relative constancy of the eigenfrequencies.
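For orientation, a commonly quoted lumped-element estimate of the cutoff frequency of a regular open tone-hole lattice (background from Benade-style analysis, not a result restated from this paper) is

\[ f_c \approx \frac{c}{2\pi}\,\frac{b}{a}\,\frac{1}{\sqrt{2\,t_e\,s}}, \]

where a is the bore radius, b the tone-hole radius, t_e the effective hole height and s the half-spacing between adjacent open holes.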
In the present work, the generation and propagation of entropy noise is computed using a compressible URANS approach in combination with appropriate acoustic boundary conditions. The Entropy Wave Generator (EWG) experiment is well suited for validating the proposed approach and for investigating the acoustic sources of entropy noise. The EWG involves a non-reactive tube flow, in which entropy modes are imposed on an incoming air flow by a heating module. The generated temperature non-uniformities are accelerated with the mean flow downstream in a convergent-divergent nozzle, thus producing entropy noise. Simulation results of pressure fluctuations and their spectra for a defined standard configuration as well as for different operating points of the EWG agree very well with the respective experimental data. Additionally, an analysis of the acoustic sources was performed. For this purpose the acoustic sources caused by the acceleration of density inhomogeneities were calculated. For the first time, a numerical method is introduced for the localization of the acoustic sources of entropy noise in acceleration/deceleration regions.
Loudspeaker-based spatial audio reproduction schemes are increasingly used for evaluating hearing aids in complex acoustic conditions. To further establish the feasibility of this approach, this study investigated the interaction between spatial resolution of different reproduction
methods and technical and perceptual hearing aid performance measures using computer simulations. Three spatial audio reproduction methods – discrete speakers, vector base amplitude panning and higher order ambisonics – were compared in regular circular loudspeaker arrays with
4 to 72 channels. The influence of reproduction method and array size on performance measures of representative multi-microphone hearing aid algorithm classes with spatially distributed microphones and a representative single channel noise-reduction algorithm was analyzed. Algorithm classes
differed in their way of analyzing and exploiting spatial properties of the sound field, requiring different accuracy of sound field reproduction. Performance measures included beam pattern analysis, signal-to-noise ratio analysis, perceptual localization prediction, and quality modeling.
The results show performance differences and interaction effects between reproduction method and algorithm class that may be used for guidance when selecting the appropriate method and number of speakers for specific tasks in hearing aid research.
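To illustrate one of the reproduction methods compared above, here is a minimal sketch of 2-D vector base amplitude panning for a regular circular array (not the simulation framework used in the study):

```python
import numpy as np

def vbap_2d_gains(phi_src_deg, phi_spk_deg):
    """Return power-normalized gains for a 2-D circular loudspeaker array:
    the source is panned between the pair of speakers enclosing its azimuth."""
    phis = np.sort(np.asarray(phi_spk_deg, dtype=float))
    p = np.array([np.cos(np.radians(phi_src_deg)),
                  np.sin(np.radians(phi_src_deg))])
    gains = np.zeros(len(phis))
    for i in range(len(phis)):
        j = (i + 1) % len(phis)
        L = np.array([[np.cos(np.radians(phis[i])), np.cos(np.radians(phis[j]))],
                      [np.sin(np.radians(phis[i])), np.sin(np.radians(phis[j]))]])
        g = np.linalg.solve(L, p)          # g such that p = g1*l1 + g2*l2
        if np.all(g >= -1e-9):             # both gains non-negative: pair found
            gains[i], gains[j] = g
            break
    norm = np.linalg.norm(gains)
    return gains / norm if norm > 0 else gains
```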
This article discusses aircraft noise effect assessment with noise effect indexes, such as have recently been developed
for noise monitoring purposes at the airports of Zurich and Frankfurt. Aircraft noise indexes are noise
assessment instruments that express the overall effects of aircraft noise as a single figure which reflects the total
number of people who are in some way affected by the noise of a particular airport. By accounting for the
most important effect measures (such as annoyance or awakening reactions) and by weighting these measures
according to the population density at each grid point within a defined geographic perimeter, noise effect indexes
provide residents and authorities with an integral picture of the total noise effect. The paper reviews basic features
of noise effect indexes and reports on the development and the current practical application of such indexes.
Moreover, it points to specific issues that are not yet fully resolved, such as accounting for the diurnal variation of noise
effects, the definition of calculation perimeters, and the weighting of day and night effects, including the question
of how to unify different effect measures into one index.
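Schematically, the aggregation described above can be written as a population-weighted sum over grid points; the sketch below uses a purely hypothetical exposure-response function and is not the Zurich or Frankfurt index formula:

```python
import numpy as np

def noise_effect_index(levels_db, population, pct_affected):
    """Sum over grid points of residents weighted by the fraction expected
    to be affected (e.g. highly annoyed or awakened) at the local level.

    levels_db    : noise exposure level per grid point (dB)
    population   : number of residents per grid point
    pct_affected : exposure-response function mapping level -> fraction in [0, 1]
    """
    frac = np.array([pct_affected(L) for L in levels_db])
    return float(np.sum(np.asarray(population) * frac))

# purely illustrative exposure-response curve (not an established relation)
example_curve = lambda L: min(1.0, max(0.0, 0.02 * (L - 45.0)))
```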
In the framework of acoustic and seismic monitoring of airports for verifying disarmament or peace-keeping agreements, the propagation of sound emitted by aircraft close to the ground was investigated. The ground effect due to the porous, grass-covered ground is at its maximum at around 100 Hz to 200 Hz. The attenuation reaches up to c. 20 dB for a distance of 100 m. In addition, the microphones located close to the ground (h = 0.3 m) show a significant attenuation over a large frequency range above 200 Hz. By means of an experimentally determined surface impedance, propagation calculations were carried out. Transfer functions were calculated which show the frequency dependent attenuation due to the porous ground for a given geometry. Using reference signals measured close to the sound source, spectra were calculated corresponding to the locations of microphones above the grass-covered ground. The calculated spectra were compared with the measured ones. In the case of transfer function measurements with white noise over a distance of 20 m, the calculated spectra agree very well with the actually measured ones. In the case of the jet aircraft, the simulated spectra fit those of the measured signals only in the general appearance of the amplitude, not in phase. This is probably mainly due to an inaccurate localisation of the sound source and to its extension and directivity.
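A minimal two-ray ground-effect calculation of the kind alluded to above (a generic sketch using the Delany-Bazley impedance model and a plane-wave reflection coefficient; the spherical-wave correction and the experimentally determined impedance of the study are not reproduced here):

```python
import numpy as np

RHO0, C0 = 1.21, 343.0   # air density [kg/m^3], sound speed [m/s]

def delany_bazley_impedance(f, sigma):
    """Normalized surface impedance of a porous ground (Delany-Bazley model),
    sigma = flow resistivity in Pa*s/m^2 (grassland is often ~1e5-3e5)."""
    X = RHO0 * f / sigma
    return 1.0 + 0.0571 * X**-0.754 + 1j * 0.087 * X**-0.732

def excess_attenuation_db(f, h_src, h_rec, dist, sigma):
    """Level of direct + ground-reflected sound relative to free field,
    with a plane-wave reflection coefficient for a locally reacting surface."""
    k = 2.0 * np.pi * f / C0
    r1 = np.hypot(dist, h_src - h_rec)        # direct path
    r2 = np.hypot(dist, h_src + h_rec)        # reflected path (image source)
    cos_theta = (h_src + h_rec) / r2          # incidence angle from the normal
    Zn = delany_bazley_impedance(f, sigma)
    Rp = (Zn * cos_theta - 1.0) / (Zn * cos_theta + 1.0)
    p = np.exp(1j * k * r1) / r1 + Rp * np.exp(1j * k * r2) / r2
    return 20.0 * np.log10(np.abs(p) * r1)    # 0 dB = free-field level
```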
This study deals with the potential audibility of sonic booms from supersonic aircraft at the ground as a consequence of the aircraft flight parameters and the atmospheric variability. A ray tracing model is used to decide whether a sonic boom emitted downwards by a high flying supersonic aircraft hits the ground or is refracted upwards before it reaches the ground. Aircraft altitude, speed, and flight direction are systematically changed
within realistic ranges. The meteorological data rely on a homogeneous reanalysis dataset for eleven years over
the domain of Europe with a very high vertical resolution. The cases of sonic booms hitting or not hitting the ground are identified for various situations and respective frequency distributions are derived. In addition, the angle of incidence of rays arriving at the ground and the turning-level height of rays not reaching the ground are studied as the intensity of sonic booms also depends on these parameters.
It was found that sonic-boom rays are refracted such that they do not reach the ground if the flight altitude is rather high (long propagation path) and the aircraft speed remains between Mach 1.05 and 1.25.
Rays may furthermore fail to reach the ground if the aircraft heading is mainly upwind, the temperature mostly increases along the propagation direction of the ray, and a wind speed maximum exists below flight level. The probability of sonic-boom rays not reaching the ground generally increases from north to south, but also depends on the season.
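For a horizontally stratified atmosphere, the refraction behaviour studied here is often summarized by a cutoff criterion from ray acoustics (stated as general background, not as the authors' model): the downward boom ray launched at flight level turns back upward at the altitude z where the effective sound speed toward the receiver reaches the aircraft ground speed,

\[ c_{\mathrm{eff}}(z) \;=\; c(z) + u_{\parallel}(z) \;=\; V_{\mathrm{aircraft}}, \]

so that booms reach the ground only if the aircraft ground speed exceeds the largest effective sound speed encountered along the downward path (the Mach cutoff condition).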
Conditions on the elastic stiffnesses of anisotropic crystals are derived
such that circularly polarized longitudinal inhomogeneous plane waves with an
isotropic slowness bivector may propagate for any given direction of the normal
to the sagittal plane. Once this direction is chosen, then the wave speed, the
direction of propagation, and the direction of attenuation are expressed in
terms of the mass density, the elastic stiffnesses, and the angle between the
normal to the sagittal plane and the normals (also called "optic axes") to the
planes of central circular section of a certain ellipsoid. In the special case
where this angle is zero, and in this special case only, such waves cannot
propagate.
The present study investigates the effect of approximations of wind and temperature distributions in outdoor sound propagation modelling. Numerical estimates have been carried out by means of two different sound propagation models: one is based on a numerical integration of the linearized Euler equations, the other, a Lagrangian particle model, on ray-tracing methods. Both models are able to fully take into account three-dimensional atmospheric inhomogeneities. They use atmospheric input data from either a meteorological mesoscale model or a flow model. Two- and three-dimensional examples of sound propagation in an inhomogeneous atmosphere over a screen and over a hill are discussed. The effect of neglecting atmospheric inhomogeneities entirely, or only single wind speed components, is demonstrated, as is the accuracy of the effective sound speed approach. A horizontally homogeneous atmosphere approximation leads to significant deviations of up to 8 dB. In the case of sound propagation over a screen at distances up to 110 m, the effective sound speed approach yields accurate results. In the presence of a hill at distances up to 500 m, however, the effective sound speed approach is less reliable: it leads to deviations of up to 3 dB.
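The effective sound speed approach referred to above replaces the moving, stratified atmosphere by an equivalent medium at rest (standard definition, not specific to this study):

\[ c_{\mathrm{eff}}(z) = c(z) + \vec{u}(z)\cdot\hat{n}, \]

where c(z) is the adiabatic sound speed profile, \vec{u}(z) the wind vector and \hat{n} the horizontal unit vector pointing from source to receiver.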
Judgements of the loudness of pure-tone sound stimuli yield a loudness function which relates perceived loudness to stimulus amplitude. Here, a loudness function is derived from physical evidence alone, without regard to human judgments. The resultant loudness function is L = K(q − q₀), where L is loudness, q is the effective sound pressure (with q₀ its value at the loudness threshold), and K is generally a weak function of the number of stimulated auditory nerve fibers. The predicted function is in agreement with loudness judgment data reported by Warren, which imply that, in the suprathreshold loudness regime, decreasing the sound-pressure level by 6 dB results in halving the loudness.
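The 6 dB halving follows directly from the linear form at suprathreshold levels, where q ≫ q₀ (a short worked step using the notation above):

\[ L \approx K q, \qquad \Delta L_{\mathrm{SPL}} = -6\ \mathrm{dB} \;\Rightarrow\; q \to 10^{-6/20}\, q \approx q/2 \;\Rightarrow\; L \to L/2 . \]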
The geometry of a floating bridge on a drumhead soundboard produces string
stretching that is first order in the amplitude of the bridge motion. This
stretching modulates the string tension and consequently modulates string
frequencies at acoustic frequencies. Early work in electronic sound synthesis
identified such modulation as a source of bell-like and metallic timbre. And
increasing string stretching by adjusting banjo string-tailpiece-head geometry
enhances characteristic banjo tone. Hence, this mechanism is likely a
significant source of the ring, ping, clang, and plunk common to the family of
instruments that share floating-bridge/drumhead construction.
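One way to see why the stretching is first order in the bridge amplitude (a geometric sketch with symbols introduced here, not taken from the paper): if the string passes over the floating bridge with a break angle φ toward the tailpiece, a bridge displacement ξ normal to the head changes the total string length by approximately

\[ \Delta L \approx \xi \sin\varphi, \]

so the tension is modulated by ΔT ≈ (ES/L₀) ΔL and the string frequencies by Δf/f ≈ ΔT/(2T), all linear in ξ.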
A numerical finite-difference time-domain model of sound propagation over uneven, absorbing ground was used to simulate a selection of situations in an idealized two-dimensional cross-valley terrain profile with a valley bottom, a side slope and an elevated plateau. The distance of the source (e.g. a road on the valley bottom) from the foot of the slope and the steepness of the slope were varied. In addition, two different meteorological situations with thermally induced slope winds (upslope and downslope) were considered. The atmospheric fields (wind and temperature) were generated by precursory meteorological model runs and are consistent with the chosen topography. The power and spectral composition of the source was kept constant for all model runs. For computational reasons the source spectrum was limited to 100 – 1250 Hz. All situations were simulated with and without roadside noise barriers of uniform height and partly reflecting surfaces.
The results show that the slope is exposed to sound levels that are up to 10 dB higher than over plane ground at the same range. Slope winds lead to a further variation of the sound levels on the slope and the plateau of up to 5 dB. The local insertion loss of roadside barriers depends on the steepness of the slope, the position of the source, the meteorological condition, and the properties of the barriers. The model predicts that the insertion loss is reduced over steep slopes, especially for a source position close to the foot of the slope. On the other hand, a high insertion loss is predicted for nocturnal downwind situations and rather weak slope gradients.
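For readers unfamiliar with the method class, a minimal 2-D pressure-velocity finite-difference time-domain update on a staggered grid is sketched below (a generic, uniform-medium scheme; it does not include the terrain, impedance boundaries or wind fields of the actual model):

```python
import numpy as np

def fdtd_2d_step(p, vx, vy, rho=1.2, c=340.0, dt=1e-5, dx=0.01):
    """One leapfrog step of the linear acoustic equations.
    p  : pressure, shape (ny, nx)
    vx : x particle velocity on a staggered grid, shape (ny, nx - 1)
    vy : y particle velocity on a staggered grid, shape (ny - 1, nx)
    dt must satisfy the CFL condition dt <= dx / (c * sqrt(2))."""
    # velocities updated from the pressure gradient
    vx -= dt / (rho * dx) * (p[:, 1:] - p[:, :-1])
    vy -= dt / (rho * dx) * (p[1:, :] - p[:-1, :])
    # pressure updated from the velocity divergence (interior points only)
    p[:, 1:-1] -= rho * c**2 * dt / dx * (vx[:, 1:] - vx[:, :-1])
    p[1:-1, :] -= rho * c**2 * dt / dx * (vy[1:, :] - vy[:-1, :])
    return p, vx, vy
```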
Bowing a string with a non-zero radius exerts a torque, which excites torsional waves. In general, torsional standing waves have higher fundamental frequencies than do transverse standing waves, and there is generally no harmonic relationship between them. Although torsional waves have little direct acoustic effect, the motion of the bow-string contact depends on the sum of the transverse speed v of the string and the radius times the angular velocity (rω). Consequently, in some bowing regimes, torsional waves could introduce non-periodicity or jitter into the transverse wave. The ear is sensitive to jitter: while quite small amounts of jitter are important in the sounds of (real) bowed strings, modest amounts of jitter can be perceived as unpleasant or unmusical. It follows that, for a well bowed string, aperiodicities produced in the transverse motion by torsional waves (and other effects) must be small. Is this because the torsional waves are of small amplitude, or because of strong coupling between the torsional and transverse waves? We measure the torsional and transverse motion for a string bowed by an experienced player over a range of tunings. The peaks in (rω), which occur near the start and end of the stick phase in which the bow and string move together, are only several times smaller than v during this phase.
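In the notation above, the quantity that must match the bow speed during sticking is the surface velocity of the string at the contact point (a restatement of the measured quantity, with v_bow introduced here for clarity):

\[ v_{\mathrm{contact}} = v + r\,\omega = v_{\mathrm{bow}} \quad \text{(during the stick phase)}, \]

where v is the transverse string velocity, r the string radius and ω the angular velocity of torsion.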
A theoretical analysis of the subharmonic generation process in an acoustical resonator (interferometer) with plane walls is performed. It is shown that, when both the pumping wave and the generated subharmonic are detuned with respect to the resonator modes, the fields can display complex temporal behaviour such as self-pulsing and chaos. A discussion about the acoustical parameters required for the experimental observation of the phenomenon is given.
The tone hole geometry of a clarinet is optimized numerically. The instrument
is modeled as a network of one dimensional transmission line elements. For each
(non-fork) fingering, we first calculate the resonance frequencies of the input
impedance peaks, and compare them with the frequencies of a mathematically even
chromatic scale (equal temperament). A least-squares algorithm is then used to
minimize the differences and to derive the geometry of the instrument. Various
situations are studied, with and without a dedicated register hole and/or
enlargement of the bore. With a dedicated register hole, the differences can
remain less than 10 musical cents throughout the whole usual range of a
clarinet. The positions, diameters and lengths of the chimneys vary regularly
over the whole length of the instrument, in contrast with usual clarinets.
Nevertheless, we recover one usual feature of instruments, namely that
gradually larger tone holes occur when the distance to the reed increases. A
fully chromatic prototype instrument has been built to check these
calculations, and tested experimentally with an artificial blowing machine,
providing good agreement with the numerical predictions.
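Schematically, the optimization criterion described above can be expressed as a least-squares cost in musical cents; in the sketch below, resonance_frequency is a placeholder for the one-dimensional transmission-line model of the instrument, and the reference pitch is only an example:

```python
import numpy as np

def tuning_cost(geometry, fingerings, resonance_frequency, f_ref=146.83):
    """Sum of squared deviations, in cents, between the first impedance-peak
    frequency of each (non-fork) fingering and an equal-tempered chromatic
    scale starting at f_ref (here 146.83 Hz, i.e. D3, chosen as an example).

    resonance_frequency(geometry, fingering) stands in for the transmission-line
    calculation of the input impedance peaks, which is not reproduced here."""
    cost = 0.0
    for n, fingering in enumerate(fingerings):
        f_target = f_ref * 2.0 ** (n / 12.0)            # equal temperament
        f_model = resonance_frequency(geometry, fingering)
        cents = 1200.0 * np.log2(f_model / f_target)    # deviation in cents
        cost += cents ** 2
    return cost
```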
Collisions play an important role in many aspects of the physics of musical instruments. The striking action of a hammer or mallet in keyboard and percussion instruments is perhaps the most important example, but others include reed-beating effects in wind instruments, the string/neck
interaction in fretted instruments such as the guitar as well as in the sitar and the wire/membrane interaction in the snare drum. From a simulation perspective, whether the eventual goal is the validation of musical instrument models or sound synthesis, such highly nonlinear problems pose
various difficulties, not the least of which is the risk of numerical instability. In this article, a novel finite difference time domain simulation framework for such collision problems is developed, where numerical stability follows from strict numerical energy conservation or dissipation,
and where a power law formulation for collisions is employed, as a potential function within a passive formulation. The power law serves both as a model of deformable collision, and as a mathematical penalty under perfectly rigid, non-deformable collision. Various numerical examples, illustrating
the unifying features of such methods across a wide variety of systems in musical acoustics are presented, including numerical stability and energy conservation/dissipation, bounds on spurious penetration in the case of rigid collisions, as well as various aspects of musical instrument physics.
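A commonly used form of the power-law collision potential referred to above (the generic penalty form; the article's exact notation and constants are not restated here) is

\[ \Phi(\eta) = \frac{K}{\alpha+1}\,[\eta]_{+}^{\alpha+1}, \qquad f = \Phi'(\eta) = K\,[\eta]_{+}^{\alpha}, \]

where η is the interpenetration of the colliding objects, [η]₊ = max(η, 0), K ≥ 0 a stiffness constant and α ≥ 1 the power-law exponent; large K approximates the perfectly rigid limit.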
Nonlinear plane acoustic waves propagating through a fluid are studied using Burgers' equation with finite viscosity. The evolution of a simple N-pulse with regular and random initial amplitude and of pulses with a monochromatic or noise carrier is considered. In the latter case the initial pulses are characterized by two length scales: the length scale of the modulation function is much greater than the period or length scale of the carrier. With increasing time the initial pulses are deformed and shocks appear. The finite viscosity leads to a finite shock width, which does not depend on the fine structure of the initial pulse and is fully determined by the shock position in the zero-viscosity limit. The other effect of nonzero viscosity is a shift of the shock position from its position at zero viscosity. This shift, as well as the linear time at which the nonlinear stage of evolution gives way to the linear stage, depends on the fine structure of the initial pulse. It is also shown that the nonlinearity of the medium leads to the generation of a nonzero mean field from an initial random field with zero mean value. The relative fluctuation of the field is investigated at both the nonlinear and the linear stages.
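For reference, Burgers' equation with finite viscosity, on which the analysis is based, can be written in its standard form (the study's specific variables and nondimensionalization are not restated here):

\[ \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = \nu\,\frac{\partial^{2} u}{\partial x^{2}}, \]

with u the particle velocity (or an equivalent field variable) and ν an effective viscous diffusivity.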