The Journal of the Acoustical Society of America

Publisher: Acoustical Society of America (hosted via the American Institute of Physics Online Journal Publishing Service)


  • Other titles
    Journal of the Acoustical Society of America (Online), The Journal of the Acoustical Society of America
  • Material type
    Document, Periodical, Internet resource
  • Document type
    Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

Acoustical Society of America

  • Pre-print
    • Archiving status unclear
  • Post-print
    • Author cannot archive a post-print version
  • Restrictions
    • 6 months for JASA
  • Conditions
    • On authors' institutional or governmental websites, as required by the author's institution or funder
    • Author's version only on free e-print servers
    • Publisher copyright and source must be acknowledged
    • Publisher's version/PDF may be used on the author's own or employer's website only
    • Must link to publisher abstract
    • Set statements to accompany pre-print and post-print deposit
  • Classification
    white

Publications in this journal

  •
    ABSTRACT: The effects of audiovisual versus auditory training for speech-in-noise identification were examined in 60 young participants. The training conditions were audiovisual training, auditory-only training, and no training (n = 20 each). In the training groups, gated consonants and words were presented at 0 dB signal-to-noise ratio; stimuli were either audiovisual or auditory-only. The no-training group watched a movie clip without performing a speech identification task. Speech-in-noise identification was measured before and after the training (or control activity). Results showed that only audiovisual training improved speech-in-noise identification, demonstrating superiority over auditory-only training.
    The Journal of the Acoustical Society of America 08/2014; 136(2):EL142.
  •
    ABSTRACT: The supersonic intensity is a quantity that represents the net acoustic output that a source couples into the medium; it can be regarded as a spatially low-pass filtered version of the active intensity. This spatial filtering can lead to significant error due to spatial truncation. In this paper, based on a space-domain formulation of the problem, the finite aperture error is analyzed and examined experimentally. The results indicate that the finite aperture error can be mitigated with the appropriate processing and that the supersonic intensity provides a valid quantitative representation of the effective radiation of acoustic sources.
    The Journal of the Acoustical Society of America 08/2014; 136(2):461.
  •
    ABSTRACT: This study compared the head-related transfer functions (HRTFs) recorded from the bare ear of a mannequin for 393 spatial locations and for five different hearing aid styles: Invisible-in-the-canal (IIC), completely-in-the-canal (CIC), in-the-canal (ITC), in-the-ear (ITE), and behind-the-ear (BTE). The spectral distortions of each style compared to the bare ear were described qualitatively in terms of the gain and frequency characteristics of the prominent spectral notch and two peaks in the HRTFs. Two quantitative measures of the differences between the HRTF sets and a measure of the dissimilarity of the HRTFs within each set were also computed. In general, the IIC style was most similar and the BTE most dissimilar to the bare ear recordings. The relative similarities among the CIC, ITC, and ITE styles depended on the metric employed. The within-style spectral dissimilarities were comparable for the bare ear, IIC, CIC, and ITC with increasing ambiguity for the ITE and BTE styles. When the analysis bandwidth was limited to 8 kHz, the HRTFs within each set became much more similar.
    The Journal of the Acoustical Society of America 08/2014; 136(2):818.
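The HRTF comparison above relies on quantitative measures of the difference between two transfer-function sets. As an illustrative sketch only (the paper's exact metrics are not specified here), a common such measure is the RMS log-spectral distortion in dB between two magnitude responses at matching frequency bins:

```python
import math

def log_spectral_distortion(h_ref, h_test):
    """RMS difference in dB between two magnitude spectra.

    h_ref, h_test: sequences of linear magnitude values at matching
    frequency bins. Returns the root-mean-square of the per-bin level
    difference 20*log10(|H_test| / |H_ref|).
    """
    if len(h_ref) != len(h_test):
        raise ValueError("spectra must share the same frequency bins")
    diffs = [20.0 * math.log10(t / r) for r, t in zip(h_ref, h_test)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Identical spectra give zero distortion; a uniform 6 dB boost gives 6 dB.
flat = [1.0] * 8
boosted = [10 ** (6.0 / 20.0)] * 8
```

Limiting the bins passed in to those below 8 kHz would mirror the bandwidth restriction mentioned in the abstract.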
  •
    ABSTRACT: The motions of a rigid, unconstrained prolate spheroid subjected to plane sound waves are computed using a preliminary analytic derivation and a numerical approach. The acoustically induced motions are found to comprise torsional as well as translational motion under oblique acoustic incidence and to depend strongly on the sound wavelength, body geometry, and density. The relationship between the motions and the acoustic particle velocity is obtained through finite element simulation for sound wavelengths much longer than the overall size of the prolate spheroid. The results are relevant to the design of inertial acoustic particle velocity sensors based on prolate spheroids.
    The Journal of the Acoustical Society of America 08/2014; 136(2):EL179.
  •
    ABSTRACT: In this paper, a Boundary Integral Equation Method (BIEM) is described for the computation of scattering from a finite, rigid cylinder near a pressure-release interface. The cylinder lies parallel or tilted with respect to the interface plane, so that the azimuthal symmetry of the problem is destroyed. The scattering solution is first described in terms of an azimuthally symmetric free-space solution. The multiple interactions of the scattered field with the interface are accounted for by an azimuthal-conversion matrix. In Sec. III, the method of this paper is benchmarked using wavefield superposition for a sphere near a pressure-release surface. Computed scattered spectra are shown for a finite cylinder, parallel and tilted with respect to the interface, and for a variety of source/receiver geometries. The differences resulting from not including multiple target/interface interactions (the single-scatter solution) and from including all interactions are presented. The problem of irregular frequencies for the single-scatter and fully coupled BIEM solutions is discussed and numerically examined.
    The Journal of the Acoustical Society of America 08/2014; 136(2):485.
  •
    ABSTRACT: This article introduces and provides an assessment of a spatial-filtering algorithm based on two closely-spaced (∼1 cm) microphones in a behind-the-ear shell. The evaluated spatial-filtering algorithm used fast (∼10 ms) temporal-spectral analysis to determine the location of incoming sounds and to enhance sounds arriving from straight ahead of the listener. Speech reception thresholds (SRTs) were measured for eight cochlear implant (CI) users using consonant and vowel materials under three processing conditions: An omni-directional response, a dipole-directional response, and the spatial-filtering algorithm. The background noise condition used three simultaneous time-reversed speech signals as interferers located at 90°, 180°, and 270°. Results indicated that the spatial-filtering algorithm can provide speech reception benefits of 5.8 to 10.7 dB SRT compared to an omni-directional response in a reverberant room with multiple noise sources. Given the observed SRT benefits, coupled with an efficient design, the proposed algorithm is promising as a CI noise-reduction solution.
    The Journal of the Acoustical Society of America 08/2014; 136(2):867.
  •
    ABSTRACT: Phononic crystals (PCs) can exhibit phononic band gaps within which sound and vibrations at certain frequencies do not propagate. In fact, PCs with large band gaps are of great interest for many applications, such as transducers, elastic/acoustic filters, noise control, and vibration shields. Previous work in the field concentrated on PCs made of elastic isotropic materials; however, band gaps can be enlarged by using non-isotropic materials, such as piezoelectric materials. Because the main property of PCs is the presence of band gaps, one possible way to design microstructures that have a desired band gap is through topology optimization. Thus in this work, the main objective is to maximize the width of absolute elastic wave band gaps in piezocomposite materials designed by means of topology optimization. For band gap calculation, the finite element analysis is implemented with Bloch-Floquet theory to solve the dynamic behavior of two-dimensional piezocomposite unit cells. Higher order frequency branches are investigated. The results demonstrate that tunable phononic band gaps in piezocomposite materials can be designed by means of the present methodology.
    The Journal of the Acoustical Society of America 08/2014; 136(2):494.
  •
    ABSTRACT: An alternative to the spectral overlap assessment metric (SOAM), first introduced by Wassink [(2006). J. Acoust. Soc. Am. 119(4), 2334-2350], is introduced. The SOAM quantifies the intra- and inter-language differences between long-short vowel pairs through a comparison of spectral (F1, F2) and temporal properties modeled with best fit ellipses (F1 × F2 space) and ellipsoids (F1 × F2 × duration). However, the SOAM ellipses and ellipsoids rely on a Gaussian distribution of vowel data and a dense dataset, neither of which can be assumed in endangered languages or languages with limited available data. The method presented in this paper, called the Vowel Overlap Assessment with Convex Hulls (VOACH) method, improves upon the earlier metric through the use of best-fit convex shapes. The VOACH method reduces the incorporation of "empty" data into calculations of vowel space. Both methods are applied to Numu (Oregon Northern Paiute), an endangered language of the western United States. Calculations from the VOACH method suggest that Numu is a primary quantity language, a result that is well aligned with impressionistic analyses of spectral and durational data from the language and with observations by field researchers.
    The Journal of the Acoustical Society of America 08/2014; 136(2):883.
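The VOACH method above replaces best-fit ellipses with best-fit convex shapes over the (F1, F2) vowel space. A minimal, self-contained sketch of the underlying geometry (Andrew's monotone-chain convex hull plus the shoelace area formula; the token values below are hypothetical, and this is not the authors' implementation):

```python
def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(hull):
    """Shoelace formula over the hull vertices."""
    n = len(hull)
    s = sum(hull[i][0] * hull[(i + 1) % n][1] - hull[(i + 1) % n][0] * hull[i][1]
            for i in range(n))
    return abs(s) / 2.0

# Hypothetical (F1, F2) tokens in Hz for one vowel category:
tokens = [(300, 2200), (350, 2100), (320, 2400), (400, 2300), (330, 2250)]
area = hull_area(convex_hull(tokens))
```

Unlike a fitted ellipse, the hull area grows only where tokens actually occur, which is the sense in which "empty" regions of the vowel space are excluded.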
  •
    ABSTRACT: Phonation models commonly rely on the assumption of a two-dimensional glottal geometry to assess kinetic and viscous flow losses. In this paper, the glottal cross-section shape is taken into account in the flow model in order to capture its influence on vocal-fold oscillation. For the assessed cross-section shapes (rectangular, elliptical, or circular segment), the minimum pressure threshold required to sustain vocal-fold oscillation is altered for constriction degrees smaller than 75%. The discrepancy between cross-section shapes increases as the constriction degree decreases.
    The Journal of the Acoustical Society of America 08/2014; 136(2):853.
  •
    ABSTRACT: The nonlinear propagation of spark-generated N-waves through thermal turbulence is experimentally studied at the laboratory scale under well-controlled conditions. A grid of electrical resistors was used to generate the turbulent field, well described by a modified von Kármán model. A spark source was used to generate high-amplitude ([Formula: see text] Pa) and short duration ([Formula: see text] [Formula: see text]) N-waves. Thousands of waveforms were acquired at distances from 250 to 1750 mm from the source ([Formula: see text]15 to 100 wavelengths). The mean values and the probability densities of the peak pressure, the deviation angle, and the rise time of the pressure wave were obtained as functions of propagation distance through turbulence. The peak pressure distributions were described using a generalized gamma distribution, whose coefficients depend on the propagation distance. A line array of microphones was used to analyze the effect of turbulence on the propagation direction. The angle of deviation induced by turbulence was found to be smaller than [Formula: see text], which validates the use of the parabolic equation method to model this kind of experiment. The transverse size of the focus regions was estimated to be on the order of the acoustic wavelength for propagation distances longer than 50 wavelengths.
    The Journal of the Acoustical Society of America 08/2014; 136(2):556.
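The peak-pressure statistics in the study above are modeled with a generalized gamma distribution whose coefficients vary with propagation distance. As a reference sketch (assuming Stacy's common three-parameter form; the paper's exact parameterization may differ), the density can be evaluated with the standard library alone:

```python
import math

def gengamma_pdf(x, a, d, p):
    """Generalized gamma density (Stacy form), defined for x > 0:

        f(x) = (p / a**d) * x**(d-1) * exp(-(x/a)**p) / Gamma(d/p)

    Reduces to the ordinary gamma distribution for p = 1 and to the
    Weibull distribution for d = p.
    """
    if x <= 0:
        return 0.0
    return (p / a ** d) * x ** (d - 1) * math.exp(-(x / a) ** p) / math.gamma(d / p)
```

For p = 1, d = 1, a = 1 this collapses to the unit exponential density exp(-x), which gives a quick sanity check.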
  •
    ABSTRACT: Localizing a source of radial movement at moderate range using a single hydrophone can be achieved in the reliable acoustic path by tracking the time delays between the direct and surface-reflected arrivals (D-SR time delays). The problem is defined as a joint estimation of the depth, initial range, and speed of the source, which are the state parameters for the extended Kalman filter (EKF). The D-SR time delays extracted from the autocorrelation functions are the measurements for the EKF. Experimental results using pseudorandom signals show that accurate localization results are achieved by offline iteration of the EKF.
    The Journal of the Acoustical Society of America 08/2014; 136(2):EL159.
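The localization scheme above feeds D-SR time delays, read off autocorrelation functions, into an extended Kalman filter. A toy sketch of the measurement-extraction step only (names and the impulse-plus-echo test signal are illustrative, not from the paper): the direct/surface-reflected delay appears as a secondary autocorrelation peak.

```python
def autocorr(x):
    """Biased autocorrelation of a real sequence at non-negative lags."""
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) / n for k in range(n)]

def dsr_delay(x, min_lag=1):
    """Lag (in samples) of the largest autocorrelation peak past min_lag,
    a stand-in for the direct/surface-reflected delay estimate."""
    r = autocorr(x)
    return max(range(min_lag, len(r)), key=lambda k: abs(r[k]))

# Toy signal: a unit impulse plus an attenuated surface "echo" 7 samples later.
n = 64
s = [0.0] * n
s[0] = 1.0
x = [s[i] + (0.6 * s[i - 7] if i >= 7 else 0.0) for i in range(n)]
```

In the paper's setting the source transmits pseudorandom signals, whose sharp autocorrelation makes this kind of peak-picking robust; the extracted lag then serves as the EKF measurement.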
  •
    ABSTRACT: A few linear theories [Swift, J. Acoust. Soc. Am. 84(4), 1145-1180 (1988); Swift, J. Acoust. Soc. Am. 92(3), 1551-1563 (1992); Olson and Swift, J. Acoust. Soc. Am. 95(3), 1405-1412 (1994)] and numerical models based on low-Mach-number analysis [Worlikar and Knio, J. Comput. Phys. 127(2), 424-451 (1996); Worlikar et al., J. Comput. Phys. 144(2), 199-324 (1996); Hireche et al., Canadian Acoust. 36(3), 164-165 (2008)] describe the flow dynamics of standing-wave thermoacoustic engines, but almost no simulation results are available that enable the prediction of the behavior of practical engines experiencing a significant temperature gradient between the stack ends and thus producing large-amplitude oscillations. Here, a one-dimensional non-linear numerical simulation based on the method of characteristics to solve the unsteady compressible Euler equations is reported. Formulation of the governing equations, implementation of the numerical method, and application of the appropriate boundary conditions are presented. The calculation uses explicit time integration along with deduced relationships expressing the friction coefficient and the Stanton number for oscillating flow inside circular ducts. Helium, a mixture of Helium and Argon, and Neon are used for system operation at mean pressures of [Formula: see text], [Formula: see text], and [Formula: see text] bars, respectively. The self-induced pressure oscillations are accurately captured in the time domain and then transformed into the frequency domain, separating the pressure signals into fundamental and harmonic components. The results obtained are compared with reported experimental works [Swift, J. Acoust. Soc. Am. 92(3), 1551-1563 (1992); Olson and Swift, J. Acoust. Soc. Am. 95(3), 1405-1412 (1994)] and with the linear theory, showing better agreement with the measured values, particularly in the non-linear regime of the dynamic pressure response.
    The Journal of the Acoustical Society of America 08/2014; 136(2):649.
  •
    ABSTRACT: This study compared pitch ranking, electrode discrimination, and electrically evoked compound action potential (ECAP) spatial excitation patterns for adjacent physical electrodes (PEs) and the corresponding dual electrodes (DEs) for newer-generation Cochlear devices (Cochlear Ltd., Macquarie, New South Wales, Australia). The first goal was to determine whether pitch ranking and electrode discrimination yield similar outcomes for PEs and DEs. The second goal was to determine if the amount of spatial separation among ECAP excitation patterns (separation index, Σ) between adjacent PEs and the PE-DE pairs can predict performance on the psychophysical tasks. Using non-adaptive procedures, 13 subjects completed pitch ranking and electrode discrimination for adjacent PEs and the corresponding PE-DE pairs (DE versus each flanking PE) from the basal, middle, and apical electrode regions. Analysis of d' scores indicated that pitch-ranking and electrode-discrimination scores were not significantly different, but rather produced similar levels of performance. As expected, accuracy was significantly better for the PE-PE comparison than either PE-DE comparison. Correlations of the psychophysical versus ECAP Σ measures were positive; however, not all test/region correlations were significant across the array. Thus, the ECAP separation index is not sensitive enough to predict performance on behavioral tasks of pitch ranking or electrode discrimination for adjacent PEs or corresponding DEs.
    The Journal of the Acoustical Society of America 08/2014; 136(2):715.
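The pitch-ranking and electrode-discrimination results above are analyzed as d' scores. The standard signal-detection computation (a textbook formula, not specific to this study's procedure) transforms hit and false-alarm rates through the inverse normal CDF, which the Python standard library provides:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, n=None):
    """d' = z(hit rate) - z(false-alarm rate).

    If n (trials per condition) is given, rates of exactly 0 or 1 are
    nudged by the common 1/(2n) correction so the z-transform stays finite.
    """
    z = NormalDist().inv_cdf
    if n is not None:
        lo, hi = 1.0 / (2 * n), 1.0 - 1.0 / (2 * n)
        hit_rate = min(max(hit_rate, lo), hi)
        fa_rate = min(max(fa_rate, lo), hi)
    return z(hit_rate) - z(fa_rate)
```

Chance performance (equal hit and false-alarm rates) gives d' = 0, and larger d' corresponds to more discriminable electrode pairs.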
  •
    ABSTRACT: The role of spectro-temporal modulation cues in conveying tonal information for lexical tones was assessed in native-Mandarin and native-French adult listeners using a lexical-tone discrimination task. The fundamental frequency (F0) of Thai tones was either degraded using an 8-band vocoder that reduced fine spectral details and frequency-modulation cues, or extracted and used to modulate the F0 of click trains. Mandarin listeners scored lower than French listeners in the discrimination of vocoded lexical tones. For click trains, Mandarin listeners outperformed French listeners. These preliminary results suggest that the perceptual weight of the fine spectro-temporal modulation cues conveying F0 information is enhanced for adults speaking a tonal language.
    The Journal of the Acoustical Society of America 08/2014; 136(2):877.
  •
    ABSTRACT: Blind multichannel identification is generally sensitive to background noise. Although there have been some efforts in the literature devoted to improving the robustness of blind multichannel identification with respect to noise, most of those works assume that the noise is Gaussian distributed, which is often not valid in real room acoustic environments. This paper deals with the more practical scenario where the noise is not Gaussian. To improve the robustness of blind multichannel identification to non-Gaussian noise, a robust normalized multichannel frequency-domain least-mean M-estimate algorithm is developed. Unlike the traditional approaches that use the squared error as the cost function, the proposed algorithm uses an M-estimator to form the cost function, which is shown to be immune to non-Gaussian noise with a symmetric α-stable distribution. Experiments based on the identification of a single-input/multiple-output acoustic system demonstrate the robustness of the proposed algorithm.
    The Journal of the Acoustical Society of America 08/2014; 136(2):693.
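The algorithm above swaps the squared-error cost for an M-estimator to resist impulsive, non-Gaussian noise. A minimal sketch of the idea using the Huber M-estimator (a common choice; the paper's exact estimator may differ): errors inside a threshold are treated quadratically as in least squares, while larger, outlier-like errors are down-weighted so their influence is capped.

```python
def huber_weight(e, delta=1.0):
    """Huber M-estimator weight w(e) = psi(e)/e: 1 inside the quadratic
    region |e| <= delta, and delta/|e| in the linear (outlier) region."""
    a = abs(e)
    return 1.0 if a <= delta else delta / a

def weighted_error(e, delta=1.0):
    """Error term fed to the adaptive update: psi(e) = w(e) * e, which
    limits the influence of impulsive noise samples to +/- delta."""
    return huber_weight(e, delta) * e
```

In an NLMS-style update the raw error e would simply be replaced by psi(e); small errors pass through unchanged, so Gaussian-noise behavior is preserved.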
  •
    ABSTRACT: Many features of auditory perception are positively altered in musicians. Traditionally, auditory mechanisms in musicians have been investigated using the Western-classical musician model. The objective of the present study was to adopt an alternative model, Indian-classical music, to further investigate auditory temporal processing in musicians. This study shows that musicians have significantly lower across-channel gap detection thresholds than nonmusicians. Use of the South Indian musician model provides increased external validity for the prediction, from studies on Western-classical musicians, that auditory temporal coding is enhanced in musicians.
    The Journal of the Acoustical Society of America 08/2014; 136(2):EL173.
  •
    ABSTRACT: Australian snubfin and Indo-Pacific humpback dolphins co-occur throughout most of their range in the coastal waters of tropical Australia. Little is known of their ecology or acoustic repertoires. Vocalizations from humpback and snubfin dolphins were recorded at two locations along the Queensland coast during 2008 and 2010 to describe their vocalizations and evaluate the acoustic differences between the two species. Broad vocalization types were categorized qualitatively. Both species produced click trains, burst pulses, and whistles. Principal component analysis of the nine acoustic variables extracted from the whistles produced nine principal components that were input into discriminant function analyses, which classified 96% of humpback dolphin whistles and about 78% of snubfin dolphin whistles correctly. Results indicate clear acoustic differences between the whistle repertoires of these two species. A stepwise routine identified two principal components as significantly distinguishing the whistles of the two species: frequency parameters and the frequency trend ratio. The capacity to identify these species using acoustic monitoring techniques has the potential to provide information on presence/absence, habitat use, and relative abundance for each species.
    The Journal of the Acoustical Society of America 08/2014; 136(2):930.
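The whistle classification above uses principal component analysis followed by discriminant function analysis. As a deliberately simplified stand-in (nearest-centroid classification on hypothetical whistle features, not the authors' PCA/DFA pipeline), the core idea of assigning a call to the acoustically closest species class can be sketched as:

```python
import math

def centroid(rows):
    """Mean feature vector of one class's training whistles."""
    return [sum(col) / len(col) for col in zip(*rows)]

def classify(x, centroids):
    """Label of the nearest class centroid (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    return min(centroids, key=lambda lab: dist(x, centroids[lab]))

# Hypothetical whistle features: (maximum frequency in kHz, duration in s).
train = {
    "humpback": [(18.0, 0.40), (17.5, 0.45), (19.0, 0.38)],
    "snubfin":  [(9.0, 0.20), (8.5, 0.25), (10.0, 0.22)],
}
cents = {lab: centroid(rows) for lab, rows in train.items()}
```

A real pipeline would first standardize the nine acoustic variables and project them onto principal components before fitting the discriminant functions.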
  •
    ABSTRACT: In the real world, spoken communication occurs in complex environments that involve audiovisual speech cues, spatially separated sound sources, reverberant listening spaces, and other complicating factors that influence speech understanding. However, most clinical tools for assessing speech perception are based on simplified listening environments that do not reflect the complexities of real-world listening. In this study, speech materials from the QuickSIN speech-in-noise test by Killion, Niquette, Gudmundsen, Revit, and Banerjee [J. Acoust. Soc. Am. 116, 2395-2405 (2004)] were modified to simulate eight listening conditions spanning the range of auditory environments listeners encounter in everyday life. The standard QuickSIN test method was used to estimate 50% speech reception thresholds (SRT50) in each condition. A method of adjustment procedure was also used to obtain subjective estimates of the lowest signal-to-noise ratio (SNR) where the listeners were able to understand 100% of the speech (SRT100) and the highest SNR where they could detect the speech but could not understand any of the words (SRT0). The results show that the modified materials maintained most of the efficiency of the QuickSIN test procedure while capturing performance differences across listening conditions comparable to those reported in previous studies that have examined the effects of audiovisual cues, binaural cues, room reverberation, and time compression on the intelligibility of speech.
    The Journal of the Acoustical Society of America 08/2014; 136(2):777.
