The Journal of the Acoustical Society of America

Publisher: Acoustical Society of America; American Institute of Physics Online Journal Publishing Service

Journal description

Current impact factor: 1.65

Impact Factor Rankings

2015 Impact Factor Available summer 2015
2011 Impact Factor 1.55

Additional details

5-year impact 1.92
Cited half-life 0.00
Immediacy index 0.25
Eigenfactor 0.04
Article influence 0.58
Other titles Journal of the Acoustical Society of America (Online), The Journal of the Acoustical Society of America
ISSN 1520-8524
OCLC 38873939
Material type Document, Periodical, Internet resource
Document type Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

Acoustical Society of America

  • Pre-print
    • Archiving status unclear
  • Post-print
    • Author cannot archive a post-print version
  • Restrictions
    • 6 months for JASA
  • Conditions
    • On author's institutional website, governmental websites, as required by author's institution or funder
    • Author's version only on free e-print servers
    • Publisher copyright and source must be acknowledged
    • Publisher's version/PDF may be used on author's own or employer's website only
    • Must link to publisher abstract
    • Set statements to accompany pre-print and post-print deposit
  • Classification
    • white

Publications in this journal

  •
    ABSTRACT: The influence of spatial separation in source distance on speech reception thresholds (SRTs) is investigated. In one scenario, the target was presented at 0.5 m distance, and the masker varied from 0.5 m distance up to 10 m. In a second scenario, the masker was presented at 0.5 m distance and the target distance varied. The stimuli were synthesized using convolution with binaural room impulse responses (BRIRs) measured on a dummy head in a reverberant auditorium, and were equalized to compensate for distance-dependent spectral and intensity changes. All sources were simulated directly in front of the listener. SRTs decreased monotonically when the target was at 0.5 m and the speech masker was moved further away, resulting in an SRT improvement of up to 10 dB. When the speech masker was at 0.5 m and the target was moved away, a large variation across subjects was observed. Neither short-term signal-to-noise ratio (SNR) improvements nor cross-ear glimpsing could account for the observed improvement in intelligibility. However, the effect might be explained by an improvement in the SNR in the modulation domain and a decrease in informational masking. This study demonstrates that distance-related cues can play a significant role when listening in complex environments.
    The Journal of the Acoustical Society of America 02/2015; 137(2):757-767.
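The stimulus preparation described above — placing a source at a distance by convolving dry speech with a measured BRIR, then equalizing the distance-dependent broadband level — can be sketched as follows. This is a minimal illustration, not the study's code: the BRIR here is a synthetic stand-in, whereas real BRIRs would be measured on a dummy head.

```python
# Sketch: spatialize a dry signal with a binaural room impulse response
# (BRIR), then equalize the distance-dependent level change by a common
# RMS gain. The synthetic BRIR and all names are illustrative.
import numpy as np

rng = np.random.default_rng(3)
fs = 16000
dry = rng.standard_normal(fs)                    # 1 s stand-in for dry speech

def synthetic_brir(delay_s, decay_s, n):
    """Toy BRIR: direct path plus exponentially decaying diffuse tail."""
    h = np.zeros(n)
    d = int(delay_s * fs)
    t = np.arange(n - d) / fs
    h[d:] = rng.standard_normal(n - d) * np.exp(-t / decay_s)
    h[d] += 1.0                                  # direct-path component
    return h

left = np.convolve(dry, synthetic_brir(0.0015, 0.3, 8000))
right = np.convolve(dry, synthetic_brir(0.0015, 0.3, 8000))

def rms(v):
    return np.sqrt(np.mean(v ** 2))

# Equalize: one common gain so the binaural signal keeps the broadband
# RMS of the dry signal, removing the distance-dependent intensity cue.
gain = rms(dry) / rms(np.concatenate([left, right]))
left, right = gain * left, gain * right
print(round(rms(np.concatenate([left, right])) / rms(dry), 3))  # → 1.0
```

A single gain applied to both ears preserves interaural level differences while removing the overall 1/r intensity cue, which is why the study can isolate the remaining distance-related cues.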
  •
    ABSTRACT: The radiation efficiency of damped plates is discussed in this letter. Below the critical frequency of a plate, numerical results show that the radiation efficiency is strongly influenced by damping. Modifications of the classical formulas given by Cremer for an infinite plate and by Leppington for a finite rectangular plate are proposed to include the influence of damping on the radiation efficiency.
    The Journal of the Acoustical Society of America 02/2015; 137(2):1032.
  •
    ABSTRACT: Click evoked otoacoustic emissions (CEOAEs) are commonly used both in research and clinics to assay the medial olivocochlear system (MOC). Clicks presented at rates >50 Hz in the contralateral ear have previously been reported to evoke contralateral MOC activity. However, in typical MOC assays, clicks are presented in the ipsilateral ear in conjunction with an MOC elicitor (noise) in the contralateral ear. The effect of click rate in such an arrangement is currently unknown. A forward masking paradigm was used to emulate typical MOC assays and elucidate the influence of ipsilateral click presentation rates on MOC inhibition of CEOAEs in 28 normal-hearing adults. The influence of five click rates (20.83, 25, 31.25, 41.67, and 62.5 Hz) presented at 55 dB peSPL was tested. Results indicate that click rates as low as 31.25 Hz significantly enhance contralateral MOC inhibition, possibly through the activation of ipsilateral and binaural MOC neurons with potential contributions from the middle ear muscle reflex. Therefore, click rates ≤25 Hz are recommended for use in MOC assays, at least at the 55 dB peSPL click level.
    The Journal of the Acoustical Society of America 02/2015; 137(2):724.
  •
    ABSTRACT: The effects of age and hearing loss on recognition of unaccented and accented words of varying syllable length were investigated. It was hypothesized that with increments in syllable length, there would be atypical alterations in syllable stress in accented compared to native English, and that these altered stress patterns would be sensitive to auditory temporal processing deficits with aging. Sets of one-, two-, three-, and four-syllable words with the same initial syllable were recorded by one native English and two Spanish-accented talkers. Lists of these words were presented in isolation and in sentence contexts to younger and older normal-hearing listeners and to older hearing-impaired listeners. Hearing loss effects were apparent for unaccented and accented monosyllabic words, whereas age effects were observed for recognition of accented multisyllabic words, consistent with the notion that altered syllable stress patterns with accent are effective in revealing effects of age. Older listeners also exhibited lower recognition scores for moderately accented words in sentence contexts than in isolation, suggesting that the added demands on working memory for words in sentence contexts impact recognition of accented speech. The general pattern of results suggests that hearing loss, age, and cognitive factors limit the ability to recognize Spanish-accented speech.
    The Journal of the Acoustical Society of America 02/2015; 137(2):884.
  •
    ABSTRACT: Frequency-importance functions (FIFs) quantify intelligibility contributions of spectral regions of speech. In previous work, FIFs were considered as instruments for characterizing intelligibility contributions of individual cochlear implant electrode channels. Comparisons of FIFs for natural speech and vocoder-simulated implant processed speech showed that vocoding shifted peak importance regions downward in frequency by 0.5 octaves. These shifts were attributed to voicing cue changes, and may reflect increased reliance on low-frequency information (apart from periodicity cues) for correct voicing perception. The purpose of this study was to determine whether increasing channel envelope bandwidth would reverse these shifts by improving access to voicing and pitch cues. Importance functions were measured for 48 subjects with normal hearing, who listened to vowel-consonant-vowel tokens either as recorded or as output from five different vocoders that simulated implant processing. Envelopes were constructed using filters that either included or excluded pitch information. Results indicate that vocoding-based shifts are only partially counteracted by including pitch information; moreover, a substantial baseline shift is present even for vocoders with high spectral resolution. The results also suggest that vocoded speech intelligibility is most sensitive to a loss of spectral resolution in high-importance regions, a finding with possible implications for cochlear implant electrode mapping.
    The Journal of the Acoustical Society of America 02/2015; 137(2):733.
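The vocoder simulations described above replace each analysis band of speech with an envelope-modulated carrier, where the envelope filter cutoff determines whether periodicity (pitch) cues survive. A toy FFT-based noise vocoder illustrating this idea — emphatically not the study's processing chain — might look like the following, where `env_cut` stands in for the envelope-filter choice (roughly 50 Hz excludes periodicity cues, roughly 300 Hz retains them); the function name and test signal are invented for illustration.

```python
# Sketch: a toy FFT-based noise vocoder in the spirit of implant
# simulations (NOT the study's processing chain). Each band of the
# input is replaced by band-limited noise modulated by the band's
# lowpass-filtered envelope.
import numpy as np

def noise_vocoder(x, fs, n_bands=5, env_cut=50.0):
    """Replace each analysis band of x with envelope-modulated noise."""
    edges = np.logspace(np.log10(100.0), np.log10(0.45 * fs), n_bands + 1)
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    rng = np.random.default_rng(0)
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(X * band_mask, len(x))
        # Envelope: rectification followed by an FFT-domain lowpass at env_cut.
        E = np.fft.rfft(np.abs(band))
        E[freqs > env_cut] = 0.0
        env = np.maximum(np.fft.irfft(E, len(x)), 0.0)
        # Carrier: white noise restricted to the same band.
        noise = np.fft.irfft(np.fft.rfft(rng.standard_normal(len(x))) * band_mask, len(x))
        out += env * noise
    return out

fs = 16000
t = np.arange(fs) / fs
x = np.sign(np.sin(2 * np.pi * 150 * t))          # crude 150 Hz voiced stand-in
y_nopitch = noise_vocoder(x, fs, env_cut=50.0)    # periodicity removed
y_pitch = noise_vocoder(x, fs, env_cut=300.0)     # periodicity retained
print(len(y_nopitch), len(y_pitch))
```

With `env_cut=300.0`, envelope fluctuations at the 150 Hz "pitch" of the test signal pass into the output; with `env_cut=50.0` they are filtered out, mirroring the include/exclude pitch-information manipulation the abstract describes.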
  •
    ABSTRACT: It is well known that infrasonic wind noise levels are lower for arrays placed in forests and under vegetation than for those in open areas. In this research, the wind noise levels, turbulence spectra, and wind velocity profiles are measured in a pine forest. A prediction of the wind noise spectra from the measured meteorological parameters is developed based on recent research on wind noise above a flat plane. The resulting wind noise spectrum is the sum of the low frequency wind noise generated by the turbulence-shear interaction near and above the tops of the trees and higher frequency wind noise generated by the turbulence-turbulence interaction near the ground within the tree layer. The convection velocity of the low frequency wind noise corresponds to the wind speed above the trees while the measurements showed that the wind noise generated by the turbulence-turbulence interaction is near stationary and is generated by the slow moving turbulence adjacent to the ground. Comparison of the predicted wind noise spectrum with the measured wind noise spectrum shows good agreement for four measurement sets. The prediction can be applied to meteorological estimates to predict the wind noise under other pine forests.
    The Journal of the Acoustical Society of America 02/2015; 137(2):651.
  •
    ABSTRACT: The study was carried out to determine whether cross-modal interactions occur during processing of auditory and/or visual signals that require separation/closure, integration, and duration pattern perception in typically developing children. Thirty typically developing children were evaluated on three auditory processing tests (speech-in-noise test in Indian-English, dichotic consonant-vowel test, and duration pattern test) that tapped separation/closure, integration, and duration pattern perception. The children were also evaluated on the visual and auditory-visual analogues of the auditory tests. Differences between modalities were found for each of the processes tested. Performance when the auditory and visual modalities were tested simultaneously was significantly higher than in the auditory or visual modality alone for tests that involved separation/closure and integration. In contrast, scores on the analogous auditory-visual duration pattern test were significantly higher than on the auditory test but not the visual test. Further, scores in the auditory modality were significantly poorer than in the visual modality for separation/closure and duration patterning but not for integration. The findings indicate that performance on higher-level processing varies depending on the modality assessed and support the presence of cross-modal interactions.
    The Journal of the Acoustical Society of America 02/2015; 137(2):923.
  •
    ABSTRACT: Monitoring the sensory consequences of articulatory movements supports speaking. For example, delaying auditory feedback of a speaker's voice disrupts speech production. Also, there is evidence that this disruption may be decreased by immediate visual feedback, i.e., seeing one's own articulatory movements. It is, however, unknown whether delayed visual feedback affects speech production in fluent speakers. Here, the effects of delayed auditory and visual feedback on speech fluency (i.e., speech rate and errors), vocal control (i.e., intensity and pitch), and speech rhythm were investigated. Participants received delayed (by 200 ms) or immediate auditory feedback, while repeating sentences. Moreover, they received either no visual feedback, immediate visual feedback, or delayed visual feedback (by 200, 400, and 600 ms). Delayed auditory feedback affected fluency, vocal control, and rhythm. Immediate visual feedback had no effect on any of the speech measures when it was combined with delayed auditory feedback. Delayed visual feedback did, however, affect speech fluency when it was combined with delayed auditory feedback. In sum, the findings show that delayed auditory feedback disrupts fluency, vocal control, and rhythm and that delayed visual feedback can strengthen the disruptive effect of delayed auditory feedback on fluency.
    The Journal of the Acoustical Society of America 02/2015; 137(2):873.
  •
    ABSTRACT: This paper studies global subjective assessment, obtained from mean values of surveys addressed to audience members at live concerts in Spanish auditoriums, through the mean values of three orthogonal objective parameters (Tmid, IACCE3, and LEV), expressed in just noticeable differences (JNDs) relative to the best-valued hall. Results show that a linear combination of the relative variations of the orthogonal parameters can largely explain the overall perceived quality of the sample. However, the mean values of certain orthogonal parameters are not representative, which shows that an alternative approach to the problem is necessary. Various possibilities are proposed.
    The Journal of the Acoustical Society of America 02/2015; 137(2):580.
  •
    ABSTRACT: The aim of this work is to evaluate the effects of the heterogeneity and anisotropy of the material properties of cortical bone on its ultrasonic response obtained using the axial transmission method. The heterogeneity and anisotropy are introduced using a parametric probabilistic model. The geometrical configuration of the tested sample is described by a tri-layer medium composed of a heterogeneous, anisotropic solid layer sandwiched between two acoustic fluid layers, one of which is excited by an acoustic line source. The numerical results focus on a quantity of interest, the velocity of the first arriving signal, and show that it strongly depends on the dispersion induced by statistical fluctuations of the stochastic elasticity field.
    The Journal of the Acoustical Society of America 02/2015; 137(2):668.
  •
    ABSTRACT: The experimental observation of long- and short-latency components in both stimulus-frequency and transient-evoked otoacoustic emissions admits a comprehensive explanation within the coherent reflection mechanism, in a linear active transmission-line cochlear model. A local complex reflectivity function associated with roughness was defined and analyzed by varying the tuning factor of the model, systematically showing, for each frequency, a multiple-peak spatial structure compatible with the observed multiple-latency structure of otoacoustic emissions. Although this spatial pattern and the relative peak intensities change with the chosen random roughness function, the multiple-peak structure is a reproducible feature of different "digital ears," in good agreement with experimental data. If one computes the predicted transmission delays as a function of frequency and position for each source, one obtains a good match to the latency-frequency patterns computed directly from synthesized otoacoustic spectra using time-frequency analysis. This result clarifies the role of the spatial distribution of the otoacoustic emission sources, further supporting the interpretation of different-latency otoacoustic components as due to reflection sources localized at different places along the basilar membrane.
    The Journal of the Acoustical Society of America 02/2015; 137(2):768.
  •
    ABSTRACT: This study focuses on imaging local changes in heterogeneous media. The method employed is demonstrated and validated using numerical experiments of acoustic wave propagation in a multiple scattering medium. Changes are simulated by adding new scatterers of different sizes at various positions in the medium, and the induced decorrelation of the diffuse (coda) waveforms is measured for different pairs of sensors. The spatial and temporal dependences of the decorrelation are modeled through a diffuse sensitivity kernel, based on the intensity transport in the medium. The inverse problem is then solved with a linear least square algorithm, which leads to a map of scattering cross section density of the changes.
    The Journal of the Acoustical Society of America 02/2015; 137(2):660.
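The final inversion step described above — mapping measured waveform decorrelations to a spatial density of scattering change through a sensitivity kernel — reduces to a linear least-squares problem of the form d = K m. A minimal sketch with a made-up random kernel (the paper's kernel is diffusion-based; every name and dimension here is illustrative):

```python
# Sketch: linear least-squares imaging of a local change.
# d stacks the decorrelation measurements for all sensor pairs, K is a
# (here: synthetic) sensitivity kernel over grid cells, and m is the
# scattering-cross-section density of the change in each cell.
import numpy as np

rng = np.random.default_rng(2)
n_pairs, n_cells = 40, 25             # sensor pairs x image grid cells

K = rng.random((n_pairs, n_cells))    # stand-in for the diffuse sensitivity kernel
m_true = np.zeros(n_cells)
m_true[12] = 1.0                      # a single new scatterer in cell 12

d = K @ m_true + 0.01 * rng.standard_normal(n_pairs)  # noisy decorrelations

m_est, *_ = np.linalg.lstsq(K, d, rcond=None)
print(int(np.argmax(m_est)))          # strongest recovered change
```

With more measurements than cells and low noise, the plain least-squares solution localizes the change; an under-determined or noisier configuration would call for the regularized variants the inverse-problem literature provides.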
  •
    ABSTRACT: Formant bandwidth estimation is often observed to be more challenging than the estimation of formant center frequencies due to the presence of multiple glottal pulses within a period and short closed-phase durations. This study explores inherently different statistical properties between linear prediction (LP)-based estimates of formant frequencies and their corresponding bandwidths that may be explained in part by the statistical bounds on the variances of estimated LP coefficients. A theoretical analysis of the Cramér-Rao bounds on LP estimator variance indicates that the accuracy of bandwidth estimation is approximately a factor of two poorer than that of center frequency estimation. Monte Carlo simulations of all-pole vowels with stochastic and mixed-source excitation demonstrate that the distributions of estimated LP coefficients exhibit expectedly different variances for each coefficient. Transforming the LP coefficients to formant parameters results in variances of bandwidth estimates being typically larger than the variances of respective center frequency estimates, depending on vowel type and fundamental frequency. These results provide additional evidence underlying the challenge of formant bandwidth estimation due to inherent statistical properties of LP-based speech analysis.
    The Journal of the Acoustical Society of America 02/2015; 137(2):944.
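The transformation from LP coefficients to formant parameters that the abstract refers to is the standard pole mapping: a pole r·e^{jθ} of the LP polynomial gives center frequency fs·θ/(2π) and bandwidth −fs·ln(r)/π, so the bandwidth rides on the pole radius, which estimation noise perturbs strongly. A self-contained sketch of that mapping (the two-formant synthetic signal and all names are illustrative, not from the paper):

```python
# Sketch: formant center frequencies and bandwidths from linear-prediction
# (LP) poles. Frequency comes from the pole angle, bandwidth from the pole
# radius — the quantity whose estimate is statistically more fragile.
import numpy as np

fs = 8000.0

def lp_coefficients(x, order):
    """Autocorrelation-method linear prediction: solve the normal equations."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))          # A(z) = 1 - sum_k a_k z^{-k}

def formants(a, fs):
    """Map the complex LP poles to (center frequency, bandwidth) pairs."""
    poles = np.roots(a)
    poles = poles[poles.imag > 0]               # one pole per conjugate pair
    f = np.angle(poles) * fs / (2 * np.pi)
    bw = -np.log(np.abs(poles)) * fs / np.pi
    idx = np.argsort(f)
    return f[idx], bw[idx]

# Build an all-pole "vowel" with formants at 500 and 1500 Hz.
a_true = np.array([1.0])
for f0, b0 in [(500.0, 60.0), (1500.0, 90.0)]:
    r = np.exp(-np.pi * b0 / fs)
    a_true = np.convolve(a_true, [1.0, -2 * r * np.cos(2 * np.pi * f0 / fs), r * r])

x = np.zeros(4000)
x[0] = 1.0                                      # impulse excitation
for n in range(1, len(x)):                      # direct-form IIR filtering
    for k in range(1, len(a_true)):
        if n >= k:
            x[n] -= a_true[k] * x[n - k]

f_est, b_est = formants(lp_coefficients(x, 4), fs)
print(np.round(f_est), np.round(b_est))         # ≈ [500, 1500] Hz, ≈ [60, 90] Hz
```

In this noiseless case the recovery is essentially exact; adding stochastic excitation, as in the paper's Monte Carlo simulations, spreads the pole radii (hence the bandwidths) far more than the pole angles.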
  •
    ABSTRACT: This paper proposes a sparse linear prediction based algorithm to estimate time difference of arrival. This algorithm unifies the cross correlation method without prewhitening and that with prewhitening via an ℓ2/ℓ1 optimization process, which is solved by an augmented Lagrangian alternating direction method. It also forms a set of time delay estimators that make a tradeoff between prewhitening and non-prewhitening through adjusting a regularization parameter. The effectiveness of the proposed algorithm is demonstrated in noisy and reverberant environments.
    The Journal of the Acoustical Society of America 02/2015; 137(2):1044.
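The tradeoff described above can be illustrated by the two classical endpoints the sparse-LP estimator interpolates between: plain cross-correlation (no prewhitening) and phase-transform (PHAT) weighting (full prewhitening). A minimal generalized-cross-correlation sketch — not the paper's ℓ2/ℓ1 algorithm, and with invented signal parameters:

```python
# Sketch: time-difference-of-arrival (TDOA) by generalized cross-correlation
# at the two prewhitening endpoints. PHAT keeps only the cross-spectrum
# phase; plain GCC keeps its magnitude as well.
import numpy as np

def gcc_tdoa(x, y, prewhiten=False):
    """Estimate the delay of y relative to x, in samples."""
    n = len(x) + len(y) - 1
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    S = np.conj(X) * Y                        # cross-spectrum
    if prewhiten:                             # PHAT: keep phase, drop magnitude
        S = S / (np.abs(S) + 1e-12)
    cc = np.fft.irfft(S, n)
    lag = int(np.argmax(cc))
    return lag if lag <= n // 2 else lag - n  # unwrap negative lags

rng = np.random.default_rng(1)
s = rng.standard_normal(2048)                        # broadband source
delay = 37                                           # illustrative true TDOA
y = np.concatenate([np.zeros(delay), s[:-delay]])    # delayed copy at sensor 2
y += 0.1 * rng.standard_normal(len(y))               # additive sensor noise

print(gcc_tdoa(s, y), gcc_tdoa(s, y, prewhiten=True))
```

For this clean, lightly reverberant-free example both endpoints recover the delay; the paper's contribution is a regularization parameter that moves continuously between them to suit noisy and reverberant conditions.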
  •
    ABSTRACT: Practical vibroacoustic systems involve passive acoustic treatments consisting of highly dissipative media such as poroelastic materials. The numerical modeling of such systems at low to mid frequencies typically relies on substructuring methodologies based on finite element models. Namely, the master subsystems (i.e., structural and acoustic domains) are described by a finite set of uncoupled modes, whereas condensation procedures are typically preferred for the acoustic treatments. However, although accurate, such a methodology is computationally expensive when real-life applications are considered. A potential reduction of the computational burden can be obtained by approximating the effect of the acoustic treatment on the master subsystems without introducing physical degrees of freedom. To do so, the treatment has to be assumed homogeneous, flat, and of infinite lateral extent. Under these hypotheses, simple analytical tools like the transfer matrix method can be employed. In this paper, a hybrid finite element-transfer matrix methodology is proposed. The impact of the limiting assumptions inherent in the analytical framework is assessed for the case of plate-cavity systems involving flat and homogeneous acoustic treatments. The results show that the hybrid model can capture the qualitative behavior of the vibroacoustic system while reducing the computational effort.
    The Journal of the Acoustical Society of America 02/2015; 137(2):976.
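The analytical side of the hybrid method rests on the transfer matrix of a flat, laterally infinite layer. For a single fluid layer at normal incidence the 2×2 matrix relating (pressure, normal velocity) across the layer is standard; a minimal sketch, with made-up layer parameters standing in for a real treatment:

```python
# Sketch: the 2x2 transfer matrix of a single fluid layer at normal
# incidence, relating (pressure, normal velocity) on its two faces —
# the analytical building block for flat, laterally infinite treatments.
import numpy as np

def fluid_layer_T(rho, c, d, f):
    """Transfer matrix of a fluid layer (density rho, sound speed c,
    thickness d) at frequency f, normal incidence."""
    k = 2.0 * np.pi * f / c             # acoustic wavenumber
    Z = rho * c                         # characteristic impedance
    return np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z, np.cos(k * d)]])

# Layers chain by matrix multiplication (here: air gap + an equivalent
# fluid with invented porous-like parameters).
f = 1000.0
T = fluid_layer_T(1.21, 343.0, 0.02, f) @ fluid_layer_T(80.0, 200.0, 0.05, f)

print(np.isclose(np.linalg.det(T), 1.0))   # reciprocity: det T = 1
```

Chaining such matrices across a multilayer treatment yields a surface impedance that can be coupled to the finite element master subsystems, which is what makes the hybrid approach cheap compared with meshing the poroelastic layers.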