Hearing Research (HEARING RES)

Publisher: Elsevier

Description

The aim of the journal is to provide a forum for papers concerned with basic auditory mechanisms. Emphasis is on experimental studies, but theoretical papers will also be considered. The editor of the journal is prepared to accept original research papers in the form of full-length papers, short communications, letters to the Editor, and reviews. Papers submitted should deal with auditory neurophysiology, ultrastructure, psychoacoustics and behavioural studies of hearing in animals, and models of auditory functions. Papers on comparative aspects of hearing in animals and man, and on effects of drugs and environmental contaminants on hearing function will also be considered. Clinical papers will not be accepted unless they contribute to the understanding of normal hearing functions.

  • Impact factor
    2.54
  • 5-year impact
    2.74
  • Cited half-life
    0.00
  • Immediacy index
    0.62
  • Eigenfactor
    0.01
  • Article influence
    0.99
  • Website
    Hearing Research website
  • Other titles
    Hearing research
  • ISSN
    0378-5955
  • OCLC
    4410062
  • Material type
    Periodical, Internet resource
  • Document type
    Journal / Magazine / Newspaper, Internet Resource

Publisher details

Elsevier

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Voluntary deposit by author of pre-print allowed on institution's open scholarly website and pre-print servers
    • Voluntary deposit by author of post-print allowed on institution's open scholarly website, including Institutional Repository
    • Deposit due to Funding Body, Institutional and Governmental mandate only allowed where a separate agreement between repository and publisher exists
    • Set statement to accompany deposit
    • Published source must be acknowledged
    • Must link to journal home page or article's DOI
    • Publisher's version/PDF cannot be used
    • Articles in some journals can be made Open Access on payment of an additional charge
    • NIH authors' articles will be submitted to PMC after 12 months
    • Authors who are required to deposit in subject repositories may also use the Sponsorship Option
    • Pre-print cannot be deposited for The Lancet
  • Classification
    green

Publications in this journal

  • ABSTRACT: Speech perception in noise remains difficult for cochlear implant (CI) users, even after many years of CI use. This study aimed to investigate the neurophysiological and behavioral foundations of CI users' speech perception in noise. Seventeen post-lingual CI users and twelve age-matched normal-hearing adults participated in two experiments. In Experiment 1, CI users' auditory-only word perception in noise (white noise or two-talker babble, at 10 dB SNR) degraded by about 15% relative to perception in quiet (48% accuracy). CI users' auditory-visual word perception was generally better than auditory-only perception. Auditory-visual word perception was degraded under informational masking by the two-talker noise (69% accuracy) compared with quiet (77%). No such degradation was observed for white noise (77%), suggesting that overcoming informational masking is an important issue for improving CI users' speech perception. In Experiment 2, event-related cortical potentials were recorded in an auditory oddball task in quiet and in noise (white noise only). Like the normal-hearing participants, the CI users showed the mismatch negative response (MNR) to deviant speech in quiet, indicating automatic speech detection. In noise, the MNR disappeared in the CI users, and only the good CI performers (above 66% accuracy) showed a P300 (P3), as the normal-hearing participants did. P3 amplitude in the CI users was positively correlated with speech perception scores. These results suggest that CI users' difficulty with speech perception in noise is associated with the lack of automatic speech detection indexed by the MNR. Successful performance in noise may begin with attended auditory processing, indexed by P3.
    Hearing Research 10/2014;
  • ABSTRACT: This multi-disciplinary research showed that sound could be coded by electrical stimulation of the cochlea and peripheral auditory nervous system. However, the temporal coding of frequency seen in the experimental animal, for responses from single neurons or groups of neurons, was inadequate for the important speech frequencies. This was also consistent with the behavioural findings in the experimental animal and with perception from electrical stimulation in people with cochlear implants. The data indicated that the limitation was due in particular to deterministic firing of neurons and a failure to reproduce the normal fine temporo-spatial pattern of neural responses. However, the data also showed the need for place coding of frequency, and this meant multiple electrodes inserted into the cochlea. Before this was evaluated in people, I undertook biological safety studies to determine the effects of surgical trauma and electrical stimuli, and how to prevent infection. Our further research demonstrated an important relation between the perception of basic stimuli and speech that led to our discovery in 1978 of the formant-extraction speech code, which first enabled severely-profoundly deaf people to understand running speech. This result, in people who had had hearing before becoming severely deaf, was an outcome not previously considered possible. This code became the forerunner of our advanced speech codes with additional formants and with fixed filter outputs. When these codes were used for those born deaf or deafened early in life, I discovered there was a critical period during which brain plasticity would allow speech perception and language to develop near-normally, and this required in particular the acquisition of place coding. Finally, I achieved binaural hearing in 1989 with bilateral cochlear implants, followed by bimodal speech processing in 1990 with a hearing aid in one ear and an implant in the other.
    The above research has been developed industrially; for example, 250,000 people worldwide had received the Cochlear device by 2013, and as of December 2012 the NIH estimated that approximately 324,200 people worldwide had received this and other implants (NIH Publication No. 11-4798).
    Hearing Research 08/2014;
  • ABSTRACT: Although many cochlear implant (CI) recipients perceive speech very well in favorable conditions, they still have difficulty with music, speech in noisy environments, and tonal languages. Studies show that CI users' performance on these tasks is correlated with their ability to perceive pitch. The spread of the stimulation field from the electrodes to the auditory nerve is one of the factors affecting performance. This study proposes a model of auditory perception to predict the performance of CI users in pitch-ranking tasks using an existing sound processing scheme. The model is then used as a platform to investigate the effect of stimulation field spread on performance.
    Hearing Research 01/2014;
  • ABSTRACT: The newfound context-dependent brainstem encoding of speech is evidence of online regularity detection and modulation of sub-cortical responses. We studied the influence of the spectral structure of the contextual stimulus on context-dependent encoding of speech at the brainstem, in an attempt to understand the acoustic basis for this effect. Fourteen normal-hearing adults participated in a randomized true experimental design in which brainstem responses were recorded. Brainstem responses to a high-pass-filtered /da/ in the context of syllables that had either the same or a different spectral structure were compared with each other. The findings suggest that spectral structure is one of the parameters that cue context-dependent sub-cortical encoding of speech. Interestingly, the results also revealed that the brainstem can encode pitch even with negligible acoustic information below the second formant frequency.
    Hearing Research 06/2013;
  • Hearing Research 01/2013;
  • ABSTRACT: Opto-electronic computer holographic measurements were made of the tympanic membrane (TM) in cadaveric chinchillas. Measurements with two laser wavelengths were used to compute the 3D shape of the TM. Single-wavelength measurements locked to eight distinct phases of a tonal stimulus were used to determine the magnitude and relative phase of the surface displacements. These measurements were made at over 250,000 points on the TM surface. The measured motions contained spatial phase variations consistent with relatively low-order (large spatial wavelength) modal motions and smaller-magnitude higher-order (smaller spatial wavelength) motions that appear to travel, but may also be explained by losses within the membrane. The measurement of shape together with thin-shell theory allowed us to separate the measured motions into components orthogonal to the plane of the tympanic ring and components within that plane, based on the 3D shape. The predicted in-plane motion components are generally smaller than the out-of-plane component of motion. Since the derivation of the in-plane and out-of-plane components depended primarily on the membrane shape, the relative sizes of the predicted motion components did not vary with frequency. Summary: A new method for simultaneously measuring the shape and sound-induced motion of the tympanic membrane is used to estimate the 3D motion of the membrane surface.
    Hearing Research 12/2012;
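    The in-plane/out-of-plane separation described in this abstract amounts to projecting each measured displacement vector onto the normal of the tympanic-ring plane. The sketch below is illustrative only, not the authors' code; the function name and example vectors are invented, and it assumes the ring-plane normal is already known.

    ```python
    import numpy as np

    def decompose_displacement(d, n):
        """Split a 3-D displacement d into the component along the
        ring-plane normal n (out-of-plane) and the remainder (in-plane)."""
        d = np.asarray(d, dtype=float)
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)          # ensure the normal is unit length
        out_of_plane = np.dot(d, n) * n    # projection onto the normal
        in_plane = d - out_of_plane        # remainder lies in the ring plane
        return out_of_plane, in_plane

    # Example: motion mostly along the normal, with a small tangential part
    out_p, in_p = decompose_displacement([0.1, 0.0, 1.0], [0.0, 0.0, 1.0])
    ```

    Because this decomposition depends only on geometry (the shape and the ring-plane normal), not on the stimulus, the relative sizes of the two components are frequency-independent, as the abstract notes.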
  • Hearing Research 03/2012;
  • Hearing Research 05/2010; 263(1–2):250–251.
  • Hearing Research 03/2010;
  • Hearing Research 01/2010; 263:251-251.
  • Hearing Research 01/2010; 263:239-239.