Hearing Research - Science topic

Questions related to Hearing Research
  • asked a question related to Hearing Research
Question
3 answers
Hi!
I've read in many articles that cochleograms are made from the basal, medial, and apical parts of the organ of Corti. I would like to know where the border lies between the basal and medial parts, and between the medial and apical parts, in the mouse. For example, is the first 40% of the basilar membrane the basal part, the next 30% the medial part, and the last 30% the apical part?
Does anyone know? I've tried to find this but haven't found an answer yet.
Relevant answer
Answer
I've found an article which can be useful:
Lin SC-Y, Thorne PR, Housley GD, Vlajkovic SM. Purinergic Signaling and Aminoglycoside Ototoxicity: The Opposing Roles of P1 (Adenosine) and P2 (ATP) Receptors on Cochlear Hair Cell Survival. Front Cell Neurosci. 2019. DOI: 10.3389/fncel.2019.00207
According to the authors of the article:
Apical part - 30%
Medial part - 40%
Basal part - 30%
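If it helps to apply those proportions in practice, here is a minimal Python sketch; it assumes position is expressed as a fraction of basilar-membrane length measured from the apex, and the function name and apex-referenced convention are illustrative choices, not something specified in the article.

    def cochlear_region(frac_from_apex):
        """Map fractional distance from the apex (0.0-1.0) to a region,
        using the 30/40/30 split reported by Lin et al. (2019)."""
        if not 0.0 <= frac_from_apex <= 1.0:
            raise ValueError("fraction must lie in [0, 1]")
        if frac_from_apex < 0.30:
            return "apical"
        if frac_from_apex < 0.70:  # 0.30 apical + 0.40 medial
            return "medial"
        return "basal"

    print(cochlear_region(0.5))  # -> "medial"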
  • asked a question related to Hearing Research
Question
28 answers
Many studies have reported an association between nutrition and human hearing loss, showing that the incidence of hearing loss increases with deficiencies in micronutrients such as vitamins A, B, C, and E, zinc, magnesium, selenium, and iron. Moreover, high carbohydrate, fat, and cholesterol intake, or low protein intake, is associated with poorer hearing status.
Dear colleagues, are there any further studies or experiences concerning the relation between nutrition and hearing loss?
Relevant answer
Answer
I think the following article will be useful.
  • asked a question related to Hearing Research
Question
2 answers
Hello everyone, I'm currently working on a project on hearing loss research in zebrafish. We need to use anti-vibration tables so that we can record clean auditory signals. Does anyone know what types and brands of anti-vibration tables are used in current auditory research? Thanks
Relevant answer
Answer
We've used TMC tables since 1978. They give good damping with no pendulum motion. Go for the biggest one you have space for; all of ours are 3' x 5' or larger.
  • asked a question related to Hearing Research
Question
19 answers
Hi, I have a 4-year-old patient. Her hearing loss was diagnosed a year and a half ago as severe to profound. Her parents claim that she was fine before and lost her hearing gradually. Since there was no previous hearing evaluation (not even a hearing screening at birth!), we cannot confirm that. She received a hearing aid and auditory rehabilitation right away. Since then she has had three sudden drops of hearing to a profound loss (the parents recognize this because she stops reacting to sound at all with her hearing aid).
The otologist prescribed corticosteroid therapy and ketotifen for two weeks in the first two episodes. She had a cold during one of them. She recovered after each episode. Today she came to me with the same problem (again a sudden drop to profound hearing loss and no reaction to sound).
What do you think the underlying cause is? (Something is wrong, for sure.)
Could it be an autoimmune disease? (She seems totally normal and her blood tests are normal.)
The parents ask me whether the problem is neural or cochlear. (How can I be sure?!)
The parents ask me whether a cochlear implant would resolve the problem.
Please help us. Thank you.
Relevant answer
Answer
Acoustic reflexes would likely not be present even in the case of a sensory loss, due to the elevated thresholds. I would suggest testing otoacoustic emissions (OAEs) and the auditory brainstem response (ABR), and trying to elicit an acoustic startle. I would suspect that the OAEs will be absent and the click-evoked ABR present with prolonged wave V latencies. You may get a startle reflex if you use a broadband stimulus of high enough intensity. If OAEs are present, you should perform electrocochleography looking for the cochlear microphonic, and suspect auditory neuropathy/dys-synchrony.
  • asked a question related to Hearing Research
Question
15 answers
Hi, I have noticed that electrocochleography shows endolymphatic hydrops in quite a large proportion of patients who suffer from vertigo. Sometimes this result is not accompanied by a low-tone loss. What could the reason be? Is it possible that a drug the patient has taken, or some other exogenous agent, produces temporary hydrops in the inner ear? Do you suggest any special diet before the EcochG test?
We have a lot of malingerers in our setting who pretend to suffer from vertigo. The internet is full of information about vertigo, so they can easily fake it. Many times VNG (videonystagmography) shows totally normal results while EcochG shows a high SP/AP ratio. How can we be sure this high SP/AP is indicative of Meniere's disease?
Relevant answer
Answer
"what medicction is most effective for meniere spectrum disorder now a days?"
One of the interesting observations by reliable observers that I picked up from reading the old literature is that the same drug or toxin that triggered Meniere's disease in some patients was used as a treatment for it by other doctors, eg quinine.  So there is unlikely to be a simple answer to this question of treatment. 
  • asked a question related to Hearing Research
Question
4 answers
It's a useful test in perilymphatic fistula cases, but can someone explain exactly how it works? Thanks.
Relevant answer
Answer
Yes sir. Thank you. I was asking the reason behind that...
  • asked a question related to Hearing Research
Question
3 answers
Hi all,
Is there a way to extract the frequency modulation spectrum of speech, just as we extract the amplitude modulation spectrum?
best
Nike
Relevant answer
Answer
Dear Nike,
Speech carries simultaneous spectral (frequency) and temporal (amplitude) modulation. When we compute a simple spectrum of speech, the spectral modulations are obscured by the temporal modulations.
If you are familiar with spectral ripple noise, it is an example of purely spectral modulation: its spectrum clearly shows periodic amplitude fluctuations across frequency. But when temporal modulation is added to spectral ripple noise, the spectrum looks flat. A spectrogram, on the other hand, can disentangle the two and display both.
So a simple spectrogram is one method for looking at frequency modulations in the speech signal. The Modulation Toolbox is MATLAB code developed specifically for independently varying the spectral and temporal modulations of a speech signal.
Vijay
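As a rough illustration of one common approach (a minimal Python/SciPy sketch, not the Modulation Toolbox itself): take the analytic signal of one band, use its envelope for the AM modulation spectrum and its instantaneous frequency for an FM counterpart. The band edges and filter order below are illustrative assumptions.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def band_modulation_spectra(x, fs, band=(800.0, 1200.0)):
        # Isolate one carrier band with a zero-phase Butterworth filter.
        sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
        xb = sosfiltfilt(sos, x)

        # Analytic signal: the envelope carries AM, the phase carries FM.
        z = hilbert(xb)
        env = np.abs(z)[:-1]  # trimmed to match the diff() below
        inst_freq = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)

        # Modulation spectra = spectra of the mean-removed contours.
        am_spec = np.abs(np.fft.rfft(env - env.mean()))
        fm_spec = np.abs(np.fft.rfft(inst_freq - inst_freq.mean()))
        mod_freqs = np.fft.rfftfreq(env.size, 1.0 / fs)
        return mod_freqs, am_spec, fm_spec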
  • asked a question related to Hearing Research
Question
3 answers
My mice (10 weeks, male/female, BL6 background) show a decreased startle response in the acoustic startle paradigm.
How can I test hearing capability with a non-invasive, non-cognition-based approach?
Thanks in advance, regards, Roland
Relevant answer
Answer
Hello Roland,
I think "hearing capability" may have more than one interpretation, but have you considered ABR or DPOAE measures?  Both of these measures would require anesthesia and are minimally invasive.  
Dan
  • asked a question related to Hearing Research
Question
3 answers
Would we expect a high-frequency adapter to have any effect on a low-frequency target in a horizontal localization task? I mean in comparison with a low-frequency adapter and a low-frequency target. Does anybody know of any studies on these types of interaction? Sometimes a clue close enough to the topic can point me in the right direction if I follow the breadcrumbs.
Thank you very much in advance
Relevant answer
Answer
Hi Jonas, thank you for your answer. It certainly is close to the topic. I think I found what I was looking for. Actually, it was a study by one of the researchers here at the IHR: 
Briley PM, Krumbholz K. The specificity of stimulus-specific adaptation in human auditory cortex increases with repeated exposure to the adapting stimulus. J Neurophysiol [Internet]. 2013;110(12):2679–88. Available from: http://www.ncbi.nlm.nih.gov/pubmed/24047909
This study was able to show adaptation as a function of adapter-probe frequency separation. Adaptation is strongest when there is no Δf, but it is still present (although to a lesser degree; about 50% adaptation) when the Δf is as large as 1.5 octaves.
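For reference, the frequency separation in octaves used here is just the base-2 log of the frequency ratio; a quick check (the example frequencies are arbitrary illustrations):

    from math import log2

    def octave_separation(f1_hz, f2_hz):
        """Adapter-probe separation in octaves."""
        return abs(log2(f2_hz / f1_hz))

    print(octave_separation(500.0, 1414.0))  # ~1.5 octaves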
Hope this is useful for others.
Kind regards
Nuno
  • asked a question related to Hearing Research
Question
6 answers
What kind of AEP do you suggest?
Relevant answer
Answer
Tone-burst ABR and ASSR, both by air and bone conduction. In addition, it is vitally important to know that performing any AEP on a child has to be done carefully, with the correct settings (not necessarily the ones programmed by the manufacturer) and with very careful interpretation of the results. All this should be preceded by tympanometry and acoustic reflex testing, all the while ensuring that the results from each component of the test battery correlate with the AEP results. Behavioural testing and parent reports of the child's responses to sound are essential to ensure no errors have been made in the AEP testing.
  • asked a question related to Hearing Research
Question
5 answers
I am asking this because I am wondering why CIs use such high stimulation rates. If there is a limit of 5000 spikes/s, then we do not need such high pulse rates in cochlear implants. I have looked into this and found two opinions: one says the maximum rate is 1000 spikes/s, the other 5000 spikes/s. The latter comes from Rutherford's frequency theory of hearing. We know this theory is wrong, because above 5000 Hz it cannot work, given the maximum response rate of the cochlea.
Relevant answer
Answer
Based on my personal experience recording single units, the maximum sustained firing rate of individual cochlear neurons (in mammalian species) rarely exceeds 200 spikes/s. There may be some pathological conditions (e.g. cochlear excitotoxicity) in which refractory-period limitations are minimal. Kiang reported modal inter-spike intervals of 4-7 ms, equivalent to rates of about 150-250 spikes/s.
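The conversion behind those numbers is simply the reciprocal of the interval, rate (spikes/s) = 1000 / ISI (ms); a quick check:

    # Modal ISIs of 4-7 ms correspond to roughly 140-250 spikes/s,
    # in line with the 150-250 range quoted above.
    for isi_ms in (4.0, 5.0, 7.0):
        print(f"ISI {isi_ms} ms -> {1000.0 / isi_ms:.0f} spikes/s")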
  • asked a question related to Hearing Research
Question
7 answers
I need to use this in a research project, and I would appreciate any referrals.  
Thank you in advance. 
Relevant answer
Answer
What degree of unilateral hearing loss are you considering? I have seen losses ranging from that described above to total unilateral deafness. The trick is to make sure that you have used enough contralateral masking of the better ear, so that the responses you are seeing are indeed those of the poorer (test) ear and not responses to a cross-over signal heard in the better ear.
  • asked a question related to Hearing Research
Question
4 answers
Also, how reliable are BIC responses? Stollman et al. (1996) mention a detection rate of 95-97%. Any personal or clinical experiences?
Relevant answer
Answer
What do you mean by "reliable"? If by reliable you mean present and easily detectable in all normal listeners, a main issue with the ABR BIC is that it is small in amplitude and sometimes hard to see in background noise, especially since the subtraction procedure increases the waveform noise by a factor of about 1.4 (the square root of 2). One must ensure that sufficient trials are recorded so that the waveform noise is one-third to at most one-half the amplitude of the BIC.
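The 1.4x figure follows from independent noise adding in power, so amplitudes grow as the root of the sum of squares. A small simulation of the sqrt(2) growth when two equally noisy averaged waveforms are subtracted (trial counts and noise levels are arbitrary illustrations):

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_samples = 4000, 512

    # Two independent noise-only averaged "waveforms".
    a = rng.normal(0.0, 1.0, (n_trials, n_samples)).mean(axis=0)
    b = rng.normal(0.0, 1.0, (n_trials, n_samples)).mean(axis=0)

    # Their difference has ~sqrt(2) times the noise amplitude of either one.
    print(np.std(a - b) / np.std(a))  # close to 1.41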
As already noted, BICs for MLRs and especially N1-P2 are much larger in amplitude. Numerous pubs have shown this, for example:
Picton, T. W., Rodriguez, R. T., Linden, R. D., & Maiste, A. C. (1985). The neurophysiology of human hearing. Human Communication Canada, 9, 127-136
McPherson, D. L., & Starr, A. (1993). Binaural interaction in auditory evoked potentials: brainstem, middle- and long-latency components. Hearing Research, 66, 91-98.
Fowler, C. G., & Mikami, C. M. (1996). Phase effects on the middle and late auditory evoked potentials. Journal of the American Academy of Audiology, 7, 23-30.
If, however, by "reliable" you mean how well it distinguishes normal from impaired binaural processing, then there are few data concerning this, and certainly far too few for it to be used clinically. See for example:
Levine, R. A., Gardner, J. C., Fullerton, B. C., et al. (1993). Effects of multiple sclerosis brainstem lesions on sound lateralization and brainstem auditory evoked potentials. Hearing Research, 68, 73-88.
Pratt, H., Polyakov, A., Ahronson, V., et al. (1998). Effects of localized pontine lesions on auditory brain-stem evoked potential and binaural processing in humans. Electroencephalography and clinical neurophysiology, 108, 511-520.
Delb, W., Strauss, D. J., Hohenberg, G., & Plinkert, P. K. (2003). The binaural interaction component (BIC) in children with central auditory processing disorders (CAPD). [Comparative Study]. Int J Audiol, 42(7), 401-412.
He, S., Brown, C. J., & Abbas, P. J. (2012). Preliminary results of the relationship between the binaural interaction component of the electrically evoked auditory brainstem response and interaural pitch comparisons in bilateral cochlear implant recipients. Ear Hear, 33(1), 57-68. doi: 10.1097/AUD.0b013e31822519ef
Hope this helps.
  • asked a question related to Hearing Research
Question
9 answers
language, communication, emotion recognition, and empathy   
Relevant answer
Answer
For empathy, you could look at what these people did: Empathy Development in Deaf Preadolescents.
I don't know about the validity of their results, but they state:  "The results demonstrate that deaf preadolescents have more difficulty with empathy development than hearing children, and this ability is related to onset of deafness."
  • asked a question related to Hearing Research
Question
6 answers
There is a current discussion about how to define and measure listening effort. I stumbled upon a measure called the 'acceptable noise level' (ANL; see e.g. Nabelek et al., 2006). To what extent does the research community think that the ANL is associated with listening effort? Some researchers have already answered me that listening effort has nothing to do with "the comfort of listening", but I am not quite sure I would agree.
Relevant answer
Answer
To the best of my knowledge, no one has yet tested whether ANLs are related to listening effort. I know of only two papers (both very recent) that have investigated the cue underlying ANLs (below). For most people, it is still unclear what criterion they use to determine their ANLs. I think testing whether there is a relationship between ANLs and listening effort is the next step in this field.
Gordon-Hickey, S., Morlas, H. (2015). Speech recognition at the Acceptable Noise Level. J Am Acad Audiol, 26, 433-450.
Recker, K., McKinney, M. F., Edwards, B. (2014). Loudness as a cue for acceptable noise levels. J Am Acad Audiol, 25, 605-623.
  • asked a question related to Hearing Research
Question
4 answers
One of my students is doing research with mothers of hard-of-hearing children and their children. We want to use collaborative storytelling and dialogic reading.
Relevant answer
Answer
The staff of the Shared Reading Project at Gallaudet should be able to provide useful information: https://www.gallaudet.edu/clerc-center/information-and-resources/training-and-technical-assistance/workshops-and-training-institutes/srp-workshop-details.html
There is a Shared Reading Project site in our area and the families rave about it (whether their children have auditory access or not).
  • asked a question related to Hearing Research
Question
9 answers
In my institute (KAUH, KSU, Riyadh, KSA), while working with my colleagues, the CI surgeons Prof. Al-Muhaimeed and Prof. Attallah (among the most senior and best CI surgeons in KSA), we encountered difficult cochleostomies despite a normal, patent cochlea confirmed by preoperative temporal-bone CT.
In 2009, to my knowledge, the colleagues mentioned above were the first worldwide to explain this dilemma, describing a tilted (rotated) cochlea in their published article:
"Al-Muhaimeed HS, Al-Anazy F, Attallah MS, Hamed O. Cochlear implantation at King Abdulaziz University Hospital, Riyadh, Saudi Arabia: a 12-year experience. J Laryngol Otol 2009; 123:e20."
In 2010, Lloyd et al. proposed a predictive tool for diagnosing a rotated cochlea on preoperative axial temporal-bone CT by measuring the cochlear basal turn angle (BTA):
"Lloyd SK, Kasbekar AV, Kenway B, Prevost T, Hockman M, Beale T, Graham J. Developmental changes in cochlear orientation - implications for cochlear implantation. Otol Neurotol 2010; 31:902-907."
In 2015, to my knowledge, my colleague Prof. Al-Muhaimeed HS and I (Abdelwahed HY) were the first worldwide to investigate retro-prospectively the predictive tool of Lloyd et al. (the BTA); we found it to be a useful indicator and suggested a solution to make cochleostomy easier in such difficult cases, as described in our published article:
"Al-Muhaimeed HS, Abdelwahed HY. Difficult cochleostomy in the normal cochlea. Egypt J Otolaryngol 2015 Jul; 31(3):149-155. DOI: 10.4103/1012-5574.159791. Source: http://www.ejo.eg.net/preprintarticle.asp?id=159791."
I hope CI surgeons worldwide will share their valuable comments and experiences regarding this important topic.
Relevant answer
Answer
Dear colleagues,  
A few years ago I faced exactly the same problem in one implantation, after more than 18 years of experience.
After drilling longer than expected in the usual position and angle for the cochleostomy, some bleeding appeared and I had to end the surgery without implanting. The cochlea was patent on the preoperative CT. After a postoperative CT, we realized that the cochlea was slightly rotated; the direction of drilling had been inappropriately tangential and had reached the carotid. One week later, in a new surgery, I thinned the promontory, exposed the stria line, and proceeded to implant easily.
Fortunately (for the attendants), it happened during our temporal dissection and otosurgery, and I now offer my experience in case it is useful to any colleague.
Prof.Dr Rafael Urquiza MD PhD
University of Malaga. Spain
  • asked a question related to Hearing Research
Question
14 answers
What is the advantage of using dB nHL over dB peSPL? From the description of dB nHL, I understand that it is calculated by taking the difference between dB peSPL and the behavioural threshold at one repetition rate. If we calculate it at one rate, how has this value been generalized to other repetition rates (30.1/s, 90.1/s)? From psychoacoustics it is understood that behavioural thresholds are better at the higher rate (90.1/s) than at lower rates. Are there any standards that specify which rate should be used, and why?
Relevant answer
Answer
This is an interesting question. I have been "involved" with this topic since the 1980s, thus my answer is somewhat long, and certainly reflects my personal opinion. However, any answer to this question is not without controversy.
As noted above, the primary reason for using "dBnHL" is to account for the differences between behavioural thresholds at each frequency, so that "0 dBnHL" represents the median or mean threshold for normal adults. This is the same concept as "dB HL", which uses (nearly) continuous tones. As also noted above, however, in contrast with behavioural HL, even with the "nHL" correction, one still has to apply "eHL" correction factors reflecting  differences between AEP (usually ABR) thresholds and behavioural thresholds; correction factors which are different for infants compared to adults.
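As a concrete illustration of that conversion, here is a minimal sketch; the offsets below are hypothetical placeholders, not published calibration values, and the stimulus names are illustrative:

    # dB nHL = dB peSPL minus the normal-hearing reference threshold
    # (in dB peSPL) determined for that specific stimulus and rate.
    REF_PESPL_AT_0_NHL = {  # hypothetical offsets, for illustration only
        "click": 30.0,
        "tone_500Hz_5cyc": 25.0,
    }

    def pespl_to_nhl(level_pespl, stimulus):
        return level_pespl - REF_PESPL_AT_0_NHL[stimulus]

    print(pespl_to_nhl(65.0, "click"))  # 65 dB peSPL -> 35 dB nHL here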
Unfortunately, it is not so simple as dBHL, because "dBnHL" has had to be calculated for so many different stimuli (e.g., clicks for ABR, 5-cycle brief tones for ABR, chirp stimuli for ABR or ASSR, 40-60-ms tones for corticals). Complicating this issue further is that different presentation rates are used -- behavioural thresholds change with rate (even though AEP, especially ABR, thresholds do not show such changes with rate). Finally, and very importantly, the physical calibration of non-continuous tones is a little more complicated; most clinicians and many researchers, and, indeed, many audiometric calibration technicians, do not know how to calibrate these stimuli.
My colleagues and I have been pretty adamant that individual clinics should not typically determine their own "nHL" values, as this requires large numbers of normal subjects, careful threshold procedures, and quiet test rooms. Rather, assuming they are using stimuli essentially identical to others' studies which investigated nHLs (or a "standard" such as the ISO standard), they should have their stimuli carefully calibrated acoustically to the previously published nHL calibration standard.
In our previous studies, we determined nHL calibrations for all stimuli using a 10/s stimulus rate, regardless of the actual rate used to record the AEP in the study (we did this because the ABR and MLR do not show better thresholds with increasing rates, whereas behavioural thresholds do). The ISO standard mentioned above by Stig Arlinger states that a 20/s rate should be used when determining nHL calibrations -- I am not sure why they selected 20/s, but the difference in threshold would be very small (only 1-2 dB) (Stapells et al., JASA, 1982).
The above confusing situation is made worse by the fact that there is not yet agreement as to formal RETSPLs/RETFLs (i.e., "standards")  even for the most commonly used stimuli for ABR testing. As Stig Arlinger has noted above, there is an ISO standard for brief stimuli, specifically clicks and brief-tone stimuli. However, these "standards" are not fully accepted, especially for tonal stimuli (e.g., they are not commonly used in North America). They tended to ignore or discredit substantial preceding research into nHL thresholds and, in the case of tonal nHLs, they have some significant differences. (Oddly, the reference force levels for bone conduction in these "standards" were estimated from pure-tone standards, rather than directly determined.) 
The move towards  "standard" reference thresholds is important, but it must also take into account previous research and consider differences. My greatest concern is that the large majority of research into adult and, importantly,  infant tone-ABR thresholds has been carried-out using nHL values that are different from the ISO standard -- any possible move to a different standard must consider how this relates to the results (and interpretation) of the past and future studies.
Problems will still remain even if a standard becomes widely accepted worldwide. Every time a new or different stimulus comes out (e.g., chirp stimuli), no standard will exist, and some sort of nHL needed.
Lately, I have wondered if we should never have moved to developing "nHL". Rather, perhaps we should have just used well-established pure-tone "HL" calibrations, and then determined what are the normal ABR (or MLR or ASSR) offsets/corrections. (This would eliminate problems with calibration, but would still entail questions with determining the offsets/corrections.) This is essentially what most researchers carrying out threshold assessments using the ASSR or CAEP (cortical auditory evoked potential) have resorted to.
Given the current situation, although not all would agree with me, I recommend using the nHL calibrations from the study(ies) one is trying to emulate. For example, if you are testing infant ABR thresholds using the stimuli and parameters I have recommended, then you would use the nHL calibrations I have published. On the other hand, if you are using Michael Gorga's stimuli and parameters, then use his published calibrations. I expect, with time, there will be more movement towards the use of standard reference thresholds as we understand/explain current differences.
  • asked a question related to Hearing Research
Question
11 answers
This question aims to point out the difficulties that hearing-impaired people face in social life, despite all the efforts made and the different approaches taken to help them, and it emphasizes the necessity of self-reliance of the hearing impaired in social life and their substantial participation in the sustainable development of the community.
Relevant answer
Answer
Describing normality is difficult in all scientific branches. I like the description of "normal" in geometry for explaining to students what the concept of normality means:
"In geometry, a normal is an object such as a line or vector that is perpendicular to a given object."
That means you first need a "reference" to describe the normal; in other words, it can be subjective depending on your reference, or, to put it another way, something may be "normal" from only one angle of view.
So I agree with Ms Garcia that, in the human sciences, we should completely avoid the word "normal", and also "the handicapped".
  • asked a question related to Hearing Research
Question
5 answers
I am researching the correlation between development indices (e.g., HDI, IHDI, and Gini) and the percentage of Deaf people in the total population.
Relevant answer
Answer
Unfortunately, the official data in our country also cover not the Deaf specifically (users of sign language who live in a community centred on sign language) but everyone with hearing problems at any age.
Over several years I published a few studies in Turkish trying to establish the size of the real Deaf population in Turkey. Only one, which is also available on ResearchGate ("The history of sign language and deaf education in Turkey"), is in English, but that paper mostly concerns the problems of the Deaf in Turkey and their history in and around Anatolia. It mentions none of the development indices, only their current problems in education (schooling, special education, university education, etc.).
Pinar Yaprak Kemaloglu and I have other work, so far presented only as conference papers, about "social institutions" (available on ResearchGate: "An Investigation of the Social Institutions Regarding Deaf Citizens in Turkiye") and about "sports". Some data on problems and proposals in higher-education settings for Deaf youth, taken from the "E-işit" project (supported by the World Bank a few years ago), are also on ResearchGate as a conference paper. I recently published another paper, also in Turkish, on the necessity of sign language in medical services ("Dysability, Otorhinolaryngologic Practice and Sign Language").
A book about sign language research in Turkey has recently been prepared by researchers here (not yet published; editor: E. Arik), in which I and other researchers point out many aspects of Turkish Deaf society.
I am now about to set up a study on the medical problems and health status of the Deaf, to be conducted in sign language through the Deaf societies in Turkey.
  • asked a question related to Hearing Research
Question
18 answers
What does the term "contralateral reflex" mean with respect to the right ear in acoustic stapedial reflex measurements in routine clinical practice?
Since there are two different views regarding which ear is the stimulus ear and which is the probe ear, please specify.
And while testing reflex decay in the right ear, which contralateral reflex threshold should be used?
Relevant answer
Answer
There are two conventions for labelling the contralateral acoustic reflex. One defines "contra" with reference to the probe ear (right contra: probe in the left ear and stimulus in the right ear). Others define "contra" with reference to the stimulus ear (i.e. right contra: stimulus presented in the left ear). What is usually specified in textbooks is "contra" with reference to the probe ear.
Vijay
  • asked a question related to Hearing Research
Question
10 answers
I am looking to construct a similar [speech banana] plot on an audiogram for counseling, but I would like publication-based data for the plot, such as the frequency and intensity ranges of consonants and vowels at a 'normal' conversational level.
Relevant answer
Answer
This is an important fact to know: all of us are aware of the speech banana, but most of us do not know the absolute average levels of speech sounds. The best way to understand this is to use LTASS measures and to examine the amplitude and frequency of the speech sounds that make up the speech spectrum.
An excellent review article, which explains the origin of these values and provides the levels of the speech spectrum, was published in Ear and Hearing by Olsen, Hawkins and Van Tasell (1987). Please find the link.
Hope this information is useful
Regards,
Prashanth
  • asked a question related to Hearing Research
Question
1 answer
I was wondering if anyone has done (or knows of) research into inferential learning in cochlear implantees. My current understanding is that most of the literature says deaf children (and, by implication, adults) cannot overhear conversations in which they are not direct participants, and so miss out on information that hearing people have access to. Most of the literature I've seen is from the educational field, so it suggests teaching methods to help mitigate the impact of not being able to do this. Now, through personal experience I have met some young children who were implanted early enough that they can 'overhear'. Also, as a recent (5 years ago) implantee myself, I'm beginning to find I can do this sometimes, but it's more of a 'cocktail party effect' than true 'overhearing': the words/sentences have to be very salient or obvious in a linguistic way (i.e. no other possibilities).
I am wondering whether this is an ability that could be trained, and if so, how that would even be attempted. I see some parallels with divided-attention topics in cognitive psychology, and I am thinking about making this my topic for my MSc in 2014, but I wonder if it might be a wee bit too large in scope. I am not even sure how to measure it at this point.
Relevant answer
Answer
You should do it; it would be an excellent topic for your MSc. As for measuring it, perhaps set up something like a scripted dialogue with a speaker talking to another person on the telephone, and see how much of the conversation the implantee can pick up? That's how I "train" myself for overhearing: I just eavesdrop on other family members' telephone conversations and try to make out as much as I can.
  • asked a question related to Hearing Research
Question
20 answers
For example, in spaces with different temperatures:
Sound speed at 10 degrees Celsius and 50% relative humidity = 337 m/s.
Sound speed at 40 degrees Celsius and 50% relative humidity = 356 m/s.
If we have calibrated objects vibrating at 1000 cycles per second in the cool room and the hot room, the wavelengths are 33.7 cm and 35.6 cm respectively. If sound speed were a constant 343 m/s, these wavelengths would equate to frequencies of about 1017.8 Hz and 963.5 Hz, definitely a perceivable difference. However, since the temperature differs, these different wavelengths both correspond to 1000 Hz at the ear. If we perceive pitch from frequency, these conditions will be heard as the same; but if we perceive pitch from wavelength, they will be heard differently. It seems that perception of wavelength would necessarily be binaural, since at a single ear the coding is only frequency dependent.
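A quick numerical check of the figures above, using the common approximation c ≈ 331.3 + 0.606·T m/s for air (humidity adds a small correction ignored here):

    def wavelength_cm(temp_c, freq_hz):
        c = 331.3 + 0.606 * temp_c  # approximate speed of sound in air, m/s
        return 100.0 * c / freq_hz

    print(wavelength_cm(10.0, 1000.0))  # ~33.7 cm in the cool room
    print(wavelength_cm(40.0, 1000.0))  # ~35.6 cm in the hot room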
Can we hear the difference between the same tone in a cold space and a hot space?
Relevant answer
Answer
The sound source produces, in this example, 1000 periods per second, no more, no less, independent of the medium. The medium can only influence the propagation speed, and as a consequence the wavelength (which is just a portion of the distance between source and ear) depends on the medium as well. At the receiver side, the eardrum vibrates according to the air (or water) pressure fluctuations, and the duration of one period remains a constant 1 ms.
If you heard, say, a higher tone, where would all the extra periods come from? Not from the source, not from some 'memory' in the medium, and not from changes in the distance between source and eardrum (as would be the case with the Doppler effect). The only things that depend on the medium are the delay (which is constant and does not alter the frequency) and the spatial impression (a reverberation effect, which also does not influence the frequency).
So from the eardrum vibrations to the auditory nerve fibres at the basilar membrane, everything remains the same (even the temperature inside the ear is constant, so even the wavelength within the auditory system does not depend on the medium outside the ear). The period remains exactly 1 ms in all circumstances, because nothing produces extra periods within a given time interval and nothing 'eats' periods. The tone will be exactly 1000 Hz, in air and in water. You can test it with an (underwater) speaker and microphone if you like.
  • asked a question related to Hearing Research
Question
20 answers
The wavelengths of the same frequency are substantially different in air and water. Do humans utilize wavelength in auditory pitch perception? This would also apply to rooms/places with substantially different air temperatures, and has implications for understanding auditory localization.
Relevant answer
Answer
As others have already said: the medium only influences the propagation SPEED, not the frequency of the source. The number of periods per second is defined by the source. The wavelength depends on the medium, which follows from the constant frequency: wavelength = propagation speed / f. The sound waves in water activate the eardrums, and we hear exactly the same frequency as the source produces. Reverberation in water is another story: because the propagation speed in water is about four times that in air, the spatial impression is much smaller than for the same space in air. Some people know that if you fill your lungs with helium, your speech sounds like Donald Duck. That is because the sound SOURCE is altered by the higher propagation speed in helium in your throat and mouth cavities (the vocal tract). In this case the production of the sound depends on the gas in the vocal tract: the resonances (formants) depend on the gas present.
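To see roughly how the helium effect works, treat a neutral vocal tract as a quarter-wave resonator with formants F_n ≈ (2n-1)·c/(4L); the tract length and sound speeds below are textbook approximations, so this is only an order-of-magnitude sketch:

    L = 0.17  # approximate adult vocal-tract length, m
    for gas, c in (("air", 350.0), ("helium", 970.0)):
        formants = [(2 * n - 1) * c / (4 * L) for n in (1, 2, 3)]
        print(gas, [round(f) for f in formants])
    # air    -> ~515, 1544, 2574 Hz (near the textbook 500/1500/2500)
    # helium -> all formants scaled up by ~2.8x, hence the Donald Duck voice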
  • asked a question related to Hearing Research
Question
2 answers
Humans aren’t the only ones who lose their hearing as they grow older. Scientists report that wild Indo-Pacific humpback dolphins (Sousa chinensis), which can live 40-plus years, also have trouble picking up sounds as they age.
Relevant answer
Answer
As far as bottlenose dolphins go, the following paper may help answer your question; it also appears that male bottlenose dolphins can experience age-related hearing loss.
Dr. Darlene Ketten (now at the Centre for Marine Science and Technology, Curtin University of Technology, Perth, Australia) has done some work in this area and might be able to help further if you contact her directly.