Auditory Neuroscience - Science topic
Questions related to Auditory Neuroscience
  • asked a question related to Auditory Neuroscience
Question
1 answer
By studying articles related to rTMS, I realized that different frequencies have been used for stimulation to treat tinnitus, with the use of 1Hz being the most common.
So, the question arose for me on what basis the frequency of rTMS is selected.
Relevant answer
Answer
For rTMS in tinnitus, the frequency must be chosen so that the relevant cortex is actually stimulated. If too low a frequency is selected, it will not stimulate the specific target, e.g., the temporal lobe; given the relative thickness of the temporal bone, a frequency of around 1 Hz is what reaches it.
If we set a higher frequency, it would overstimulate the cortex, which can make the tinnitus worse.
So 1 Hz is traditionally used, with variable results. Further research is needed to reduce the variability in outcome and to test whether this explanation is correct.
  • asked a question related to Auditory Neuroscience
Question
2 answers
When finding the detection threshold, is it easier or faster in any of the cases? Is there a difference in the steepness of the psychometric curve? In my case, I am looking at how different conditions influence the detection accuracy. I'm flexible in the choice of method.
Thanks!
Relevant answer
Answer
By embedding the tone in noise, you can reduce (unwanted) effects of external and internal noise. There is always external noise in your measurement environment, and it is difficult to remove it completely. There is also always internal noise, coming from inside the listener. You can control both by adding masking noise of sufficient intensity. In theory, the slope of the psychometric function becomes shallower when the noise is added, since the slope reflects the noise variance.
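The noise-variance/slope relation mentioned above can be illustrated with a cumulative-Gaussian psychometric function. This is a standard signal-detection assumption, not something specific to this answer, and the sigma values below are arbitrary:

```python
import math

def psychometric(level, noise_sigma, threshold=0.0):
    """Detection probability as a cumulative Gaussian of signal level.

    noise_sigma lumps external and internal noise together; a larger
    sigma produces a shallower psychometric function.
    """
    z = (level - threshold) / noise_sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def slope_at_threshold(noise_sigma):
    # Derivative of the cumulative Gaussian at its midpoint:
    # inversely proportional to the noise standard deviation.
    return 1.0 / (noise_sigma * math.sqrt(2.0 * math.pi))

slope_quiet = slope_at_threshold(1.0)   # little masking noise
slope_noisy = slope_at_threshold(3.0)   # strong masking noise added
```

With three times the noise, the slope at threshold drops by a factor of three, which is the shallowing the answer describes.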
  • asked a question related to Auditory Neuroscience
Question
3 answers
I want to know if there is any possibility of sensing sound by some kind of non-mechanoreceptor. Any evidence in any organism would be useful.
Relevant answer
Answer
To the best of my knowledge, the answer is no.
Auditory processing, or any response to air vibration, always requires a step based on mechanoreceptors and a mechano-electrical transduction process.
  • asked a question related to Auditory Neuroscience
Question
10 answers
If the response evoked by sounds presented to both ears is subtracted from the sum of the responses evoked by sounds presented to the left and the right ear (L + R - B), the binaural interaction component (BIC) is derived. It seems to be related to spatial localization.
An explanation of this derived BIC is (Pratt, 2011):
L = activity of left monaural neurons + activity of binaural neurons
R = activity of right monaural neurons + activity of binaural neurons
B = activity of left monaural neurons + activity of right monaural neurons + activity of binaural neurons
L + R = activity of left monaural neurons + activity of right monaural neurons + 2 * activity of binaural neurons
BIC = L + R - B = activity of binaural neurons
[left monaural neuron: a neuron capable of responding only to left ear stimulation
right monaural neuron: a neuron capable of responding only to right ear stimulation
binaural neuron: a neuron capable of responding to stimulation at either ear]
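Under this additive model, the subtraction cancels both monaural terms and leaves only the binaural one, as a toy calculation shows (all activity values are made-up illustrative numbers, not measurements):

```python
# Toy numbers (arbitrary units) for the additive BIC model above.
left_monaural = 3.0    # activity driven only by the left ear
right_monaural = 2.5   # activity driven only by the right ear
binaural = 4.0         # activity of neurons responsive to either ear

L = left_monaural + binaural                      # left-ear response
R = right_monaural + binaural                     # right-ear response
B = left_monaural + right_monaural + binaural     # binaural response

BIC = L + R - B        # monaural terms cancel; binaural activity remains
```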
I feel this explanation is oversimplified. Is there a more complicated but reasonable model of the BIC? Where and how is binaural interaction produced? Are there monaural neurons and binaural neurons at every level of the auditory system (AN, CN, SOC, LL, IC, MGB)?
Relevant answer
Answer
I don't have a clear answer to the BIC question. However, regarding the latter part of the question about the existence of binaural neurons at different levels in the auditory system, have you seen the Jeffress model?
Regarding the BIC, in the scalp recorded AEP, I have written a manuscript related to it. If you are interested, I can send it to you.
  • asked a question related to Auditory Neuroscience
Question
2 answers
There are two ways of doing the Rinne test. Which one is preferable, and why?
Relevant answer
Answer
In the threshold comparison method, the subject has to decide when they stop hearing the tone. This is often not very accurate, and it can be difficult to execute in elderly, very young, or less cooperative subjects.
The loudness comparison method, on the other hand, is more straightforward: the subject only has to report which presentation is louder.
So, I think, this may be the reason why the loudness comparison method is preferred over the threshold comparison method.
  • asked a question related to Auditory Neuroscience
Question
11 answers
Does knowledge of mathematics and acoustics help music creation? Is it harmful to the artist's soul and emotions?
Relevant answer
Answer
Knowledge of acoustics and mathematics is important for a musician.
It allows one to understand and explain a series of phenomena associated with the perception of sound/music, musical performance and musical composition.
Depending on the type of musical composition, mathematical knowledge can even be associated with pure inspiration.
  • asked a question related to Auditory Neuroscience
Question
4 answers
I am now analyzing some event-related potential (ERP) data in the auditory modality. There were several stimulus sounds, and for every subject the ERPs were recorded under three experimental conditions: left-ear, right-ear, and binaural stimulation. I’d like to see whether the brain activities underlying an ERP component of interest show lateralization to one side as the stimulus sound varies. (e.g. left-lateralized activities for linguistic sounds).
By visual inspection, the ERP topographies under binaural stimulation seem symmetrical according to the central line for some stimulus sounds but lateralized to one side for others. I am wondering whether there is a quantitative measure of topographic lateralization.
Also, I am wondering, if I only had the data under left-ear and right-ear monaural stimulation, would it be possible for me to assess the lateralization introduced by stimulus features? One problem I’m worried about is that the lateralization in the topographies under the conditions where stimuli were presented to one side may be attributed to the monaural stimulation instead of to stimulus feature.
Thank you.
Relevant answer
Answer
@Stephen Politzer-Ahles Thank you very much for the great advice.
  • asked a question related to Auditory Neuroscience
Question
6 answers
Does anyone know of any sources to check the relative frequency of various consonant places of articulation in word-initial position in English (or any other language)?
In other words, what percentage of word-initial consonants in English are coronal, labial, dorsal, etc.?
Relevant answer
Answer
If you can find a database of English words in IPA and import it into SIL's Phonology Assistant program (https://www.sil.org/resources/software_fonts/phonology-assistant, it's free), you could easily answer your question and also look at it from multiple angles.
If you can't find a database in IPA, you could import the CMU corpus above, but you would have to define the phonological features of each of the graphemes and digraphs. It would be a little bit of work, but not too much. You could then easily compare your results against a token frequency list such as the one found at http://www.wordfrequency.info/
  • asked a question related to Auditory Neuroscience
Question
8 answers
Hello.
I'm setting up auditory fear conditioning, and I wonder how I can measure the level of the tone used as the conditioned stimulus. I want a 75-dB tone and have a sound level meter.
I am not sure where I need to place the meter in the context
to adjust the tone to 75 dB. Near the speaker? On the floor? In the middle? The speaker is on the right wall of a square-shaped context, and if I want to run fear extinction in a different, octagon-shaped context, I need to adjust the tone again for the new context, right? In this case, where do I place the meter?
Thanks for reading and I'll be waiting for your tips.
Relevant answer
Answer
It would help to know more about your equipment, e.g., whether your microphone system has a probe tube on it. In essence you want to put the microphone at the position of the eardrum, which can be done with a probe tube and an anesthetized animal. Otherwise measure with the microphone at the position where the animal's eardrum(s) will be during the procedure. You also need to be sure that you are using a free-field calibrated microphone. If you change the setup you need to re-check the calibration. Make sure the space around the animal is free of hard, sound-reflecting surfaces or cover such things with soft cloth.
  • asked a question related to Auditory Neuroscience
Question
1 answer
I am working on mouse brain coronal sections, quantifying activated cells in several areas by c-Fos staining. I am having a hard time doing this for the auditory cortex. Does anyone know of an auditory cortex marker that I might use in a double staining, or any other way to delineate the region?
Thank you for your time!! :)
Relevant answer
Answer
I don't have the answer, just some leads. Perhaps you should look at the work of Zilles' team in humans; staining for certain receptors or for myelin (Weigert?) could work (primary auditory cortex is highly myelinated). From Moerel et al. (2014): "Zilles et al. (2002) mapped the human cortex based on multiple transmitter receptors (Zilles et al., 2002; Morosan et al., 2005). They found that human PAC contained a high density of cholinergic muscarinic M2 and nicotinic receptors, most densely expressed in middle cortical layers. Both M2 and nicotinic receptor density sharply dropped at the lateral border of PAC with the belt (Clarke and Morosan, 2012)."
Another lead would be to look at studies on the organization of auditory cortical fields in rodents; they should use stainings of interest to you.
  • asked a question related to Auditory Neuroscience
Question
7 answers
I am writing my dissertation on the correlation between sound design and viewers' sense of discomfort during tension-building scenes in different film genres and therefore need to examine the psychophysical principles and theories which relate to it. Can anyone point me in the right direction?
Relevant answer
Answer
Catherine Best, Catherine Stevens, Rick Van der Zwan (combines visual and auditory psychophysics). Just to name a few.
  • asked a question related to Auditory Neuroscience
Question
1 answer
Can pressure induce electrical effects like depolarization? I'm wondering whether the unborn can process sound information via acoustic pressure pulses coupled to voltage pulses propagated by gap junctions. What do you think?
Relevant answer
Answer
Pressure definitely has an effect on cells, and the level in decibels also matters (ultrasound waves can affect the growth of cells).
  • asked a question related to Auditory Neuroscience
Question
7 answers
Mainly from an auditory neuroscience / psycho-acoustics perspective.
Relevant answer
Answer
I use the following example in my teaching (mainly to demonstrate how awesome human hearing is): when you walk down a street and you hear the sound of (for example) an acoustic guitar being played (not too badly), you are generally able to hear whether someone is playing an actual guitar or playing back a recording of a guitar, for the very reason Dr. Mannis suggests. Our psychoacoustic systems pick up so many precise directional cues that we can easily identify the predictable response of a loudspeaker in a certain acoustic condition (a park, a room with an open window, etc.). I imagine we might even be able to make out the difference between an electric guitar amplifier and a recording, taking the specific sound of such speakers into consideration.
Technically, I assume you could analyse an experimental recording with a microphone array and look at the distribution of direction versus frequency: if it's complex, it's an instrument or a human or an animal (or a tree falling in a forest); if it's relatively simple (i.e., all higher frequencies coming from the same direction), it is likely coming from a loudspeaker.
Would make a fun research project :-)
  • asked a question related to Auditory Neuroscience
Question
1 answer
If you can't physically be in the environment you're learning about, would listening to its sound recordings aid your learning?
Relevant answer
Answer
This is 2016. If it is important that you "experience" the environment and you can't be there, at least be there virtually in both the auditory and visual domains. You can equip a manikin with microphones for binaural sound recording and with cameras for binocular video recording, and then play back or stream the result so that you can experience the environment for the duration of the recording from the reference point of the manikin. How that would add to your learning about the environment would depend on what you need to know.
  • asked a question related to Auditory Neuroscience
Question
9 answers
I have read many papers and consulted several books, but one piece of information cannot be found. Imagine a normal human ear is exposed to a 1 kHz sine tone at a normal audible loudness of, say, 40 dB. What kind of electrical signals does the cochlea send to the brain? If these are electric pulses all of the same shape, height and width, does their shape, height or width relate to the intensity of the sound waves, and how? If not, what does?
Relevant answer
Answer
Mario,
In the firing for shorter time periods, the rate varies stochastically, even if an auditory fibre is phase locked to a high degree (perfect phase locking is not observed, but vector strengths can be very high, meaning that the vast majority of intervals are locked to the phase of the stimulus). So it really is a question of what you mean by constant. The most exact spacing in time is only achieved during phase locking to high-level sounds at low frequencies – that is frequencies below the maximum firing rate of the fibre. Above that frequency, some of the cycles of the stimulus will be “missed”, i.e. the fibre will not be able to fire again that quickly. However, the spacing of the next firing will still be closely related to the phase (one, two, three cycles, etc). There is no minimum firing rate of auditory neurons in the cochlea – all of them have spontaneous activity that can be from below 1 spike/s up to about 100/s. During phase locking, such spontaneous firings become “locked” to the sound stimulus. The maximum firing rate of primary auditory neurons is somewhat higher than 300/second, pretty much irrespective of the best-response sound frequency. This maximum rate is not restricted to the auditory system but is true for all neurons of endothermic vertebrates. The longer the sound pulse, the more the rate will fall over time, often reaching a lower plateau rate after about 50 -100 ms (this is called adaptation). As I wrote before, it is likely that at low frequencies, the brain uses the spacing of the firings to derive the frequency of the sound (originally called the “volley theory”). Above frequencies where the spacing between firings no longer contains this information, i.e., above perhaps 1-2 kHz in humans (this is a guess), the information concerning the frequency of the sound is only conveyed by the place of origin of the fibres in the cochlea (tonotopic organization). 
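The degree of phase locking described above is conventionally quantified as vector strength (the length of the mean phase vector of the spike times, 1 for perfect locking, near 0 for firing unrelated to stimulus phase). A minimal sketch with synthetic spike trains, not real recordings:

```python
import math

def vector_strength(spike_times, freq):
    """Vector strength of spike times (s) relative to a tone of
    frequency `freq` (Hz): 1 = perfect phase locking, ~0 = none."""
    phases = [2 * math.pi * freq * t for t in spike_times]
    c = sum(math.cos(p) for p in phases) / len(phases)
    s = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(c, s)

freq = 500.0            # Hz, well within the phase-locking range
period = 1.0 / freq

# Locked fibre: fires on every 3rd cycle, always at the same phase
# (skipped cycles do not reduce vector strength).
locked = [k * 3 * period for k in range(100)]

# Unlocked fibre: same mean rate, but spikes spread evenly over the cycle.
unlocked = [k * 3 * period + (k % 10) * period / 10 for k in range(100)]

vs_locked = vector_strength(locked, freq)
vs_unlocked = vector_strength(unlocked, freq)
```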
What a firing neuron sounds like (the one in this video has a very variable rate and is not, I think, an auditory neuron) can be heard at:
I am not aware of a web site that offers a real sound recording of cochlear neurons firing, but some firing patterns are shown, for example in:
Geoff Manley
  • asked a question related to Auditory Neuroscience
Question
10 answers
There are a range of management options, which is most effective?
Relevant answer
Answer
Are there any fMRI studies on KKS?
  • asked a question related to Auditory Neuroscience
Question
2 answers
Hi, I am trying to generate dynamic moving ripples to probe spectro-temporal receptive fields in rodent primary auditory cortex, but I would like to confine the parameter space in terms of the range of ripple density and ripple velocity. Most of the literature I find is on cats and ferrets, so I am wondering if anyone has experience with rodents. The typical maximum ripple density in ferret papers is around 1.5 cycles per octave.
Thanks!
Ji
Relevant answer
Answer
For temporal rates: Christianson, Sahani and Linden J. Neurosci. 2011. For ripple density - tuning curves in rats and mice are typically wider than in ferrets, and I wouldn't expect the need to use denser ripples than in the ferret.
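For reference, the standard moving-ripple spectrotemporal envelope (sinusoidal in both octave position and time) can be sketched as below. The parameter values are illustrative placeholders, not recommendations, and the synthesis of the log-spaced tone carriers the envelope would modulate is omitted:

```python
import math

def ripple_envelope(t, x, density=1.0, velocity=4.0, depth=0.9):
    """Envelope of a moving ripple at time t (s) and position x
    (octaves above the lowest carrier).

    density  : ripple density, cycles/octave (<= ~1.5 per the ferret work)
    velocity : ripple velocity, Hz (temporal drift rate)
    depth    : modulation depth, 0..1
    """
    return 1.0 + depth * math.sin(2 * math.pi * (velocity * t + density * x))

# Sample the envelope on a coarse grid: 0.5 s in 10-ms steps,
# 5 octaves in quarter-octave steps.
grid = [[ripple_envelope(t / 100.0, x / 4.0) for x in range(21)]
        for t in range(50)]
```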
  • asked a question related to Auditory Neuroscience
Question
9 answers
We know about the several causes of Tullio's phenomenon. I would like to know the treatment strategy for it, along with any clinical and research experience.
Relevant answer
Answer
While the causes of the Tullio phenomenon are diverse, I believe the commonest we encounter here in sub-Saharan Africa is superior canal dehiscence. I agree with Eduardo Martin-Sanz that surgical options are only beneficial with a large dehiscence.
  • asked a question related to Auditory Neuroscience
Question
14 answers
What is the advantage of using dB nHL over dB peSPL? From the description of dB nHL, I understand that it is calculated by taking the difference between dB peSPL and the behavioural threshold at one repetition rate. If we calculate it at one rate, how has this value been generalized to other repetition rates (30.1/s, 90.1/s)? From psychoacoustics it is understood that the behavioural threshold is better at a higher rate (90.1/s) than at a lower rate. Are there any standards which specify which rate should be used, and why?
Relevant answer
Answer
This is an interesting question. I have been "involved" with this topic since the 1980s, thus my answer is somewhat long, and certainly reflects my personal opinion. However, any answer to this question is not without controversy.
As noted above, the primary reason for using "dBnHL" is to account for the differences between behavioural thresholds at each frequency, so that "0 dBnHL" represents the median or mean threshold for normal adults. This is the same concept as "dB HL", which uses (nearly) continuous tones. As also noted above, however, in contrast with behavioural HL, even with the "nHL" correction, one still has to apply "eHL" correction factors reflecting  differences between AEP (usually ABR) thresholds and behavioural thresholds; correction factors which are different for infants compared to adults.
Unfortunately, it is not so simple as dBHL, because "dBnHL" has had to be calculated for so many different stimuli (e.g., clicks for ABR, 5-cycle brief tones for ABR, chirp stimuli for ABR or ASSR, 40-60-ms tones for corticals). Complicating this issue further is that different presentation rates are used -- behavioural thresholds change with rate (even though AEP, especially ABR, thresholds do not show such changes with rate). Finally, and very importantly, the physical calibration of non-continuous tones is a little more complicated; most clinicians and many researchers, and, indeed, many audiometric calibration technicians, do not know how to calibrate these stimuli.
My colleagues and I have been pretty adamant that individual clinics should not typically determine their own "nHL" values, as this requires large numbers of normal subjects, careful threshold procedures, and quiet test rooms. Rather, assuming they are using stimuli essentially identical to others' studies which investigated nHLs (or a "standard" such as the ISO standard), they should have their stimuli carefully calibrated acoustically to the previously published nHL calibration standard.
In our previous studies, we determined nHL calibrations for all stimuli using a 10/s stimulus rate, regardless of the actual rate used to record the AEP in the study (we did this because the ABR and MLR do not show better thresholds with increasing rates, whereas behavioural thresholds do). The ISO standard mentioned above by Stig Arlinger states that a 20/s rate should be used when determining nHL calibrations -- I am not sure why they selected 20/s, but the difference in threshold would be very small (only 1-2 dB) (Stapells et al., JASA, 1982).
The above confusing situation is made worse by the fact that there is not yet agreement as to formal RETSPLs/RETFLs (i.e., "standards")  even for the most commonly used stimuli for ABR testing. As Stig Arlinger has noted above, there is an ISO standard for brief stimuli, specifically clicks and brief-tone stimuli. However, these "standards" are not fully accepted, especially for tonal stimuli (e.g., they are not commonly used in North America). They tended to ignore or discredit substantial preceding research into nHL thresholds and, in the case of tonal nHLs, they have some significant differences. (Oddly, the reference force levels for bone conduction in these "standards" were estimated from pure-tone standards, rather than directly determined.) 
The move towards  "standard" reference thresholds is important, but it must also take into account previous research and consider differences. My greatest concern is that the large majority of research into adult and, importantly,  infant tone-ABR thresholds has been carried-out using nHL values that are different from the ISO standard -- any possible move to a different standard must consider how this relates to the results (and interpretation) of the past and future studies.
Problems will still remain even if a standard becomes widely accepted worldwide. Every time a new or different stimulus comes out (e.g., chirp stimuli), no standard will exist, and some sort of nHL needed.
Lately, I have wondered if we should never have moved to developing "nHL". Rather, perhaps we should have just used well-established pure-tone "HL" calibrations, and then determined what are the normal ABR (or MLR or ASSR) offsets/corrections. (This would eliminate problems with calibration, but would still entail questions with determining the offsets/corrections.) This is essentially what most researchers carrying out threshold assessments using the ASSR or CAEP (cortical auditory evoked potential) have resorted to.
Given the current situation, although not all would agree with me, I recommend using the nHL calibrations from the study(ies) one is trying to emulate. For example, if you are testing infant ABR thresholds using the stimuli and parameters I have recommended, then you would use the nHL calibrations I have published. On the other hand, if you are using Michael Gorga's stimuli and parameters, then use his published calibrations. I expect, with time, there will be more movement towards the use of standard reference thresholds as we understand/explain current differences.
  • asked a question related to Auditory Neuroscience
Question
9 answers
I'm looking for recent theories & relevant evidence for both my MSc research project and a summative essay. Many thanks!
Relevant answer
Answer
Hello Jenn - You can search for keywords like "entrainment" or "groove".
You will find references to a large body of research in the following (very recent: November 2014) paper:
Burger, B., Thompson, M. R., Luck, G., Saarikallio, S. H., & Toiviainen, P. (2014). Hunting for the beat in the body: on period and phase locking in music-induced movement. Frontiers in Human Neuroscience, 8. doi:10.3389/fnhum.2014.00903
Best, Olivier
  • asked a question related to Auditory Neuroscience
Question
13 answers
By this I mean sensory deprivation causing re-adaptation of brain areas; injury to motor areas, where therapy aims at "re-mapping", is another example (see the work of V. S. Ramachandran).
I am wondering if there was a link between visual and hearing areas? I've seen good theoretical work, and evidence when considering just the one cortex, but is there a cross-over when looking at both in the same study?
Relevant answer
Answer
Please see the research of Rick van Dijk in PLoS ONE: "Haptic spatial configuration learning in deaf and hearing individuals" (April 2013), full text online; and, from the same author, "Superior touch: improved haptic orientation processing in deaf individuals", Experimental Brain Research, October 2013.
  • asked a question related to Auditory Neuroscience
Question
2 answers
I usually perform intracerebroventricular injections into the mouse ventricle at coordinates of -0.3 mm anteroposterior, ±1 mm mediolateral and -3 mm dorsoventral. Similarly, could you please explain which coordinates are best suited for retrieving CSF from the mouse brain?
Relevant answer
Answer
You don't need to use an invasive technique to extract CSF. It can be done easily from the cisterna magna at the base of the skull. Check out this video: https://www.youtube.com/watch?v=45hq3oatvy4
  • asked a question related to Auditory Neuroscience
Question
33 answers
I am looking for journal articles to cite on the POSITIVE EFFECTS of music on any of these broad areas: brain development, coordination, spatial IQ, cognitive IQ, overcoming learning disabilities, overcoming neurological delays, and increased chances of going to college. It is fine if the source is a recent or an old journal. Please provide links, thanks.
(When I looked in RG, there was one, but it's still at an accepted article stage.)
Relevant answer
Answer
Music has been present in all cultures since prehistoric times, but it is still not clear what the source of the gratification we feel when listening to it is. Two newly published studies now contribute to shedding light on the brain mechanisms involved in the enjoyment of music.
As reported in Science, Valorie N. Salimpoor and colleagues at the Montreal Neurological Institute of McGill University analyzed the neural processes of volunteers hearing songs for the first time. To give the experimenters a way of assessing the degree of pleasure evoked by the music, the subjects participated in a kind of auction in which they could make a monetary offer to hear a particular song again.
"By viewing the activity of a particular brain area, the nucleus accumbens, which is involved in reward, it was possible to reliably predict whether subjects would offer money to listen to a song," says Salimpoor. The involvement of the nucleus accumbens confirms recent indications that the emotional effect of music activates mechanisms of expectation and anticipation of a desirable stimulus, mediated by the neurotransmitter dopamine; for a song that is already familiar, this mechanism would be evoked by mental anticipation of the most enjoyable passages. In Salimpoor and colleagues' study, however, the music was unknown to the listeners, yet functional magnetic resonance imaging showed that the activated areas and the dopaminergic mediation were the same as for well-known songs. The cause, according to the researchers, is an "implicit knowledge" of music, obtained over the years by internalizing the structure of the music characteristic of a certain culture.
Moreover, the activity of the nucleus accumbens is not isolated: it also involves the auditory cortex, which stores information about sounds and music. During the test, the more rewarding the piece, the more intense was the cross-communication between the different brain regions. This result supports the idea that the ability to appreciate music rests not only on emotional aspects but also on assessments of a cognitive character.
Still on the subject of brain reactions to music, Vinod Menon and colleagues at the Stanford University School of Medicine, authors of an article published in the European Journal of Neuroscience, have shown that listening to classical music evokes a consistent pattern of activation of brain areas in spite of the differences between people.
The team recorded the activation of different brain regions in volunteers who listened to the music of William Boyce, an English composer of the eighteenth century, or to pieces of "pseudo-music", i.e., sequences of auditory stimuli obtained by algorithmically altering Boyce's pieces on a computer. The researchers identified a distributed network of brain structures whose activity levels followed a similar pattern in all subjects while listening to the music, but not to the pseudo-music.
"In our study we have shown for the first time that, despite individual differences, classical music evokes in different subjects a very consistent pattern of activity in various structures of the frontoparietal cortex, including those involved in the planning of movement, memory and attention," says Menon. These regions each participated, with their own activation rate, in tracking the development of what was being heard, each contributing in its own specific way to making sense of the overall structure of the music.
Particularly curious is the preferential activation of the centers of motor planning in response to the music but not the pseudo-music: according to the authors, it is a "neural correlate" of the spontaneous tendency to accompany listening to music with body movements, as in dance or simply clapping one's hands.
I read these two papers, which well summarize the music/brain interaction, with particular interest. They let me understand what is happening when I listen to a favourite piece of music. I do not know whether similar mechanisms are activated when I enjoy writing, playing and singing my songs; many of you know that this is one of my hobbies.
  • asked a question related to Auditory Neuroscience
Question
7 answers
Some patients have reported a shift in the pitch of auditory input while doing heavy physical activity. The shift only lasts a second or so. Could it be caused by increased pressure, decreased blood flow, or, more interestingly, by the brain?
Relevant answer
Answer
Change pitch perception by will? Yes! Try this: turn on a sinewave generator, then bite your teeth quite hard together. The loudness changes, and (likely due to that), the perceived pitch. You can also shake your head to create a nice pitch-vibrato (Doppler effect).
  • asked a question related to Auditory Neuroscience
Question
3 answers
I am planning an experiment on speech perception in hearing-aid patients (a new field for me) and want to know about general approaches and strategies in hearing aids.
Relevant answer
Answer
It is very difficult to get commercially available linear hearing aids. Phonak still offers a limited number, but the stock dates back many years. Their current product line, along with those of Siemens, Oticon, Widex, Starkey, Unitron and Bernafon, does not include a linear hearing aid. Some hearing aids (Starkey, Siemens and Phonak) can be programmed to be linear in gain, but the sound-cleaning tools will still be active.
Personally, I like
"Hearing Instrument Technology for the Hearing Health Care Professional" by Andi Vonlanthen and Horst Arndt as a very good overview of hearing aid technology and fitting strategies. However, if you are looking at the different fitting strategies, I also suggest you read about how they were derived. I have been less than impressed by the methodology used in some of them.
  • asked a question related to Auditory Neuroscience
Question
8 answers
I'm trying to understand how they work generally, no need for details:
- rate code = position in the cochlea / on the basilar membrane with highest sensitivity for a frequency (correct me if I got it wrong)
- temporal code / volley theory = unknown (neuron firing rate)?
- ensemble code = no idea
PS: What does phase locking mean in this context?
Relevant answer
Answer
The only thing I found is the influence of the radii ratio: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2299218/
It's really interesting, but as the human cochlea doesn't vary that much in radii ratio, this should be negligible.
  • asked a question related to Auditory Neuroscience
Question
2 answers
Is there a simple way to describe these as strategies?
What happens (birds / mammals) when hearing? What does each graph on the right side say?
Relevant answer
Answer
Daniel,
The bird system is simply a delay line. As the conduction speed of signals in nerves is low, much lower than the speed of electric signals in cables, a series of coincidence detectors as illustrated in the figure can code for interaural delay. A neuron in the delay line will only fire if a signal arrives at the same time from both ears, and by making the nerve fibre from one ear longer than the fibre from the other ear, the cell will fire at a preferred delay. Each coincidence-detecting neuron has its own tuning curve (the sinusoidal curves in b), and together they cover the entire range of interaural delays.
In mammals, all delay-sensitive neurons are tuned with maximum sensitivity to delays outside the maximal possible delay between the ears and the interaural delay is thus directly coded in the level of response (straight part of curve in the purple part of the figure in d).
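The bird-style delay line can be sketched as a toy coincidence-detector model. All spike times, delays, and the coincidence window below are made up for illustration; a real model would use conduction delays and membrane dynamics rather than exact spike-time matching:

```python
# Toy Jeffress-style delay line: each detector adds a different
# internal delay to the left-ear spike train and responds maximally
# when that delay cancels the interaural time difference (ITD).

def coincidences(left_spikes, right_spikes, internal_delay, window=0.05):
    """Count left spikes that, after the internal delay, fall within
    `window` ms of some right spike (a crude coincidence detector)."""
    delayed = [t + internal_delay for t in left_spikes]
    return sum(1 for a in delayed
               if any(abs(a - b) < window for b in right_spikes))

itd = 0.3                              # ms: right ear lags the left
left = [1.0, 3.0, 5.0, 7.0]            # spike times, ms
right = [t + itd for t in left]

# Bank of detectors tuned to internal delays 0.0 .. 0.5 ms
delays = [d / 10.0 for d in range(6)]
responses = {d: coincidences(left, right, d) for d in delays}
best_delay = max(responses, key=responses.get)   # the detector that "wins"
```

The winning detector's internal delay equals the ITD, which is how the place of maximal activity in the array codes sound direction.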
Coincidentally, the leading expert on this (and author of the paper) Benedict Grothe, happens to sit right across town from you!
Best regards
Jakob
  • asked a question related to Auditory Neuroscience
Question
10 answers
What is the best software to produce sounds from scratch with sequences/pulses of different frequencies and intensities, for a playback study?
Relevant answer
Answer
I think the search terms you are looking for are "Granular Synthesis". You should find a lot of software to produce sounds for bioacoustics using this technique.
  • asked a question related to Auditory Neuroscience
Question
9 answers
How does the instant availability of any kind of music have an impact on human productivity, social mentality, and understanding of one's self?...
Relevant answer
Answer
I've seen people listening to verbose music sing along during a work day and go about tasks mindlessly... this music may be good for work that is chore-like as opposed to detailed and mentally demanding.
I prefer classical music, which can be lyrical, melodic, emotional, and span a whole spectrum of depth. I find it helps with stress and changes the pace of the work I do.
I think listening to music gives people a sense of more control over their environments: we are no longer surrounded by the banter of others or the noises of our world. We are more disconnected from the reality of the proximal area and more connected with the dilution of introspection. Or we just use it as a beat to work to.
Much of today's music has lost its sense of creativity and now thrives on the momentary catchiness of a meme. Much of the more creative music also thrives on memes, but complements them with tempo and volume irregularities. So, which music catches on more may signify whether we are more impulsive or contemplative.
I am biased in my views, but I'd love to see studies looking into patterns of behavior and musical preference. Are the more rebellious listening to Rage Against the Machine, Bob Marley, etc., or do they listen to the Top 40, or classical? Is there a difference between those who lead, those who follow and those who organize? There is going to be a great deal of personal bias, but, as recently done, musical preference can be correlated with SAT scores (http://musicthatmakesyoudumb.virgil.gr/)