Questions related to Auditory Perception
When measuring the detection threshold, is any of these methods easier or faster? Is there a difference in the steepness of the resulting psychometric curve? In my case, I am looking at how different conditions influence detection accuracy, and I am flexible in the choice of method.
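On comparing steepness: a quick way to reason about slope is to write down a logistic psychometric function and look at how wide its transition region is. A minimal Python sketch, with all parameter values (guess rate, lapse rate, threshold, slopes) chosen purely for illustration:

```python
import math

def psychometric(x, x0, k, gamma=0.5, lam=0.02):
    # 2AFC logistic: guess rate gamma, lapse rate lam,
    # threshold x0 (midpoint), slope k (steepness)
    return gamma + (1 - gamma - lam) / (1 + math.exp(-k * (x - x0)))

# Two hypothetical conditions: same threshold, different slopes
levels = [i * 0.5 for i in range(0, 21)]           # stimulus levels 0..10
shallow = [psychometric(x, x0=5.0, k=0.8) for x in levels]
steep   = [psychometric(x, x0=5.0, k=3.0) for x in levels]

def dynamic_range(curve, lo=0.55, hi=0.90):
    # Width of the region where performance is between near-guessing
    # and near-perfect: a simple proxy for curve steepness
    xs = [x for x, p in zip(levels, curve) if lo < p < hi]
    return max(xs) - min(xs) if xs else 0.0

print(dynamic_range(shallow), dynamic_range(steep))
```

A steeper curve (larger k) has a narrower transition region, which generally means fewer trials are needed to pin down the threshold, so slope differences between conditions translate directly into differences in how fast and how reliably a threshold can be estimated.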
I have run a study on auditory perception with a sample of children with ASD. Given the high rates of comorbid diagnoses in individuals with ASD, I have some participants with multiple diagnoses (e.g., ADHD, anxiety, speech-language impairment). Is there a way I can control for these comorbid diagnoses so I do not have to exclude these participants? Is there a way to use a secondary diagnosis as a covariate? Could I run the analysis with and without the participants who have additional diagnoses? Thank you in advance for any feedback.
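Running the analysis with and without the comorbid participants is a standard sensitivity analysis, and at its simplest it just means computing the estimate twice and reporting both. A toy sketch with invented accuracy scores (all numbers hypothetical):

```python
from statistics import mean

# Hypothetical accuracy scores; True marks a comorbid (secondary) diagnosis
participants = [
    (0.82, False), (0.78, False), (0.91, False), (0.74, False),
    (0.69, True),  (0.88, True),  (0.71, True),
]

full_sample = mean(acc for acc, _ in participants)
asd_only    = mean(acc for acc, comorbid in participants if not comorbid)

# If the two estimates agree closely, the comorbid cases are unlikely to be
# driving the result; report both rather than excluding anyone outright.
print(round(full_sample, 3), round(asd_only, 3))
```

With a larger sample, the same idea generalizes to entering the secondary diagnosis as a binary covariate in a regression (e.g., ANCOVA), which keeps those participants in the analysis while adjusting the group estimate; the sensitivity analysis and the covariate model are often reported together.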
I would like to know if there are studies that investigated how long primary-school children are able to concentrate on a listening task. Are there official recommendations for a maximum task length?
It is certainly a long shot but...
For a PhD project on speech perception, we are looking for native Dutch listeners to participate in a short online auditory perception test (10 min). So far, we have found only 30 listeners. Does anybody know of a Dutch network we could contact to try to increase our listener sample, or a Dutch mailing list to which we could forward our test?
Many thanks in advance!
I am searching for studies that investigate speaker normalization in children. For example, I wonder whether children around the age of six can already normalize acoustic differences between speakers as well as adults do. Any suggestions for literature on this topic?
Looking forward to reading your suggestions.
This is mainly addressed to biologists, especially those who study sensory perception in animals and/or human beings. People from other areas, such as medicine or engineering, who have turned their attention to neuroscience are also welcome to contribute. I want to know how our sensory systems extract pertinent features, for example the frequency-selective transduction by hair cells in the cochlea and the orientation-selective neurons in the brain. What other such processing is known that performs dedicated feature extraction from the sensory inputs?
I am beginning an experiment assessing timing-related behavior in adults with ADHD. The perceptual measures I plan to use are adaptive and determine perceptual thresholds using standard adaptive procedures (e.g., the staircase method). However, I'm concerned about the inevitable impact of attentional lapses on thresholds. I am interested in suggestions for how best to tune the staircase parameters, and/or suggestions for other adaptive algorithms that may be more resilient to lapses of attention. Any thoughts?
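For intuition about lapse resilience, one common tactic is to estimate the threshold from the median (rather than the mean) of the later reversals of a 2-down-1-up staircase, so that a few lapse-driven excursions carry little weight. A simulation sketch, with the observer model, lapse rate, and step size all invented for illustration:

```python
import random

random.seed(1)

def simulate_observer(level, true_threshold=10.0, lapse=0.05):
    # Hypothetical observer: detects whenever the level is at or above
    # threshold, except on attentional lapses, when the response is a guess
    if random.random() < lapse:
        return random.random() < 0.5
    return level >= true_threshold

def staircase_2down1up(start=20.0, step=1.0, n_trials=200):
    level, down_count, last_dir, reversals = start, 0, None, []
    for _ in range(n_trials):
        if simulate_observer(level):
            down_count += 1
            if down_count < 2:
                continue                   # need 2 correct in a row to go down
            direction, down_count = -1, 0  # 2-down-1-up tracks ~70.7% correct
        else:
            direction, down_count = +1, 0
        if last_dir is not None and direction != last_dir:
            reversals.append(level)
        last_dir = direction
        level = max(0.0, level + direction * step)
    late = sorted(reversals[4:])           # discard the early reversals
    return late[len(late) // 2]            # median is robust to lapse outliers

estimate = staircase_2down1up()
print(round(estimate, 1))
```

Other options worth considering: interleaving several shorter staircases with breaks between them, capping how far a single run can drift, or Bayesian adaptive procedures that include an explicit lapse-rate parameter in the underlying psychometric model.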
A patient suffers from tinnitus. She reports a causal relationship between her tinnitus and certain specific movements of the ipsilateral eye!
I know that the eardrum, like some parts of the eye, contains type II collagen, but it is difficult to guess an explanation connecting these two observations...
Do you have any idea about this?
Is there a "sound density limit" beyond which sound energy fails to be recorded and/or played back?
Example: one of the largest known choirs consisted of 121,440 people. If I wanted to record such an event (or layer that many overdubs, or even more), would there be a density limit I would reach, and if so, how can it be calculated?
What about natural events? Imagine hailstorms, for example. What would happen if I recorded many of them, created a sound file layering dozens or even hundreds of those recordings, and played it back? Would I be reaching any playback (or hearing) limits? Would such density create some sort of coloured noise?
All your ideas, suggestions and explorations will be very welcome
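On the choir arithmetic: for n roughly equal, uncorrelated sources, the combined level rises by 10*log10(n) dB, because acoustic powers add, not decibels. A small sketch (the 70 dB per-singer figure is an assumed, illustrative value):

```python
import math

def combined_level_db(level_db, n):
    # n equal, uncorrelated sources: powers add, so the level rises
    # by 10*log10(n) dB rather than being multiplied by n
    return level_db + 10 * math.log10(n)

one_singer = 70.0                        # assumed dB SPL for a single voice
choir = combined_level_db(one_singer, 121_440)
print(round(choir, 1))
```

Even ~121,000 voices add only about 51 dB to a single voice, which stays below the ~194 dB SPL ceiling at which sound in air stops behaving as ordinary sound. In practice, the first limit you hit when layering recordings is clipping of the digital full-scale range, and that is avoided simply by scaling the tracks down before summing.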
In perceptual listening tests, subjects have to listen to sound examples and rate their sound quality or other characteristics.
As these tests can be quite long, a serious and practically relevant question is whether participants change their rating behaviour over time, perhaps because the prolonged concentration involved in listening and rating leads to fatigue, or because subjects adapt to the stimuli in some way.
Do you know of any studies or publications that treat this question, whether subjects rate stimuli differently depending on whether they're presented at the beginning of the test or at the end?
I'm setting up auditory fear conditioning, and I wonder how I can measure the decibel level of the tone used as the conditioned stimulus. I want a 75-dB tone and have a sound level meter.
I am not sure where to place the meter in the conditioning context to calibrate the tone to 75 dB: near the speaker, on the floor, or in the middle? The speaker is on the right wall of a square-shaped context. If I want to run fear extinction in a different, octagon-shaped context, I need to recalibrate the tone for the new context, right? In that case, where do I place the meter?
Thanks for reading and I'll be waiting for your tips.
I need to know how many layers of neurons are involved in the propagation of sound information from ear to brain (i.e., auditory pathway).
I am also interested in whether there is any biological evidence about the kind of connectivity. For example, would a feedforward network represent the auditory pathway well? How many neurons per layer are typically considered? Are feedback loops present? (If so, what kind of feedback: inhibitory or excitatory?)
Thanks in advance for your attention.
I have read many papers and consulted several books, but one piece of information cannot be found. Imagine a normal human ear is exposed to a 1 kHz sine tone at a normally audible loudness of, say, 40 dB. What kind of electrical signals does the cochlea send to the brain? If these are electrical pulses all of the same shape, height and width, do their shape, height or width relate to the intensity of the sound waves, and how? If not, what does?
European languages have an extensive vocabulary for visual imagery and metaphor. Do any other languages or cultural systems place more emphasis on auditory perception?
"The European vocabulary of intellectual inquiry has not developed a strong vocabulary for the study of soundscapes and auditory culture. In a literature full of perspectives, overviews, outlooks, viewpoints, standpoints, aspects, prospects, panoramas, maps and microscopic examinations, where are the clicks, crunches, reverberations and resonances? What languages, what philosophical traditions need to be searched for appropriate terms and concepts so that the vocabulary of acoustemology – knowledge of the world through sound -- can grow? What cultural traditions can speak to us about hearing the world?" (M.J. Epstein)
Hello everyone! I would like to hear any suggestions you might have regarding the equipment I could use for Blind Speaker Separation experiments in real rooms (both real-time and offline). Have any of you set up such experiments? What kind of microphones, acquisition sound card, etc. have you used?
Thanks in advance!
Dimensional personality (and mood) models were used in music emotion research by Jonna Vuoskoski et al. (http://users.ox.ac.uk/~musf0093/publications.html). Is anyone extending this work to soundscapes? Soundscape research has used, e.g., the Weinstein Noise Sensitivity Scale, but this describes a specific trait rather than an individual's personality as a whole. There are several models for assessing soundscape quality.
Some patients have reported a shift in the pitch of auditory input while doing heavy physical activity. The shift only lasts for a second or so. Could it be caused by increased pressure, decreased blood flow, or, more interestingly, by the brain?
Mystical and ecstatic experiences (both secular and religious) have often been described in terms that suggest the presence of unusual auditory or visual stimuli. Could these experiences be considered hallucinations?
Many audiophiles have told me that germanium "sounds" better than silicon, and that this is why they prefer vintage amplifiers over more modern devices that use silicon.
Yes, I know that germanium was the first substrate available for BJTs, and that silicon was introduced only years later.
Nonetheless, purists still claim that germanium is better. Indeed, there is a niche market for vintage BJTs, similar to that for vacuum tubes.
From an electronic point of view there are surely differences, yet I don't see how they could influence the quality of the sound.
So I wonder: is this preference for germanium based on an urban legend, or is there really something scientific behind it, something I could measure with adequate instrumentation?
For example, in spaces with different temperatures:
Sound speed at 10 degrees Celsius and 50% relative humidity = 337 m/s.
Sound speed at 40 degrees Celsius and 50% relative humidity = 356 m/s.
If we have a calibrated object vibrating at 1000 cycles per second in the cool room and in the hot room, the wavelengths are 33.7 cm and 35.6 cm respectively. If the speed of sound were a constant 343 m/s, these wavelengths would equate to frequencies of 1017.8 and 963.5 Hz, definitely a perceivable difference. However, since the temperature differs, these different wavelengths both equate to 1000 Hz at the ear. If we perceive pitch from frequency, then these conditions will be heard as the same, but if we perceive pitch from wavelength, they will be heard differently. It seems that perception of wavelength would necessarily be binaural, since at one ear, coding is only frequency dependent.
Can we hear the difference between the same tone in a cold space and a hot space?
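The arithmetic in the example can be checked directly; the speed-of-sound figures are taken from the question as given:

```python
# Speed-of-sound figures taken from the question as given
c_cool, c_hot, f = 337.0, 356.0, 1000.0   # m/s, m/s, Hz

wl_cool = c_cool / f      # 0.337 m, i.e. 33.7 cm
wl_hot  = c_hot / f       # 0.356 m, i.e. 35.6 cm

# If the ear decoded wavelength while assuming a fixed 343 m/s speed,
# the implied frequencies would differ audibly between the two rooms:
c_ref = 343.0
implied_cool = c_ref / wl_cool
implied_hot  = c_ref / wl_hot
print(round(implied_cool, 1), round(implied_hot, 1))
```

A fixed-speed wavelength decoder would thus mis-estimate the cool-room tone by about 18 Hz and the hot-room tone by about 37 Hz, both comfortably above typical frequency-discrimination thresholds at 1 kHz.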
In perceived simultaneity, is there any convention for, or benefit to, the sign of positive values (of PSS or SOA) for audiovisual stimulus pairs? Should positive values indicate sounds presented before flashes, or vice versa? What about audiotactile and visuotactile stimuli?