Article

Impact of infrasound on the human cochlea


Abstract

Low-frequency tones were reported to modulate the amplitude of distortion product otoacoustic emissions (DPOAEs), indicating periodic changes of the operating point of the cochlear amplifier. The present study investigates potential differences between infrasound and low-frequency sounds in their ability to modulate human DPOAEs. DPOAEs were recorded in 12 normally hearing subjects in the presence of a biasing tone with f(B) = 6 Hz and a level L(B) = 130 dB SPL. Primary frequencies were fixed at f(1) = 1.6 and f(2) = 2.0 kHz with fixed levels L(1) = 51 and L(2) = 30 dB SPL. A new measure, the modulation index (MI), was devised to characterise the degree of DPOAE modulation. In subsequent measurements with biasing tones of f(B) = 12, 24 and 50 Hz, L(B) was adjusted to maintain the MI as obtained individually at 6 Hz. Modulation patterns lagged with increasing f(B). The necessary L(B) decreased by 12 dB/octave with increasing f(B) and ran almost parallel to the published infrasound detection threshold. No signs of an abrupt change in transmission into the cochlea were found between infra- and low-frequency sounds. The results show clearly that infrasound enters the inner ear and can alter cochlear processing.
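The abstract's key quantitative finding — that the bias level needed to hold the modulation index constant falls by about 12 dB per octave from the 6 Hz, 130 dB SPL anchor — can be sketched numerically. This is an illustration of the reported trend only; the function name and its parameters are assumptions for the sketch, not the authors' fitting procedure.

```python
import math

def required_bias_level(f_b, ref_freq=6.0, ref_level=130.0, slope_db_per_oct=-12.0):
    """Bias-tone level (dB SPL) needed to keep the modulation index roughly
    constant, per the reported ~12 dB/octave decrease with bias frequency.
    Anchored at the 6 Hz, 130 dB SPL condition from the abstract."""
    octaves = math.log2(f_b / ref_freq)
    return ref_level + slope_db_per_oct * octaves

for f in (6, 12, 24, 50):  # the bias frequencies used in the study
    print(f"{f:>2} Hz -> {required_bias_level(f):.1f} dB SPL")
```

The near-parallel course of this line and the published infrasound detection threshold is what argues against any abrupt change in transmission between infrasonic and low-frequency sound.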


... The most prominent and easily measurable DPOAE in humans and other animals is the cubic difference distortion product, 2f1−f2, typically produced by primary tone ratios (f2/f1) between 1.2 and 1.3.18 Hensel et al. (2007) used primaries of f1 = 1.6 and f2 = 2.0 kHz (f2/f1 = 1.25) at L1 = 51 and L2 = 30 dB SPL for their DPOAE recordings.8 With the primaries within the normal human audio frequency range, the returning DPOAE represents a typical operating point of the cochlear amplifier. ...
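The primary-tone arithmetic in this excerpt is easy to verify: with f1 = 1.6 kHz and f2 = 2.0 kHz, the ratio is 1.25 and the cubic difference product 2f1−f2 falls at 1.2 kHz. A minimal check (the `quadratic` line anticipates the f2−f1 component discussed in later excerpts):

```python
f1, f2 = 1.6, 2.0          # kHz, primaries used by Hensel et al. (2007)

ratio = f2 / f1            # 1.25, within the typical 1.2-1.3 range
cubic = 2 * f1 - f2        # cubic difference tone, 2f1 - f2 = 1.2 kHz
quadratic = f2 - f1        # quadratic difference tone, f2 - f1 = 0.4 kHz

print(ratio, cubic, quadratic)
```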
... 5 Auditory cortical responses and cochlear modulations due to infrasound exposure have also been observed, despite the subjects' lack of tonal perception. 8,9 These studies provide strong evidence for infrasound impact on human peripheral and central auditory responses. ...
... It was also reported that some subjects perceived a "weak but clearly audible sound sensation, described as humming" but not a "tonal audible stimulus".7,8,19 The absence of a clear pure-tone percept suggests that infrasonic frequencies do not adequately stimulate the IHCs and hence may not be the sources of the humming. Rather, the source of this percept is likely to be the harmonics of the biasing tone. ...
... By definition, infrasound refers to acoustical events that are inaudible to humans and that occur below 16-20 Hz. Despite this definition, humans can perceive sound at infrasonic ranges if the wave amplitude is sufficiently elevated (33-35). Hensel et al. (36) demonstrated that infrasound at 6 Hz and 130 dB penetrates the inner ear and modulates information within the cochlea. ...
... Despite this definition, human beings can perceive sounds in the infrasonic range if the wave amplitude is sufficiently elevated [33][34][35]. Hensel et al. [36] demonstrated that infrasound at 6 Hz and 130 dB penetrates the inner ear and modulates information within the cochlea. Single cycles of acoustic pressure waves become distinguishable below 10 Hz [34]. ...
... The explanation for this fact is that the subjects had different reactions to infrasound and different relationships between pressure and heart rate; however, there were no significant differences between the two groups. Leventhall (2007) [46] [59] concluded that infrasound reaches the inner ear and affects information processing in the cochlea and that the lower the frequency is, the more distortion-induced otoacoustic emissions are modulated. Salt and Hullar (2010) [14] asked whether a sound that is not audible to our hearing system could affect us negatively when it reaches certain decibel levels that are considered harmful at audible frequencies. ...
... Perhaps, in certain environments, or depending on the person (as there is a population that is hypersensitive to these noises), psychological discomfort is more or less subjective, [58], [46], [89], [90]. However, as observed and researched by different authors, when a certain intensity level is exceeded, damage occurs in the organism, mainly in the inner ear, which has been shown to be the most sensitive organ to infrasound frequencies [59]. ...
Article
The latest technological innovations have considerably increased the field of application for infrasound, and the possible risks that infrasound may present to those exposed to it must be taken into account. The main task of this work is to organize and summarize recent studies on the most common artificial emitting sources and the effects that these non-audible frequencies have on health when absorbed by the body, as well as to present the existing regulations, a discussion, and a series of conclusions that clarify aspects of infrasound. The authors intend for this review to be used to determine future lines of research, to encourage architects to take the installation spaces within a building very seriously, and to promote competent administration that considers a minimum distance between the road and planned habitable buildings.
... Previous studies have attempted to use the amplitude of the DPOAEs as indicators of cochlear function (Rebillard and Rebillard, 1992; Kossowski et al., 2001; Kujawa and Liberman, 2001; Garner et al., 2008; Olzowy et al., 2008), reporting complex changes in the DPOAEs that were difficult to interpret, particularly as changes in the OP were not taken into consideration. Furthermore, other studies have taken similar approaches to investigate hearing function in humans (Mrowinski et al., 1996; Hensel et al., 2007; Bian and Scherrer, 2007) and animals (Frank and Kössl, 1996; Lukashkin and Russell, 2002; Bian, 2004; Sirjani et al., 2004; Salt et al., 2005) and have reported similar relationships between pressure in the ear canal, the OP, and the amplitude of the DPOAEs as those reported here. However, this study is the first to directly relate the OP obtained from CM recordings to the OP derived solely from simultaneously recorded DPOAEs. ...
... Presumably, higher-frequency bias tones would have produced a relatively larger displacement of the OC and greater modulation of the OP, which would explain the more sensitive modulation of the DPOAEs reported by Bian and Scherrer (2007), who used 25–100 Hz bias tones. We have recent evidence suggesting that the sensitivity differences between the high-frequency and bias tones are abnormal with auditory disorders such as endolymphatic hydrops (Marquardt et al., 2007; Hensel et al., 2007). If the helicotrema were partially or fully blocked off due to distension of Reissner's membrane with endolymphatic hydrops, we might expect to see a larger sensitivity to the low-frequency bias tone due to the reduced effectiveness of the helicotrema as a low-frequency shunt. ...
Article
Distortion products in the cochlear microphonic (CM) and in the ear canal in the form of distortion product otoacoustic emissions (DPOAEs) are generated by nonlinear transduction in the cochlea and are related to the resting position of the organ of Corti (OC). A 4.8 Hz acoustic bias tone was used to displace the OC, while the relative amplitude and phase of distortion products evoked by a single tone [most often 500 Hz, 90 dB SPL (sound pressure level)] or two simultaneously presented tones (most often 4 kHz and 4.8 kHz, 80 dB SPL) were monitored. Electrical responses recorded from the round window, scala tympani and scala media of the basal turn, and acoustic emissions in the ear canal were simultaneously measured and compared during the bias. Bias-induced changes in the distortion products were similar to those predicted from computer models of a saturating transducer with a first-order Boltzmann distribution. Our results suggest that biased DPOAEs can be used to non-invasively estimate the OC displacement, producing a measurement equivalent to the transducer operating point obtained via Boltzmann analysis of the basal turn CM. Low-frequency biased DPOAEs might provide a diagnostic tool to objectively diagnose abnormal displacements of the OC, as might occur with endolymphatic hydrops.
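The saturating transducer with a first-order Boltzmann characteristic described in this abstract can be sketched numerically: a static bias shifts the operating point away from the symmetric region of the curve, which makes the even-order (f2−f1) distortion grow. The function shape, parameter values and primaries below are illustrative assumptions, not the authors' model parameters.

```python
import numpy as np

def boltzmann(x, x0=0.0, s=1.0):
    """First-order Boltzmann (sigmoidal) transducer characteristic."""
    return 1.0 / (1.0 + np.exp(-(x - x0) / s))

fs = 48000                          # 1 s of signal at 48 kHz -> 1 Hz FFT bins
t = np.arange(fs) / fs
f1, f2 = 4000.0, 4800.0             # primaries as in the abstract (4 and 4.8 kHz)
stim = 0.5 * (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t))

def dp_amplitude(bias):
    """Spectral magnitude of the quadratic difference tone (f2 - f1 = 800 Hz)
    at the transducer output, for a static bias displacing the operating point."""
    y = boltzmann(stim + bias)
    spec = np.abs(np.fft.rfft(y)) / len(y)
    return spec[int(f2 - f1)]       # 1 Hz bin spacing -> index 800 is 800 Hz

# At bias = 0 the operating point sits at the symmetric midpoint of the
# Boltzmann curve, so the even-order product nearly vanishes; displacing
# the operating point makes it grow, as in the biased-DPOAE measurements.
print(dp_amplitude(0.0), dp_amplitude(1.0))
```

This is the sense in which bias-induced changes of the distortion products track the resting position of the organ of Corti.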
... Physiological data support the notion that IS and sounds in the typical audio frequency range share similar perceptual mechanisms. For instance, IS-induced changes of distortion product otoacoustic emissions (DPOAE) have confirmed that IS enters the inner ear and may modulate cochlear function [7,8]. Beyond that, two functional magnetic resonance imaging (fMRI) studies have found increased activation in bilateral auditory cortex (AC) in response to 12 Hz tones (at high sound pressure levels of 110 dB and above), revealing that similarities between IS and "normal sound" persist up to early cortical processing [9,10]. ...
... The physiological mechanisms of IS processing in the cochlea are not yet fully understood. There is however strong evidence that IS at high levels enters the inner ear and leads to a modulation of the spontaneous firing rate of auditory nerve cells [33] as well as periodic changes with respect to the operating point of the cochlear amplifier [8]. Ultimately, this can alter the processing and perception of sound at higher frequencies and under certain conditions mimic an amplitude modulation of other audible sounds [34]. ...
Article
Low frequency noise (LFS) and infrasound (IS) are controversially discussed as potential causes of annoyance and distress experienced by many people. However, the perception mechanisms for IS in the human auditory system are not completely understood yet. In the present study, sinusoids at 32 Hz (at the lower limit of melodic pitch for tonal stimulation), as well as 8 Hz (IS range) were presented to a group of 20 normal hearing subjects, using monaural stimulation via a loudspeaker sound source coupled to the ear canal by a long silicone rubber tube. Each participant attended two experimental sessions. In the first session, participants performed a categorical loudness scaling procedure as well as an unpleasantness rating task in a sound booth. In the second session, the loudness scaling procedure was repeated while brain activation was measured using functional magnetic resonance imaging (fMRI). Subsequently, activation data were collected for the respective stimuli presented at fixed levels adjusted to the individual loudness judgments. Silent trials were included as a baseline condition. Our results indicate that the brain regions involved in processing LFS and IS are similar to those for sounds in the typical audio frequency range, i.e., mainly primary and secondary auditory cortex (AC). In spite of large variation across listeners with respect to judgments of loudness and unpleasantness, neural correlates of these interindividual differences could not yet be identified. Still, for individual listeners, fMRI activation in the AC was more closely related to individual perception than to the physical stimulus level.
... To date, few publications dealing with low-frequency modulated CDPOAEs in humans exist (Bian and Scherrer, 2007; Brown and Gibson, 2011; Hensel et al., 2007; Hirschfelder et al., 2005; Marquardt et al., 2007; Rotter et al., 2008; Scholz et al., 1999) and, to the best of our knowledge, no reports on low-frequency modulated QDPOAEs in humans have been published. ...
... The aim of this study was to compare, for the first time, the properties of low-frequency modulated CDPOAEs and QDPOAEs in humans. There are few reports on low-frequency modulated CDPOAEs in humans (Bian and Scherrer, 2007; Hensel et al., 2007; Hirschfelder et al., 2005; Marquardt et al., 2007; Rotter et al., 2008; Scholz et al., 1999) and none on low-frequency modulated QDPOAEs, which is probably related to the fact that QDPOAEs in humans are difficult to record and relatively large DPOAE amplitudes are needed to obtain stable modulation patterns. The optimal parameters for evoking QDPOAEs are therefore a major concern. ...
Article
Previous studies have used low-frequency tones to modulate distortion product otoacoustic emissions (DPOAEs). The cubic DPOAE (CDPOAE) is mostly chosen because amplitudes sufficient for modulation can be evoked with moderate sound pressure levels. Quadratic DPOAEs (QDPOAEs), however, are more sensitive to minute changes of the cochlear operating point (OP) and are therefore better suited to assess such changes. Here, we compare the properties of low-frequency (30 Hz, 80-120 dB SPL) modulated CDPOAEs and QDPOAEs evoked with f(2) = 2 and 5 kHz in human subjects with normal hearing. The modulation depth was quantified with the modulation index (MI), a measure which considers both amplitude and phase. Modulated CDPOAEs evoked with f(2) = 2 kHz have amplitude maxima at the zero crossings and amplitude minima at the extremes of the biasing tone (BT) which correlate positively with the BT level. CDPOAEs evoked with f(2) = 5 kHz were recorded during biasing in exactly the same way as described before. At the highest BT levels used (120 dB SPL), very little modulation could be detected. Not only the depth, but also the shape of the QDPOAE modulation pattern is correlated with the BT level. At moderate BT levels (about 90-100 dB SPL), QDPOAEs evoked with f(2) = 5 kHz show one amplitude notch around the zero crossing of the positive-going flank of the BT (a single modulation pattern). At and above a BT level of about 105 dB SPL, the pattern reverses and shows a double modulation pattern. At the highest BT level used (120 dB SPL), quadratic MIs exceed cubic MIs (2.0 ± 0.5 and 0.97 ± 0.06, respectively). Patterns of low-frequency modulated QDPOAEs in humans are similar to the modulation seen in animal studies and as predicted by mathematical models. Human low-frequency modulated QDPOAEs are ideally suited to estimate cochlear OP shifts because of their high sensitivity to the OP shift.
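This abstract describes the modulation index (MI) only as a measure that "considers both amplitude and phase"; the exact definition is not reproduced here. Purely as an illustration of such a measure, the sketch below treats the DPOAE in each bias-phase bin as a complex phasor and takes the largest normalized deviation from the unbiased phasor. The formula is an assumption, not the published definition (note that a deviation-based MI can exceed 1, consistent with the quadratic MI of 2.0 reported above).

```python
import cmath

def modulation_index(biased_phasors, unbiased):
    """Hypothetical MI: largest deviation of the complex DPOAE phasor across
    the bias cycle, normalized by the unbiased phasor magnitude. Sensitive to
    both amplitude and phase changes. NOT the published definition -- an
    illustrative stand-in only."""
    return max(abs(p - unbiased) for p in biased_phasors) / abs(unbiased)

unbiased = cmath.rect(1.0, 0.0)
# Pure amplitude modulation (phase fixed) vs pure phase modulation:
amp_mod = [cmath.rect(a, 0.0) for a in (0.5, 1.0, 1.5)]
phase_mod = [cmath.rect(1.0, phi) for phi in (-0.5, 0.0, 0.5)]
print(modulation_index(amp_mod, unbiased), modulation_index(phase_mod, unbiased))
```

Either kind of change alone drives this measure away from zero, which is the point of combining amplitude and phase in one index.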
...  Health consequences of using headphones and earphones: There is little documented research in the literature on the implications of excessive headphone use by the young and young adults, and only a few studies on the adverse effects of using headphones and music players have been published. Moreover, the general public is not appropriately informed about the unfavourable short-term or long-term complications of headset use (Hensel et al., 2007). With regard to the adverse consequences of earphone use, it is necessary to address the issue of whether or not such use is safe. ...
... But those with headphones can shield their ears by using noise-cancelling headphones, avoiding the need to turn up the volume in loud spaces. In addition, it may also be useful to restrict the duration of listening every day or week and to take pauses during continuous use (Hensel et al., 2007). Teens and young adults should restrict the use of mobile media players to a limited period, or stop using earphones, in order to protect their ears. ...
Article
Apart from the frequency, the length of the sound exposure is a major factor in injury to the ear. Simply put, louder sounds cause harm in much less time. Employers must provide hearing protection for workers with an average exposure of 85 dB over 8 hours, as mandated by the Occupational Safety and Health Administration (OSHA). Although this seems like a long while, headphones can do harm in less than an hour at only marginally greater sound levels, and listening with headphones for one or more hours is easy to imagine.
... On the other hand, the notion that the auditory system cannot process infrasound has been challenged in several studies, which have shown that infrasound causes changes in the function of the hearing organ both in laboratory animals (Marquardt et al., 2007) and in normal-hearing humans (Hensel et al., 2007). Indeed, it has been repeatedly shown that humans can perceive infrasound if the sound pressure levels are sufficiently high (Robinson & Dadson, 1956; Corso, 1958; Landström et al., 1988; Moller & Pedersen, 2004; Schust, 2004). ...
... Hensel et al. sought to determine possible differences between the effects of infrasound and low-frequency sounds on the function of the human cochlea (Hensel et al., 2007). In the study, 12 subjects were exposed to tones of 6, 12, 24 and 50 Hz, and distortion product otoacoustic emission (DPOAE) responses were measured. ...
Technical Report
Wind turbines produce broadband sound that also includes low frequencies. Sounds below 20 Hz are conventionally referred to as infrasound. Infrasound occurs together with audible sound in both natural and built environments. Infrasound is not generally audible at levels typically occurring in the environment. The most usual effects of audible noise are annoyance and sleep disturbance. Audible sound from wind turbines is associated with annoyance, but evidence of its link to sleep disturbance is less prominent. There appears to be a difference in the prevalence of annoyance between wind power areas. In addition to sound pressure level, other factors are associated with annoyance as well. There is no scientific evidence of the effects of audible sound from wind turbines on the emergence of illnesses. Some people who reside close to wind turbines have symptoms that they associate with infrasound from wind turbines. Infrasound levels within the vicinity of wind turbines are on the same level as or lower than in city centres. There is no scientific evidence that the infrasound levels present in these kinds of environments could cause negative health effects. Furthermore, in the population studies undertaken so far, symptoms have not been observed to be more prevalent close to wind turbines. However, the number of studies is relatively limited. On the other hand, strong, audible infrasound has been reported to have an effect on, for example, wakefulness. Various mechanisms have been presented through which low infrasound levels have been thought to potentially affect health within the vicinity of wind turbines. Similar levels also appear elsewhere in built environments. It has been indicated that infrasound can cause the appearance of symptoms connected with vestibular disorders in sensitive groups of people (anomalies in the structure of the ear, hearing-related and vestibular diseases).
On the other hand, in one experimental study it has been reported that infrasound also activates brain areas other than those responsible for hearing. Scientific studies on the effects of exposure to infrasound and to audible noise from wind turbines are rather limited; thus, additional studies are justified.
... Given an f(SOAE) ranging from 20 to 100 Hz, the delay constants vary from 3.5 to 0.8 ms from low to high biasing frequencies. These time constants are in close agreement with the observed values in Figs. 10 and 11, those for amplitudes of SOAEs (Bian and Watts, 2008), and DPOAEs (Bian et al., 2004; Bian and Scherrer, 2007; Hensel et al., 2007). The short-time delays are comparable with the onset delays of SOAE suppression (Schloth and Zwicker, 1983; Murphy et al., 1995) and the auditory nerve responses (van der Heijden and Joris, 2005). ...
Article
It was previously reported that low-frequency biasing of cochlear structures can suppress and modulate the amplitudes of spontaneous otoacoustic emissions (SOAEs) in humans [Bian, L. and Watts, K. L. (2008). "Effects of low-frequency biasing on spontaneous otoacoustic emissions: Amplitude modulation," J. Acoust. Soc. Am. 123, 887-898]. In addition to amplitude modulation, the bias tone produced an upward shift of the SOAE frequency and a frequency modulation. These frequency effects usually occurred prior to significant modifications of SOAE amplitudes and were dependent on the relative strength of the bias tone and a particular SOAE. The overall SOAE frequency shifts were usually less than 2%. A quasistatic modulation pattern showed that biasing in either positive or negative pressure direction increased SOAE frequency. The instantaneous SOAE frequency revealed a "W-shaped" modulation pattern within one biasing cycle. The SOAE frequency was maximal at the biasing extremes and minimized at the zero crossings of the bias tone. The temporal modulation of SOAE frequency occurred with a short delay. These static and dynamic effects indicate that modifications of the mechanical properties of the cochlear transducer could underlie the frequency shift and modulation. These biasing effects are consistent with the suppression and modulation of SOAE amplitude due to shifting of the cochlear transducer operating point.
... For example, it has been shown repeatedly that given a high enough sound pressure level, IS can very well be perceived (Robinson and Dadson, 1956; Corso, 1958; Whittle et al., 1972; Yeowart and Evans, 1974; Landstroem et al., 1983; Verzini et al., 1999; Schust, 2004; Møller and Pedersen, 2004). In addition, 2 studies revealed IS-induced changes of the distortion product otoacoustic emissions (DPOAEs) in animals (Marquardt et al., 2007), as well as in normally hearing human participants (Hensel et al., 2007). Since the DPOAE response is generally used as an objective indicator to examine cochlear amplification mediated by the outer hair cells, these findings clearly speak against the traditional view that IS has no influence on inner ear function. ...
... Using cochlear monitoring, Hensel et al. (2007) showed that cochlear processing was altered after exposure to infrasound of 6 Hz at a sound pressure level of 130 dB. Based on this and other research, Salt and Hullar (2010) suggested that low-frequency sound can stimulate the outer hair cells at sound pressure levels 40 dB below the threshold of hearing, when the inner hair cells are not stimulated. ...
Book
In response to growing public concern about the potential health effects of wind turbine noise, the Government of Canada, through the Minister of Health (the Sponsor), asked the Council of Canadian Academies (the Council) to conduct an assessment of the question: Is there evidence to support a causal association between
... Some physiological changes have, however, been demonstrated in humans exposed to infrasound, as shown in one functional MRI study where 110 dB infrasound at a 12 Hz tone activated areas of the primary auditory cortex in the brain [81]. Infrasound at 6 Hz and 130 dB was also able to affect distortion product otoacoustic emissions (DPOAE) in humans [82]. The exposure in these studies was above 100 dB(G) and may be audible to some individuals. ...
Article
Wind turbine noise exposure and suspected health-related effects thereof have attracted substantial attention. Various symptoms such as sleep-related problems, headache, tinnitus and vertigo have been described by subjects suspected of having been exposed to wind turbine noise. This review was conducted systematically with the purpose of identifying any reported associations between wind turbine noise exposure and suspected health-related effects. A search of the scientific literature concerning the health-related effects of wind turbine noise was conducted on PubMed, Web of Science, Google Scholar and various other Internet sources. All studies investigating suspected health-related outcomes associated with wind turbine noise exposure were included. Wind turbines emit noise, including low-frequency noise, which decreases incrementally with increasing distance from the wind turbines. Likewise, evidence of a dose-response relationship linking wind turbine noise to noise annoyance, sleep disturbance and possibly even psychological distress was present in the literature. Currently, there is no statistically significant evidence indicating any association between wind turbine noise exposure and tinnitus, hearing loss, vertigo or headache. Selection bias and information bias of differing magnitudes were found to be present in all current studies investigating wind turbine noise exposure and adverse health effects. Only articles published in English, German or Scandinavian languages were reviewed. Exposure to wind turbines does seem to increase the risk of annoyance and self-reported sleep disturbance in a dose-response relationship. There appears, though, to be a tolerable level of around LAeq of 35 dB. Of the many other claimed health effects of wind turbine noise exposure reported in the literature, however, no conclusive evidence could be found.
Future studies should focus on investigations aimed at objectively demonstrating whether or not measurable health-related outcomes can be proven to fluctuate depending on exposure to wind turbines.
... It is not yet clear, whether some of the symptoms are psychosomatic. It is also not yet clear, whether the effects of low-frequency noise can be traced back solely to cochlear stimulation or other sound transmission paths [9]. The large number of isolated facts listed above are known, but they do not yet reveal cause-and-effect principles, which could be applied for practical purposes, such as legal requirements. ...
... The slope was approximately 12 dB/oct for frequencies below 50 Hz and 6 dB/oct for frequencies from 50 up to 150 Hz. The curve therefore includes the shunt effect of the helicotrema, which was assumed to be dominant for frequencies below 50 Hz (Hensel et al., 2007; Marquardt et al., 2007). The form of the METF for frequencies above 150 Hz was made consistent with the METF described in the American National Standards Institute (ANSI) standard for calculation of loudness (ANSI, 2007). ...
Article
Auditory filter shapes were derived for signal frequencies (f(s)) between 50 and 1000 Hz, using the notched-noise method. The masker spectrum level (N(0)) was 50 dB (re 20 μPa). For f(s) = 63 and 50 Hz, measurements were also made with N(0) = 62 dB for the lower band. The data were fitted using a rounded-exponential filter model, with special consideration of the filtering effects of the middle-ear transfer function (METF) at low frequencies. The results showed: (1) For very low values of f(s), the lower skirts of the filters were only well defined when N(0) = 62 dB for the lower band; (2) the sharpness of both sides of the filters decreased with decreasing f(s); (3) the dynamic range of the filters decreased with decreasing f(s); (4) the equivalent rectangular bandwidth of the filters decreased with decreasing f(s) down to f(s) = 80 Hz, but increased for f(s) below that; (5) the assumed METF, which includes the shunt effect of the helicotrema for frequencies below 50 Hz, increasingly influenced the low-frequency skirt of the filters as f(s) decreased; and (6) detection efficiency worsened with decreasing f(s) for f(s) between 100 and 500 Hz, but improved slightly below that.
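The middle-ear transfer function slopes quoted in the surrounding excerpts — about 12 dB/oct below 50 Hz, where the helicotrema shunt dominates, and 6 dB/oct from 50 up to 150 Hz — can be sketched as a piecewise attenuation curve. The 150 Hz reference point and the exact breakpoints below are illustrative assumptions for the sketch, not values from the papers.

```python
import math

def metf_attenuation_db(f):
    """Approximate middle-ear attenuation (dB, re 150 Hz) implied by the
    quoted slopes: 6 dB/oct from 150 down to 50 Hz, then 12 dB/oct below
    50 Hz (helicotrema shunt dominant). Illustrative only."""
    if f >= 150.0:
        return 0.0
    if f >= 50.0:
        return 6.0 * math.log2(150.0 / f)
    # attenuation accumulated down to 50 Hz, plus the steeper segment below
    return 6.0 * math.log2(150.0 / 50.0) + 12.0 * math.log2(50.0 / f)

for f in (150, 50, 25, 6):
    print(f"{f:>3} Hz -> {metf_attenuation_db(f):.1f} dB")
```

The steepening below 50 Hz is the filtering effect that increasingly shapes the low-frequency skirt of the auditory filters as the signal frequency decreases.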
... The 2f1−f2 component has been demonstrated to be less sensitive to operating point change (Sirjani et al., 2004; Brown et al., 2009). Using different criteria of bias-induced distortion modulation, the dependence on bias frequency was systematically studied in humans for frequencies down to 25 Hz, 6 Hz and 15 Hz, respectively (Bian and Scherrer, 2007; Hensel et al., 2007; Marquardt et al., 2007). In each of these studies, the bias levels required were above those that are heard by humans, but in all of them the change of sensitivity with frequency followed a substantially lower slope than the hearing sensitivity change as shown in Fig. 5. Again this may reflect the OHC origins of acoustic emissions, possibly combined with the processes responsible for the flattening of equal loudness contours for higher level stimuli, since the acoustic emissions methods are using probe stimuli considerably above threshold. ...
Article
Infrasonic sounds are generated internally in the body (by respiration, heartbeat, coughing, etc) and by external sources, such as air conditioning systems, inside vehicles, some industrial processes and, now becoming increasingly prevalent, wind turbines. It is widely assumed that infrasound presented at an amplitude below what is audible has no influence on the ear. In this review, we consider possible ways that low frequency sounds, at levels that may or may not be heard, could influence the function of the ear. The inner ear has elaborate mechanisms to attenuate low frequency sound components before they are transmitted to the brain. The auditory portion of the ear, the cochlea, has two types of sensory cells, inner hair cells (IHC) and outer hair cells (OHC), of which the IHC are coupled to the afferent fibers that transmit "hearing" to the brain. The sensory stereocilia ("hairs") on the IHC are "fluid coupled" to mechanical stimuli, so their responses depend on stimulus velocity and their sensitivity decreases as sound frequency is lowered. In contrast, the OHC are directly coupled to mechanical stimuli, so their input remains greater than for IHC at low frequencies. At very low frequencies the OHC are stimulated by sounds at levels below those that are heard. Although the hair cells in other sensory structures such as the saccule may be tuned to infrasonic frequencies, auditory stimulus coupling to these structures is inefficient so that they are unlikely to be influenced by airborne infrasound. Structures that are involved in endolymph volume regulation are also known to be influenced by infrasound, but their sensitivity is also thought to be low. There are, however, abnormal states in which the ear becomes hypersensitive to infrasound. In most cases, the inner ear's responses to infrasound can be considered normal, but they could be associated with unfamiliar sensations or subtle changes in physiology. 
This raises the possibility that exposure to the infrasound component of wind turbine noise could influence the physiology of the ear.
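The coupling argument in this review — IHC stereocilia driven by fluid velocity versus OHCs coupled directly to displacement — implies that, for a constant-displacement stimulus, the IHC drive falls roughly 6 dB per octave relative to the OHC drive as frequency decreases. A toy sketch of that relative sensitivity (the 500 Hz reference frequency is an arbitrary assumption, not from the paper, and this is the coupling argument only, not a cochlear model):

```python
import math

def ihc_re_ohc_db(f, ref=500.0):
    """Relative drive of velocity-coupled IHCs vs displacement-coupled OHCs
    for a constant-displacement stimulus, in dB re an arbitrary reference
    frequency. Velocity scales with frequency, so the IHC input falls
    ~6 dB/octave (20 dB/decade) toward low frequencies. Illustrative only."""
    return 20.0 * math.log10(f / ref)

for f in (500, 50, 5):
    print(f"{f:>3} Hz -> {ihc_re_ohc_db(f):.1f} dB")
```

By 5 Hz the velocity-coupled input is some 40 dB down relative to the displacement-coupled input, which is why very low-frequency sound can stimulate OHCs at levels below those that are heard.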
... Despite the poor perception by humans, LF sound has been used extensively to study the properties of OHCs during slow and large movements of the cochlear partition (Scholz et al., 1999; Bian and Scherrer, 2007; Hensel et al., 2007; Drexl et al., 2012), but LF sound can also cause alterations of inner ear properties which outlast the duration of the LF stimulation. This was found more than half a century ago in experiments with human subjects in which the hearing threshold was tracked after presentation of intense LF sound. ...
Article
Human hearing is rather insensitive for very low frequencies (i.e. below 100 Hz). Despite this insensitivity, low-frequency sound can cause oscillating changes of cochlear gain in inner ear regions processing even much higher frequencies. These alterations outlast the duration of the low-frequency stimulation by several minutes, for which the term 'bounce phenomenon' has been coined. Previously, we have shown that the bounce can be traced by monitoring frequency and level changes of spontaneous otoacoustic emissions (SOAEs) over time. It has been suggested elsewhere that large receptor potentials elicited by low-frequency stimulation produce a net Ca(2+) influx and associated gain decrease in outer hair cells. The bounce presumably reflects an underdamped, homeostatic readjustment of increased Ca(2+) concentrations and related gain changes after low-frequency sound offset. Here, we test this hypothesis by activating the medial olivocochlear efferent system during presentation of the bounce-evoking low-frequency (LF) sound. The efferent system is known to modulate outer hair cell Ca(2+) concentrations and receptor potentials, and therefore, it should modulate the characteristics of the bounce phenomenon. We show that simultaneous presentation of contralateral broadband noise (100 Hz-8 kHz, 65 and 70 dB SPL, 90 s, activating the efferent system) and ipsilateral low-frequency sound (30 Hz, 120 dB SPL, 90 s, inducing the bounce) affects the characteristics of bouncing SOAEs recorded after low-frequency sound offset. Specifically, the decay time constant of the SOAE level changes is shorter, and the transient SOAE suppression is less pronounced. Moreover, the number of new, transient SOAEs as they are seen during the bounce, are reduced. Taken together, activation of the medial olivocochlear system during induction of the bounce phenomenon with low-frequency sound results in changed characteristics of the bounce phenomenon. 
Thus, our data provide experimental support for the hypothesis that outer hair cell calcium homeostasis is the source of the bounce phenomenon.
... The slope was approximately 12 dB/oct for frequencies below 50 Hz and 6 dB/oct for frequencies from 50 up to 150 Hz. The curve therefore includes the shunt effect of the helicotrema, which was assumed to be dominant for frequencies below 50 Hz (Hensel et al., 2007). The form of the METF for frequencies above 150 Hz was made consistent with the METF described in the ANSI standard for the calculation of loudness (ANSI, 2007). ...
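The piecewise slopes quoted above can be captured in a toy model. The following is a minimal sketch, not code from the cited work; the choice of 150 Hz as the 0-dB reference point and the exact segment boundaries are assumptions based solely on the figures quoted in the passage:

```python
import math

def metf_gain_db(f_hz, f_ref=150.0):
    """Toy middle-ear-transfer-function gain (dB re the value at f_ref).

    Slopes follow the passage above: 6 dB/octave from 50-150 Hz and
    12 dB/octave below 50 Hz (helicotrema shunt assumed dominant there).
    """
    if f_hz >= 50.0:
        return 6.0 * math.log2(f_hz / f_ref)
    # below 50 Hz: continue from the 50-Hz value with the steeper slope
    return 6.0 * math.log2(50.0 / f_ref) + 12.0 * math.log2(f_hz / 50.0)
```

Under these assumptions, the model predicts 6 dB less transmission at 75 Hz than at 150 Hz, and a further 12 dB drop from 50 Hz down to 25 Hz.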
Thesis
Full-text available
A fundamental property of the auditory system is its frequency resolving power. This allows us to process sound in such a way as to provide an effective frequency analysis of it. Understanding the properties of specific filtering units, the auditory filters, across frequency has been essential in the development of auditory models that describe our perception of sound. Unfortunately, little has been known about the characteristics of frequency selectivity in the low-frequency range (i.e. below 200 Hz) and practically no data existed for frequencies below 100 Hz, due to various complicating factors. Nevertheless, a proper description of frequency selectivity at low frequencies has long been needed, not only to advance our limited knowledge, but also in light of the many problems produced by low-frequency noise. The subject of this PhD thesis is a detailed characterization of human frequency selectivity in the low-frequency range. A series of experiments have been carried out with this aim. Careful considerations were necessary to provide proper control of the acoustic signals used in the experiments. Modifications were also necessary in aspects of existing methodologies for their applicability in the low-frequency range. Special attention has also been given to factors thought to influence low-frequency hearing, such as the filtering effects attributed to the middle-ear-transfer function (METF). In the first experiment, characteristics of frequency selectivity were compared across frequency, considering signal frequencies (fs) from 50 up to 1000 Hz, using the notched-noise method. To obtain an adequate description of the lower filter skirt at the lowest values of fs, the lower flanking band had to be emphasized. A main outcome of this experiment was to infer the high degree of influence the METF has on tuning (the latter transfer function was assumed to include the effects of the helicotrema shunt).
Results suggested that the METF increasingly sharpens (i.e. defines) the lower skirt of the auditory filter, especially in the frequency range below 100 Hz. Also, in this range, the efficiency of the detection process was found to improve moderately. A second experiment was carried out to extend the results down to fs = 31.5 Hz, while also providing a setup that allowed higher masker levels without distortion. The method was to measure a psychophysical tuning curve, which could provide more direct estimates of the shape of the auditory filter. This time, rough estimates of the METF were obtained for each subject by measuring an equal-loudness contour (ELC). The results were in overall agreement with those of the previous experiment. The use of lower values of fs together with the ELCs made it possible to resolve the center frequency (CF) of the most apical auditory filter, which appears to be located between about 40 and 50 Hz. Signals below that would be detected via the low-frequency skirt of this "bottom" auditory filter. In both experiments it was found that tuning was affected for fs below 80 Hz, which made the bandwidth of the auditory filter increase below that point. An analysis of the possible effect on tuning of individual METFs (using objective estimates of the latter) could largely explain this phenomenon. Furthermore, individual differences could be expected to increase with decreasing fs, as was observed in the psychoacoustical measures. In a third experiment the relationship between perceived loudness for sinusoids and objective estimates of the METF obtained from distortion-product-isomodulation curves (DPIMC) was investigated. The outcome suggested that these are closely connected, and that the specific frequency dependence of the ELCs is not accounted for in standardized isophon curves. Although qualitatively similar, the ELCs were steeper than the DPIMCs, especially below about 40 Hz.
This comparison made it possible to improve the interpretation of the auditory filtering process inferred from the previous experimental results. The evidence found in this work suggests that the helicotrema increasingly affects auditory tuning as CFs approach the apical end of the cochlea, decreasing the tuning power of the hearing organ and setting a limit as to where the auditory filter with the lowest CF can be located. The results are expected to contribute to the further development of auditory models and standardized isophon curves, for their proper applicability in the low-frequency range.
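Notched-noise data of the kind described above are conventionally fitted with a rounded-exponential ("roex") filter shape. The sketch below is a generic roex(p) illustration under that convention, not the thesis's actual fitting code, and the parameter values are purely illustrative:

```python
import math

def roex_weight(g, p):
    """roex(p) filter weight at normalized frequency deviation g = |f - fc| / fc."""
    g = abs(g)
    return (1.0 + p * g) * math.exp(-p * g)

def erb_hz(fc, p):
    """Equivalent rectangular bandwidth of a symmetric roex(p) filter: 4*fc/p."""
    return 4.0 * fc / p

# Numerically confirm the ERB formula by integrating the weight function
# (trapezoid rule over one side, doubled for symmetry).
fc, p = 100.0, 20.0   # illustrative values only
dg, n = 1e-4, 200000
area = dg * (0.5 * roex_weight(0.0, p)
             + sum(roex_weight(k * dg, p) for k in range(1, n)))
erb_numeric = 2.0 * fc * area
```

A narrower filter (larger p) decays faster away from the center frequency, giving a smaller ERB; the numeric integral matches the closed-form 4*fc/p.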
... This is exactly what is to be expected, since infrasound enters into the hearing system, and is transmitted to the brain, in a similar manner to higher frequency sounds. To quote from Hensel (2007): ...
Article
Full-text available
There is little substance in this Acoustics Today paper for either Ghost Stories or Wind Turbines. So if we take away "Wind Turbines and Ghost Stories" from the title of the paper, we are left with "The effects of infrasound on the human auditory system", which is what the paper really covers and where it should have retained its focus, with hard and well-supported facts. Attempts to enhance its interest by straying into areas which are better suited to the popular media have failed scientifically. The association of the levels of infrasound from wind turbines, as experienced at residences, with the effects of high levels of infrasound used in controlled laboratory experiments is also very weak. Effects at a level of, say, 60 dB should not be compared with those at 120 dB. The paper bases its opinions on false information on the levels of infrasound from wind turbines as experienced at residences. Consequently, its references to wind turbines are largely invalid. There is also the important matter of the social responsibility of scientists, who should present balanced and clear material to the wider public. Chen and Narins may not have been aware of the confusion, misconceptions and distortions which envelop the topic of infrasound from wind turbines. This is partly due to Pierpont's unproven claims of direct pathophysiological effects (Wind Turbine Syndrome), consequent upon exposure to low levels of infrasound, which have been picked up by all objector web pages. Care should be taken to ensure that these web pages are not supplied with incorrect and unsustainable material. One outcome of the paper is a letter to a newspaper from a Fellow of the Acoustical Society of America, from which the following is extracted: "My concern has been strengthened after receiving the April 2012 copy of Acoustics Today, a publication of The Acoustic Society of America. 
It has a timely relevant technical article titled "Wind Turbines...The Effects of Infrasonics on The Human Auditory System," by Annie Chen and Peter Narins, UCLA specialists in Neuroscience & Ecology & Bio-Acoustics. These specialized trained scientists have concluded that low frequency noises in the 19 Hz range of the intensity typical of windmill generators can have psychosomatic adverse effects on humans, such as depression, anxiety, irritability, insomnia and psychosis." 5 www.theday.com/article20120623/OP02/306239979 This is a clear example of the further misinterpretations that can follow a publication which, as demonstrated above, includes poor, and possibly biased, interpretation of its sources.
... This surprisingly high level of sensitivity of OHCs to LF (when compared with IHC activation and perceptual threshold) is strongly supported by recent work examining spontaneous otoacoustic emissions in humans (see also Drexl, Otto, et al., 2016; Jeanson, Wiegrebe, Gürkov, Krause, & Drexl, 2017; Kugler et al., 2014). It has been known for quite some time from human distortion product otoacoustic emissions (e.g., Hensel, Scholz, Hurttig, Mrowinski, & Janssen, 2007) as well as in vivo animal data (Patuzzi, Sellick, & Johnstone, 1984) that LF and IS do affect cochlear processing and that the cochlear aqueduct does pass IS frequencies into the inner ear (Traboulsi & Avan, 2007). The perceptual and other downstream consequences, however, are still not well studied. ...
Article
Full-text available
This review considers the nature of the sound generated by wind turbines, focusing on low-frequency sound (LF) and infrasound (IS), to assess the usefulness of sound measures where people work and sleep. A second focus concerns the evidence for mechanisms of physiological transduction of LF/IS, or the evidence for somatic effects of LF/IS. While the current evidence does not conclusively demonstrate transduction, it does present a strong prima facie case. There are substantial outstanding questions relating to the measurement and propagation of LF and IS and its encoding by the central nervous system relevant to possible perceptual and physiological effects. A range of possible research areas are identified.
... In the past two decades, the bulk of infrasound research has focused on "wind turbine syndrome," the effects of low-frequency sounds from wind turbines that have been reported to cause sleep disturbance, headaches, difficulty concentrating, irritability, fatigue, dizziness, tinnitus, and aural pain (44)(45)(46)(47)(48)(49)(50). This phenomenon is not fully understood, and ongoing research continues to study how low-level infrasound may be causing vestibular consequences (51)(52)(53)(54)(55)(56). ...
Article
Full-text available
Objective: We aim to examine the existing literature on, and identify knowledge gaps in, the study of adverse animal and human audiovestibular effects from exposure to acoustic or electromagnetic waves that are outside of conventional human hearing. Design/Setting/Participants: A review was performed, which included searches of relevant MeSH terms using PubMed, Embase, and Scopus. Primary outcomes included documented auditory and/or vestibular signs or symptoms in animals or humans exposed to infrasound, ultrasound, radiofrequency, and magnetic resonance imaging. The references of these articles were then reviewed in order to identify primary sources and literature not captured by electronic search databases. Results: Infrasound and ultrasound acoustic waves have been described in the literature to result in audiovestibular symptomology following exposure. Technologies emitting infrasound, such as wind turbines and rocket engines, have produced isolated reports of vestibular symptoms following exposure, including dizziness and nausea, as well as auditory complaints such as tinnitus. Occupational exposure to both low-frequency and high-frequency ultrasound has resulted in reports of wide-ranging audiovestibular symptoms, with less robust evidence of symptomology following modern-day exposure via new technology such as remote controls, automated door openers, and wireless phone chargers. Radiofrequency exposure has been linked to both auditory and vestibular dysfunction in animal models, with additional historical evidence of human audiovestibular disturbance following unquantifiable exposure. While several theories, such as the cavitation theory, have been postulated as a cause for symptomology, there is extremely limited knowledge of the pathophysiology behind the adverse effects that particular exposure frequencies, intensities, and durations have on animals and humans.
This has created a knowledge gap in which much of our understanding is derived from retrospective examination of patients who develop symptoms after postulated exposures. Conclusion and Relevance: Evidence for adverse human audiovestibular symptomology following exposure to acoustic waves and electromagnetic energy outside the spectrum of human hearing is largely rooted in case series or small cohort studies. Further research on the pathogenesis of audiovestibular dysfunction following acoustic exposure to these frequencies is critical to understand reported symptoms.
... To further explore potential TACS effects, we analyzed various alternative measures of phasic DPOAE modulation, e.g., we added or focused on sidebands of second or third order, normalized sidebands to DPOAE or noise floor, or extracted modulation-period patterns (Marquardt, Hensel, Mrowinski, & Scholz, 2007) or a modulation index incorporating additional phase information (Hensel, Scholz, Hurttig, Mrowinski, & Janssen, 2007). Analyzing these measures as above did not qualitatively change our results (all corrected P>0.05). ...
... Fig 5C), which might reflect an active process to reduce the peripheral entrainment of auditory stimuli during visual attention. This possible mechanism would be in agreement with studies showing that low-frequency oscillations can modify the mechanical sensitivity of the cochlear receptor [18,38]. On the other hand, during auditory attention, cochlear oscillations precede EEG low-frequency oscillations, and less jitter is observed (Figs 4B and 5D), thus allowing entrainment of cochlear responses to auditory stimuli. ...
Article
Full-text available
Evidence shows that selective attention to visual stimuli modulates the gain of cochlear responses, probably through auditory-cortex descending pathways. At the cerebral cortex level, amplitude and phase changes of neural oscillations have been proposed as a correlate of selective attention. However, whether sensory receptors are also influenced by the oscillatory network during attention tasks remains unknown. Here, we searched for oscillatory attention-related activity at the cochlear receptor level in humans. We used an alternating visual/auditory selective attention task and measured electroencephalographic activity simultaneously with distortion product otoacoustic emissions (a measure of cochlear receptor-cell activity). In order to search for cochlear oscillatory activity, the otoacoustic emission signal was included as an additional channel in the electroencephalogram analyses. This method allowed us to evaluate dynamic changes in cochlear oscillations within the same range of frequencies (1–35 Hz) in which cognitive effects are commonly observed in electroencephalogram studies. We found the presence of low-frequency (<10 Hz) brain and cochlear amplifier oscillations during selective attention to visual and auditory stimuli. Notably, switching between auditory and visual attention modulates the amplitude and the temporal order of brain and inner ear oscillations. These results extend the role of the oscillatory activity network during cognition in neural systems to the receptor level.
... For example, animal studies show that exposure to infrasound can modulate the endocochlear potential, leading to a change in the electrochemical voltage that drives the receptor current through the transduction channels of the auditory hair cells (Salt et al., 2013; Salt and DeMott, 1999). Moreover, high-intensity, low-frequency bias tones can alter distortion product otoacoustic emissions and shift the frequency and level of spontaneous otoacoustic emissions, indicating that cochlear processing is affected by infrasound in humans (Hensel et al., 2007; Kugler et al., 2014; Marquardt et al., 2007). The long-term consequences of exposure to infrasound are not clear, but subjective reports claim that exposure to infrasound affects sleep habits, disrupts work performance, and compromises the well-being of the population (e.g., review Baliatsas et al., 2016). ...
Article
The transmission of infrasound within the human ear is not well understood. To investigate infrasound propagation through the middle and inner ear, velocities of the stapes and round window membrane were measured from 2000 Hz down to very low frequencies (0.9 Hz) in fresh cadaveric human specimens. Results from ear-canal sound stimulation show that below 200 Hz, the middle ear impedance is dominated by its stiffness term, limiting sound transmission to the inner ear. During air conduction, normal ears have approximately equal volume velocities at the oval (stapes) and round windows, known as a two-window system. However, perturbing the impedance of the inner ear with a superior canal dehiscence (SCD), a pathological opening of the bone surrounding the semicircular canal, breaks down this simple two-window system. SCD changes the volume velocity flow in the inner ear, particularly at low frequencies. The experimental findings and model predictions in this study demonstrate that low-frequency auditory and vestibular sound transmission can be affected by a change in the inner-ear impedance due to an SCD.
... In addition, low-frequency noise is recognized as an environmental problem by the World Health Organization (Berglund, Lindvall, Schwela, & Goh, 2000). MATLAB was chosen to handle everything related to programming and signal processing, owing to the variety of tools available in it and its previous successful use in similar experiments (Hensel et al., 2007). The objective was to design and implement a system for measuring equal-loudness contours at low frequencies with low harmonic distortion. ...
Thesis
The aim of this project is the design and implementation of a system for generating low frequencies at high levels with low harmonic distortion, in order to perform psychoacoustic experiments; specifically, to measure absolute thresholds and equal-loudness-level contours (ELCs) monaurally. By means of the interaction between electroacoustic components and MATLAB code, various input and output parameters are configured to gather precise data from a calibrated system. The electroacoustic chain interacts with a psychoacoustic routine based on psychophysical methods, communicating with the participants through a purpose-built push button. The document explains the operation of the system in detail using figures and tables. The system works efficiently in the frequency range from 10 to 200 Hz, with low harmonic distortion and sufficient sound pressure level at the output, making it suitable for running psychoacoustic pilot tests.
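Total harmonic distortion (THD), the figure of merit for such a playback system, can be estimated from single-bin DFTs at the fundamental and its harmonics. The following is a minimal stdlib-only sketch, unrelated to the thesis's actual MATLAB code; the sampling rate, tone frequency and harmonic count are illustrative assumptions:

```python
import math

def tone_amplitude(x, fs, f):
    """Amplitude of the component at frequency f (single-bin DFT).

    Exact (leakage-free) when x spans an integer number of periods of f."""
    n = len(x)
    c = sum(x[i] * math.cos(2 * math.pi * f * i / fs) for i in range(n))
    s = sum(x[i] * math.sin(2 * math.pi * f * i / fs) for i in range(n))
    return 2.0 * math.hypot(c, s) / n

def thd(x, fs, f0, n_harm=5):
    """THD: RMS sum of harmonics 2..n_harm+1 relative to the fundamental."""
    fund = tone_amplitude(x, fs, f0)
    harm = [tone_amplitude(x, fs, k * f0) for k in range(2, n_harm + 2)]
    return math.sqrt(sum(a * a for a in harm)) / fund

# Illustration: a 40-Hz tone with a 1% second harmonic yields THD of 0.01 (-40 dB).
fs, f0, dur = 8000, 40.0, 1.0
n = int(fs * dur)
x = [math.sin(2 * math.pi * f0 * i / fs)
     + 0.01 * math.sin(2 * math.pi * 2 * f0 * i / fs) for i in range(n)]
```

Analyzing an integer number of periods keeps the harmonic bins orthogonal, so no window function is needed in this idealized case.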
... This view was supported by a number of studies conducted in animals as well as in humans demonstrating that the auditory system is equipped with several shunting and attenuation mechanisms, which are already involved in early stages of signal processing and make hearing at low frequencies quite insensitive [2][3][4][5][6][7]. However, the notion that IS cannot be processed within the auditory system has been contested by several studies in which IS-induced changes of cochlear function have been documented in animals [8] as well as in normally hearing human participants [9]. In fact, it has been shown repeatedly that IS can also be perceived by humans if administered at very high sound pressure levels (SPLs) [10][11][12][13][14][15][16][17]. ...
Article
Full-text available
In the present study, the brain’s response towards near- and supra-threshold infrasound (IS) stimulation (sound frequency < 20 Hz) was investigated under resting-state fMRI conditions. The study involved two consecutive sessions. In the first session, 14 healthy participants underwent a hearing threshold measurement as well as a categorical loudness scaling measurement in which the individual loudness perception for IS was assessed across different sound pressure levels (SPL). In the second session, these participants underwent three resting-state acquisitions: one without auditory stimulation (no-tone), one with a monaurally presented 12-Hz IS tone (near-threshold) and one with a similar tone above the individual hearing threshold corresponding to a ‘medium loud’ hearing sensation (supra-threshold). Data analysis mainly focused on local connectivity measures by means of regional homogeneity (ReHo), but also involved independent component analysis (ICA) to investigate inter-regional connectivity. ReHo analysis revealed significantly higher local connectivity in the right superior temporal gyrus (STG) adjacent to primary auditory cortex, in the anterior cingulate cortex (ACC) and, when allowing smaller cluster sizes, also in the right amygdala (rAmyg) during the near-threshold condition, compared to both the supra-threshold and the no-tone conditions. Additional independent component analysis (ICA) revealed large-scale changes of functional connectivity, reflected in a stronger activation of the right amygdala (rAmyg) in the opposite contrast (no-tone > near-threshold) as well as of the right superior frontal gyrus (rSFG) during the near-threshold condition. In summary, this study is the first to demonstrate that infrasound near the hearing threshold may induce changes of neural activity across several brain regions, some of which are known to be involved in auditory processing, while others are regarded as key players in emotional and autonomic control.
These findings thus allow us to speculate on how continuous exposure to (sub-)liminal IS could exert a pathogenic influence on the organism, yet further (especially longitudinal) studies are required in order to substantiate these findings.
... Unfortunately, in the presented study, the G-weighted infrasound levels were significant and at many workplaces stayed within or exceeded the occupational exposure limits. Infrasound can cause adverse effects on the vestibulocochlear organ (audible effects and influence on body equilibrium) and generate audible and various non-audible effects, for example psychological and mental reactions [17]. Some studies have indicated that prolonged exposure to infrasonic noise at levels of about 90 dB-G may cause many psychological and mental reactions: headaches, drowsiness, excessive fatigue, sluggishness, slowing of reaction time, decrease of psychomotor efficiency, irritation, hearing loss, and increase in psychological tension.
Article
Full-text available
Introduction and objectives: Although exposure to audible noise has been examined in many publications, the sources of infrasound in agriculture have not been fully examined and presented. The study presents an assessment of exposure to infrasound from many sources at workplaces in agriculture, with examples of possible ergonomic and health consequences caused by such exposure. Materials and method: Worker-perceived infrasonic noise levels were examined for 118 examples of moving and stationary agricultural machines (modern and old cab-type tractors, old tractors without cabins, small tractors, grinders, chargers, forage mixers, grain cleaners, conveyors, bark sorters and combine-harvesters). Measurements of infrasound were taken with class 1 instruments (a DSA-50 digital sound analyzer and an acoustic calibrator). Noise level measurements were performed in accordance with PN-Z-01338:2010, PN-EN ISO 9612:2011 and ISO 9612:2009. Results and conclusions: The most intense sources of infrasound in the study were modern and old large-size agricultural machines (tractors, chargers and combine-harvesters, and stationary forage mixers with ventilation). The G-weighted infrasound levels were significant and at many analyzed workplaces stayed within or exceeded the occupational exposure limit (LG eq, 8h = 102 dB) when the duration of exposure was longer than 22 min/8-hour working day (most noisy: modern cab-type tractors), 46 min/8-hour working day (most noisy: old cab-type tractors), 73 min/8-hour working day (most noisy: old tractors without cabins), 86 min/8-hour working day (most noisy: combine-harvesters) and 156 min/8-hour working day (most noisy: stationary forage mixers with ventilation). All measured machines generated infrasonic noise exceeding the value LG eq, Te = 86 dB (the occupational exposure limit for workplaces requiring sustained mental concentration).
Infrasound exposure is a particularly important harmful factor for pregnant women and adolescents at workplaces in agriculture. Technical measures limiting exposure to infrasound from both new and used agricultural machinery would therefore be invaluable from an occupational point of view.
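The permissible-duration figures quoted above follow from the equal-energy principle behind LG,eq,8h limits: every 3 dB above the 102 dB-G limit roughly halves the allowed daily exposure. A hedged sketch of that arithmetic (the 3-dB exchange rate is the standard Leq assumption, not a value stated in the study, so the round-trip below is illustrative only):

```python
import math

def allowed_minutes(lg_db, limit_db=102.0, workday_min=480.0):
    """Permissible daily exposure under an equal-energy (3-dB exchange) rule."""
    return workday_min / 10.0 ** ((lg_db - limit_db) / 10.0)

def implied_level(minutes, limit_db=102.0, workday_min=480.0):
    """G-weighted level implied by a quoted permissible duration."""
    return limit_db + 10.0 * math.log10(workday_min / minutes)
```

For instance, the quoted 22 min/day for the noisiest modern cab-type tractors would, under this assumption, correspond to roughly 102 + 10*log10(480/22), i.e. about 115 dB-G.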
... The abruptly increasing slope below the resonance was explained by the shunt impedance of the helicotrema. Hensel et al. (2007) extended the human METF measurements to the infrasound range, and showed that inertia dominates the METF down to at least 6 Hz. ...
Article
Full-text available
Below approximately 40 Hz, the cochlear travelling wave reaches the apex, and differential pressure is shunted through the helicotrema, reducing hearing sensitivity. Just above this corner frequency, a resonance feature is often observed in objectively measured middle-ear-transfer functions (METFs). This study inquires whether overall and fine structure characteristics of the METF are also perceptually evident. Equal-loudness-level contours (ELCs) were measured between 20 and 160 Hz for 14 subjects in a purpose-built test chamber. In addition, the inverse shapes of their METFs were obtained by adjusting the intensity of a low-frequency suppressor tone to maintain an equal suppression depth of otoacoustic emissions for various suppressor tone frequencies (20–250 Hz). For 11 subjects, the METFs showed a resonance. Six of them had coinciding features in both ears, and also in their ELC. For two subjects only the right-ear METF was obtainable, and in one case it was consistent with the ELC. One other subject showed a consistent lack of the feature in their ELC and in both METFs. Although three subjects displayed clear inconsistencies between both measures, the similarity between inverse METF and ELC for most subjects shows that the helicotrema has a marked impact on low-frequency sound perception.
Article
In the present paper, the effect of blade cone angle on the low-frequency noise of horizontal-axis wind turbines is studied to investigate noise reduction by adjusting the blade cone angle at different wind speeds. In wind turbines, a significant part of the noise is in the infrasound range, exposure to which could have adverse effects on human health. In this study, a small turbine is selected as a test case, and the sound field is simulated for blade cone angles from 0° to 10° with a wind speed range between 5 and 25 m/s. The results of the flow simulation show that the change in the output power is < 5% in comparison with the planar turbine. In addition, the calculated results for the low-frequency noise demonstrate that the blade cone angle significantly affects the noise value and the directivity pattern. The investigation of the directivity pattern shows that the blade cone angle can have opposite effects at different observer positions, and therefore noise calculation at only one position is not enough to draw conclusions about the effect of the blade cone angle. The obtained directivity also shows that turbines with blade cone angles of 2.5° and 5° are preferred in order to reduce the maximum noise, in comparison with the planar turbine. Overall, it is concluded that adjusting the blade cone angle at different wind speeds can be an effective solution for reducing low-frequency noise.
Article
Intense, low-frequency sound presented to the mammalian cochlea induces temporary changes of cochlear sensitivity, for which the term ‘Bounce’ phenomenon has been coined. Typical manifestations are slow oscillations of hearing thresholds or the level of otoacoustic emissions. It has been suggested that these alterations are caused by changes of the mechano-electrical transducer transfer function of outer hair cells (OHCs). Shape estimates of this transfer function can be derived from low-frequency-biased distortion product otoacoustic emissions (DPOAE). Here, we tracked the transfer function estimates before and after triggering a cochlear Bounce. Specifically, cubic DPOAEs, modulated by a low-frequency biasing tone, were followed over time before and after induction of the cochlear Bounce. Most subjects showed slow, biphasic changes of the transfer function estimates after low-frequency sound exposure relative to the preceding control period. Our data show that the operating point changes biphasically on the transfer function with an initial shift away from the inflection point followed by a shift towards the inflection point before returning to baseline values. Changes in transfer function and operating point lasted for about 180 s. Our results are consistent with the hypothesis that intense, low-frequency sound disturbs regulatory mechanisms in OHCs. The homeostatic readjustment of these mechanisms after low-frequency offset is reflected in slow oscillations of the estimated transfer functions.
Chapter
Isolated research has recently unveiled potential new noise-related illnesses associated with low-frequency noise and infrasound exposure, most notably vibroacoustic disease, but these links have not yet been accepted by the wider medical community. This chapter summarizes the most recent research in these areas. It discusses body resonance and damage potential, hearing loss, cardiovascular disease, and vibroacoustic disease (VAD). The chapter also presents a summary of people's loss of sound perception from other agents. Studies relating cardiovascular disease and noise exposure mostly fall into two categories: hypertension or high blood pressure, and ischemic diseases or blood flow restrictions. Although it can be argued that the link between cardiovascular disease and noise is stress-related and therefore psychologically based, the discussion is included with physiological effects, as it is in most of the literature. Low-frequency noise and infrasound concerns are also discussed.
Article
We have cyclically suppressed the 2f1-f2 distortion product otoacoustic emission (DPOAE) with low-frequency tones (17-97 Hz) as a way of differentially diagnosing the endolymphatic hydrops assumed to be associated with Ménière's syndrome. Round-window electrocochleography (ECochG) was performed in subjects with sensorineural hearing loss (SNHL) on the day of DPOAE testing, and the amplitude of the summating potential (SP) was measured from it to support the diagnosis of Ménière's syndrome based on symptoms. To summarize and compare the cyclic patterns of DPOAE modulation in these groups, we used the simplest model of DPOAE generation and modulation, assuming that the DPOAEs were generated by a first-order Boltzmann nonlinearity, so that the magnitude of the 2f1-f2 DPOAE resembled the third derivative of the Boltzmann function. We also assumed that the modulation of the DPOAEs by the low-frequency tones was simply due to a sinusoidal change in the operating point on the Boltzmann nonlinearity. We found the cyclic DPOAE modulation to differ in subjects with Ménière's syndrome (n = 16) compared with the patterns in normal subjects (n = 16) and in other control subjects with non-Ménière's SNHL and/or vestibular disorders (n = 13). The DPOAEs of normal and non-Ménière's ears were suppressed more during negative ear-canal pressure than during positive ear-canal pressure. By contrast, DPOAE modulation in Ménière's ears with abnormal ECochG was greatest during positive ear-canal pressures. This test may provide a tool for diagnosing Ménière's syndrome in its early stages, and might be used to investigate the pathological mechanism underlying the hearing symptoms of this syndrome.
Article
Background and Purpose: Low-frequency tones (LFT) and infrasound (IS) are regarded as potentially hazardous to human health. We aimed to assess LFT-/IS-induced activation of the auditory cortex using fMRI. Material and Methods: fMRI was used to investigate LFT/IS perception in 17 healthy female volunteers. Short tone bursts of 12 Hz and 500 Hz were delivered directly into the right external ear canal through a 12 m long silicone tube and an ear plug. Sound pressure levels and spectral analyses of the stimuli and scanner noise were measured in situ using a metal-free optical microphone and a fibre-optic cable. Results: Level-dependent activation of the superior temporal gyrus, i.e. Brodmann areas (BA) 41 and 42 as well as BA 22, was delineated subsequent to acoustic stimulation with 12 Hz and 500 Hz stimuli. Thresholds for 12 Hz perception were between 110 and 90 dB SPL in normal-hearing subjects. Spectral analysis revealed the occurrence of harmonics together with the LFT as well as scanner noise, of which the 36 Hz harmonics interfered with IS exposure at 12 Hz. Conclusion: Our results provide evidence that auditory cortex activation may be induced by LFT/IS exposure, depending on the sound pressure levels applied. The clinical implications of our findings will have to be addressed by subsequent studies involving patients presumptively suffering from LFT-dependent disorders.
Article
Low-frequency tones (LFT) and infrasound (IS) are looked upon as potentially hazardous to human health. We aimed at assessing LFT/IS-induced activation of the auditory cortex by using fMRI. fMRI was used to investigate LFT/IS perception in 17 healthy volunteers. Short tone bursts of 12, 36, 48 and 500 Hz were delivered directly into the right external ear canal through a 12-m long silicone tube and an ear plug. Sound pressure levels (SPL) and spectral analysis of the stimuli and scanner noise were measured in situ by using a metal-free optical microphone and a fiber-optic cable. SPL-dependent activation of the superior temporal gyrus, i.e. Brodmann areas (BA) 41 and 42 as well as BA 22, was delineated subsequent to acoustic stimulation with 12-, 48- and 500-Hz stimuli. Thresholds for LFT/IS-induced brain activation were between 110 and 90 dB SPL in normal hearing subjects. Spectral analysis revealed the occurrence of harmonics together with LFT, of which 36-Hz harmonics interfered with IS exposure at 12 Hz as well as scanner noise. Our results provide evidence that auditory cortex activation may be induced by LFT/IS exposure, depending on sound pressure levels applied. Clinical implications of our findings will have to be addressed by subsequent studies involving patients presumptively suffering from LFT-dependent disorders.
Article
In our industrialized world, we are surrounded by occupational, recreational, and environmental noise. Very loud noise damages the inner-ear receptors and results in hearing loss, subsequent problems with communication in the presence of background noise, and, potentially, social isolation. There is much less public knowledge about noise exposure that produces only temporary hearing loss but in the long term results in hearing problems due to damage to high-threshold auditory nerve fibers. Early exposures of this kind, such as in neonatal intensive care units, manifest themselves at a later age, sometimes as hearing loss but more often as an auditory processing disorder. There is even less awareness of changes in the auditory brain caused by repetitive daily exposure to the same type of low-level occupational or musical sound. This low-level but continuous environmental noise exposure is well known to affect speech understanding, produce non-auditory problems ranging from annoyance and depression to hypertension, and cause cognitive difficulties. Additionally, internal noise, such as tinnitus, has effects on the brain similar to those of low-level external noise. Noise and the Brain discusses and synthesizes the underlying brain mechanisms, as well as potential ways to prevent or alleviate these aberrant brain changes caused by noise exposure.
Article
Employees in occupational environments that involve vibrating machinery and vehicles of any kind are adversely exposed to noise from multiple sources. The corresponding noise frequencies (mainly the infrasound ones) are therefore of high interest, especially from the viewpoint of sustainability, owing to their potential effects on human safety and health (H_S&H) in sustainable engineering projects. Moreover, visualizing occupational safety and health (OSH) in workplaces (by evaluating the infrasound and audible noise frequencies generated by diesel engines), which unveils the social dimension of sustainability, could help a safety officer mitigate crucial risk factors in the OSH field and protect employees more efficiently by taking the most essential safety measures. This study (i) suggests a technique to determine the infrasound and audible sound frequencies produced by the vibrations of diesel engines running on biofuels (a sustainable use of resources), in order to evaluate potential effects on human safety and health in the workplaces of sustainable engineering projects, and (ii) ultimately aims to contribute to improving the three "sustainability pillars" (economic, social, and environmental). It provides experimental results on the noise frequencies (in the infrasound and audible spectrum) that a diesel motor generates by vibration, for different engine speeds (850, 1150, and 2000 rpm) and a variety of biofuel mixtures (B20-D80, B40-D60, B60-D40, and B80-D20). The article shows that the fuel blend meaningfully affects the generated noise; in particular, biofuel blends of diesel oil and biodiesel (reflecting the environmental dimension of sustainability) can produce various noise frequencies, determined in the infrasound and audible spectra (~10–23 Hz).
By improving the OSH situation, the suggested technique will help enterprises achieve the best allocation of limited financial resources (the economic dimension of sustainability), leaving financial managers more budget available for implementing other risk-reduction projects.
Article
In this paper, a new method is introduced to derive a cochlear transducer function from measuring distortion product otoacoustic emissions (DPOAEs). It is shown that the cubic difference tone (CDT, 2f1-f2) is produced from the odd-order terms of a power series that approximates a nonlinear function characterizing cochlear transduction. Exploring the underlying mathematical formulation, it is found that the CDT is proportional to the third derivative of the transduction function when the primary levels are sufficiently small. DPOAEs were measured from nine gerbils in response to two-tone signals biased by a low-frequency tone with different amplitudes. The CDT magnitude was obtained at the peak regions of the bias tone. The results of the experiment demonstrated that the shape of the CDT magnitudes as a function of bias levels was similar to the absolute value of the third derivative of a sigmoidal function. A second-order Boltzmann function was derived from curve fitting the CDT data with an equation that represents the third derivative of the Boltzmann function. Both the CDT-bias function and the derived nonlinear transducer function showed effects of primary levels. The results of the study indicate that the low-frequency modulated DPOAEs can be used to estimate the cochlear transducer function.
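The key relation in the abstract above — that the cubic difference tone (CDT) magnitude is proportional to the third derivative of the cochlear transducer function at low primary levels — can be sketched numerically. The following is a minimal illustration, not the authors' fitting code: the second-order Boltzmann form and its parameters are generic placeholders, and the derivative is taken by finite differences.

```python
import numpy as np

def boltzmann2(x, x0=0.0, s1=1.0, s2=0.3):
    # Second-order Boltzmann function of the kind used to model
    # outer-hair-cell mechano-electrical transduction.
    # x0, s1, s2 are illustrative parameters, not fitted values.
    return 1.0 / (1.0 + np.exp(-(x - x0) / s1) * (1.0 + np.exp(-(x - x0) / s2)))

def third_derivative(f, x, h=1e-2):
    # Central finite-difference estimate of f'''(x).
    return (f(x + 2*h) - 2*f(x + h) + 2*f(x - h) - f(x - 2*h)) / (2 * h**3)

# Predicted CDT-vs-bias shape: the absolute value of the third
# derivative of the transducer function, evaluated over a range of
# bias displacements (arbitrary units).
bias = np.linspace(-5.0, 5.0, 201)
cdt_shape = np.abs(third_derivative(boltzmann2, bias))
```

Fitting `cdt_shape` (in the paper, the measured CDT magnitude at the bias-tone peaks) against an analytic third derivative is what allows the Boltzmann parameters, and hence the transducer function, to be recovered.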
Article
Low-frequency modulation of distortion product otoacoustic emissions (DPOAEs) can be used to estimate a nonlinear transducer function (fTr) of the cochlea. In gerbils, DPOAEs were measured while presenting a high-level bias tone. Within one period of the bias tone, the magnitudes of the cubic difference tone (CDT, 2f1 - f2) demonstrated two similar modulation patterns (MPs), each resembling the absolute value of the third derivative of the fTr. The center peaks of the MPs occurred at positive sound pressures for rising bias pressure (loading of the cochlear transducer), and at more negative pressures for decreasing bias amplitude (unloading). The corresponding fTr revealed a sigmoid-shaped hysteresis loop with counterclockwise traversal. Physiologic indices that characterized the double MP varied with primary level. A Boltzmann-function-based model with negative damping as a feedback component was proposed. The model was able to replicate the experimental results. Model parameters fitted to the CDT data indicated higher transducer gain and a more prominent feedback role at lower primary levels. Both the physiologic indices and the model parameters suggest that the cochlear transducer dynamically changes its gain with input signal level and that the nonlinear mechanism is a time-dependent feedback process.
Article
Low-frequency noise, the frequency range from about 10 Hz to 200 Hz, has been recognised as a special environmental noise problem, particularly for sensitive people in their homes. Conventional methods of assessing annoyance, typically based on A-weighted equivalent level, are inadequate for low-frequency noise and lead to incorrect decisions by regulatory authorities. There have been a large number of laboratory measurements of annoyance by low-frequency noise, each with different spectra and levels, making comparisons difficult, but the main conclusion is that annoyance from low frequencies increases rapidly with level. Additionally, the A-weighted level underestimates the effects of low-frequency noises. There is a possibility of learned aversion to low-frequency noise, leading to annoyance and stress which may receive unsympathetic treatment from regulatory authorities. In particular, problems of the Hum often remain unresolved. An approximate estimate is that about 2.5% of the population may have a low-frequency threshold at least 12 dB more sensitive than the average threshold, corresponding to nearly 1,000,000 persons in the 50-59 year old age group in the EU-15 countries. This is the group which generates many complaints. Criteria specific to low-frequency noise have been introduced in some countries, but do not deal adequately with fluctuations. Validation of these criteria has covered only a limited range of noises and subjects.
Article
The human perception of sound at frequencies below 200 Hz is reviewed. Knowledge about our perception of this frequency range is important, since much of the sound we are exposed to in our everyday environment contains significant energy in this range. Sound at 20-200 Hz is called low-frequency sound, while for sound below 20 Hz the term infrasound is used. The hearing becomes gradually less sensitive for decreasing frequency, but despite the general understanding that infrasound is inaudible, humans can perceive infrasound, if the level is sufficiently high. The ear is the primary organ for sensing infrasound, but at levels somewhat above the hearing threshold it is possible to feel vibrations in various parts of the body. The threshold of hearing is standardized for frequencies down to 20 Hz, but there is a reasonably good agreement between investigations below this frequency. It is not only the sensitivity but also the perceived character of a sound that changes with decreasing frequency. Pure tones become gradually less continuous, the tonal sensation ceases around 20 Hz, and below 10 Hz it is possible to perceive the single cycles of the sound. A sensation of pressure at the eardrums also occurs. The dynamic range of the auditory system decreases with decreasing frequency. This compression can be seen in the equal-loudness-level contours, and it implies that a slight increase in level can change the perceived loudness from barely audible to loud. Combined with the natural spread in thresholds, it may have the effect that a sound, which is inaudible to some people, may be loud to others. Some investigations give evidence of persons with an extraordinary sensitivity in the low and infrasonic frequency range, but further research is needed in order to confirm and explain this phenomenon.
Article
Definitions of infrasound and low-frequency noise are discussed and the fuzzy boundary between them described. Infrasound, in its popular definition as sound below a frequency of 20 Hz, is clearly audible, the hearing threshold having been measured down to 1.5 Hz. The popular concept that sound below 20 Hz is inaudible is not correct. Sources of infrasound are in the range from very low-frequency atmospheric fluctuations up into the lower audio frequencies. These sources include natural occurrences, industrial installations, low-speed machinery, etc. Investigations of complaints of low-frequency noise often fail to measure any significant noise. This has led some complainants to conjecture that their perception arises from non-acoustic sources, such as electromagnetic radiation. Over the past 40 years, infrasound and low-frequency noise have attracted a great deal of adverse publicity on their effects on health, based mainly on media exaggerations and misunderstandings. A result of this has been that the public takes a one-dimensional view of infrasound, concerned only by its presence, whilst ignoring its low levels.
Article
Background The low frequency modulation of distortion product otoacoustic emissions (DPOAEs) is an objective audiometric method that appears to be a useful tool for the diagnosis of endolymphatic hydrops (EH), e.g. in patients with Menière’s disease, or in those who present only some of the symptoms of the disease. Method Low-frequency modulated DPOAEs were registered in 20 patients with unilateral Menière’s disease (13 women and 7 men, aged 40–66 years) and were compared to a control group matched in age and gender. As a diagnostic parameter, the ‘modulation index’ MI=1/2 MS/DM was used (MS or modulation span, being the difference between the maximal and the minimal DPOAE-amplitude, and DM, being the mean of the suppressed stationary DPOAE-amplitude). Results In the patients with unilateral Menière’s disease, MI was lower than in the control group. This difference was highly significant. In 56% of the patients’ contralateral ears MI was lower than the cut off-value and significantly lower than in the control group, but did not differ significantly from the patients’ ipsilateral ears. Conclusion The registration of low-frequency modulated DPOAEs is comparable to the generally applied transtympanic electrocochleography in its diagnostic validity. The method is fast and non-invasive and could be applied to monitor the course of the disease.
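The modulation index defined in the abstract above is straightforward to compute from a recorded DPOAE amplitude trace. A minimal sketch (function name and sample values are illustrative, not from the study):

```python
def modulation_index(dpoae_amplitudes):
    # MI = (1/2) * MS / DM, where MS is the modulation span
    # (max - min DPOAE amplitude over one bias-tone period) and
    # DM is the mean of the suppressed stationary DPOAE amplitude.
    ms = max(dpoae_amplitudes) - min(dpoae_amplitudes)
    dm = sum(dpoae_amplitudes) / len(dpoae_amplitudes)
    return 0.5 * ms / dm

# Example: DPOAE amplitudes (arbitrary linear units) sampled over
# one period of the bias tone.
mi = modulation_index([4.0, 6.0, 8.0, 6.0, 4.0, 2.0])  # MS = 6, DM = 5, MI = 0.6
```

A strongly modulated DPOAE (large swing relative to its mean) yields a high MI; the diagnostic finding above is that Ménière's ears show a reduced MI.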
Article
By using masking-period patterns (MPPs), produced by sinusoidal as well as impulsive maskers with very low-frequency components, it could be demonstrated for man that the basilar membrane seems to move in phase for low-frequency sinusoids and preserves the waveform for non-sinusoids with a low-frequency spectrum within the two basal turns of the cochlea. The shape of an MPP is strongly correlated with the second derivative of the time function of the sound pressure at the eardrum for frequencies below 40 Hz, but with the first derivative for frequencies above 40 Hz. This is presumably due to the form of the cross-section of the cochlea and the size of the helicotrema in man. Data from many MPPs are shown and discussed, together with patterns given in former papers, on a qualitative and a quantitative level, leading to the following proposal: the higher peak in the MPP belongs to a kind of suppression which would correspond to displacement of the basilar membrane towards the scala tympani, while the lower peak belongs to the excitation which would correspond to displacement of the basilar membrane towards the scala vestibuli.
Article
Claims that infrasound adversely affects human performance, makes people "drunk," and directly elicits nystagmus, have not been clearly demonstrated in any experimental study. The effects obtained at low intensity levels of 105 to 120 dB, if they can be substantiated at all, have been exaggerated. Recent well-designed studies conducted at higher intensity levels have found no adverse effects of infrasound on reaction time or human equilibrium. The levels at which infrasound becomes a hazard to man are still unknown. However, the hazardous levels are certain to be much higher than have been suggested in some of the literature. The preliminary exposure limits which were proposed several years ago for use in the U.S.A. are still considered safe and adequate based on present knowledge. Caution is necessary in future research because artifacts produced by faulty experimental procedures can suggest genuine psychological or physiological effects.
Article
The phenomenon of otoacoustic emissions is discussed in relation to the question: is the cochlear travelling wave actively enhanced? Distinctions are made between active loss reduction and true amplification, and between different types of emission source. The dynamic control exercised by the cochlea over the level of mechanical activity is analysed. An adaptive mechanism, capable of oscillatory transient behaviour, is found both acoustically and psychophysically in response to strong low-frequency stimulation. A feedback model is presented and used to predict aspects of emission behaviour.
Article
Intracellular current administration evokes rapid, graded, and bidirectional mechanical responses of isolated outer hair cells from the mammalian inner ear. The cells become shorter in response to depolarizing and longer in response to hyperpolarizing currents in the synaptic end of the cell. The cells respond with either an increase or decrease in length to transcellular alternating current stimulation. The direction of the movement with transcellular stimuli appears to be frequency dependent. Iontophoretic application of acetylcholine to the synaptic end of the cell decreases its length. The microarchitecture of the organ of Corti permits length changes of outer hair cells in a manner that could significantly influence the mechanics of the cochlear partition and thereby contribute to the exquisite sensitivity of mammalian hearing.
Article
The magnitude and phase characteristics of the sound pressure at the eardrum‐to‐cochlear microphonic potential transfer function were measured at low frequencies for four species: cat, chinchilla, guinea pig, and kangaroo rat. The former two and the latter two demonstrated radically different properties in both magnitude and phase response. It is suggested that, since at low frequencies the middle‐ear transfer functions of these four species are similar, the discrepancies are caused by differing acoustic input impedances of the cochleas that are influenced by the physical dimensions of the helicotrema and of the cochlear spiral.
Article
Nineteen human subjects were exposed to repeated three-minute tones in the sound pressure level range from 119 to 144 dB and the frequency range from 2–22 cps. The tones were produced in an acoustic test booth by a piston-cylinder arrangement, driven by a variable speed direct current motor. Eight subjects showed no adverse effects. Temporary threshold shifts (TTS) of 10 to 22 dB in the frequency range from 3 000 to 8 000 cps were observed in the remaining 11 subjects. In addition, the 7 and 12 cps signals produced considerable masking over the frequency range from 100 to 4 000 cps.
Article
A 'suppression-period pattern' for acoustical responses has been measured in a similar way to the masking-period pattern. A triggered low-frequency masker decreases the acoustical response to a short test signal differently at different presentations within the period of the masker. The two patterns show such close relationships that a masking-period pattern can be developed out of a set of suppression-period patterns: masked threshold is reached under the same parameter conditions under which the acoustic responses diminish. The rules elaborated for masking-period patterns seem to hold for suppression-period patterns as well.
Article
From experiments in animals and investigations in humans, it is known that the normally phase-dependent masking of a short stimulus by a low-frequency continuous tone does not occur in the case of endolymphatic hydrops. The recording of the masked threshold of short tone stimuli in a loud 30 Hz tone is evaluated for the clinical diagnosis of Ménière's disease. To this purpose, the main parameters of the measurements (type, frequency and duration of the stimulus, and intensity of the masker) and their effects on phase-dependent masking and pitch shift are investigated. Stimuli above 2 kHz are masked less than those of lower frequencies. Wide-band stimuli are less useful, since only the low-frequency component of their spectrum is masked. The tone stimuli should be short (1-2 ms) in order to make the measurement of the phase dependence more accurate. With increasing masker level, the masking at phase 0° increases in proportion to the level; at phase 270° the increase is twice as large. The pitch shift perceived in low-tone masking depends on the phase of the stimulus and on the levels of the stimulus and the masking tone. The use of brain stem recordings in the investigation of phase-dependent low-tone masking is problematic, since well-synchronizing stimuli with high-frequency spectral components are masked poorly.
Article
Acoustic two-tone distortions are generated during non-linear mechanical amplification in the cochlea. Generation of the cubic distortion 2f1-f2 depends on asymmetric components of a non-linear transfer function, whereas the difference tone f2-f1 relies on symmetric components. Therefore, a change of the operating point, and hence the symmetry, of the cochlear amplifier could be strongly reflected in the level of the f2-f1 distortion. To test this hypothesis, low-frequency tones (5 Hz) were used to bias the position of the cochlear partition in the gerbil. Phase-correlated changes of f2-f1 occurred at bias tone levels where there were almost no effects on 2f1-f2. Higher levels of the bias tone induced pronounced changes of both distortions. These results are qualitatively in good agreement with the results of a simulation in which the operating point of a Boltzmann function was shifted. This function is similar to those used to describe outer hair cell (OHC) transduction. To influence OHC motility, salicylate was injected. It caused a decrease of the 2f1-f2 level and an increase in the level of f2-f1. Such reciprocal changes of both distortions can, again, be interpreted in terms of a shift of the operating point of the cochlear amplifier along a non-linear transfer characteristic. To directly influence the cochlear amplifier, DC current was injected into the scala media. Large negative currents (magnitude > 2 μA) caused a pronounced decrease of 2f1-f2 (> 15 dB), while positive currents had more complex effects, with increasing and/or decreasing 2f1-f2 distortion level. The effects were time- and primary-level-dependent. Changes of f2-f1 for DC currents with magnitudes > 2 μA were in most cases larger than those of 2f1-f2, and reversed for certain primary levels. The current effects probably result from a combination of changing the endocochlear potential and shifting the operating point along a non-linear transfer function.
Article
Low frequency acoustical biasing of the cochlear partition with 5 Hz tones produces phase correlated changes of the acoustic two-tone distortions 2f1-f2 and f2-f1. Pronounced changes of f2-f1 and only small changes of 2f1-f2 for lower bias tone levels indicate that there is a close relation between changes in the difference tone f2-f1 and changes in the operating point of the cochlear amplifier (Frank and Kössl, 1996). To further investigate this relationship, the cochlear partition was additionally biased by current injection into the scala media of the gerbil. The injection of low frequency (5 Hz) AC currents (max. 1.3 microA) has a similar effect to that caused by low frequency tones in that both produce phase correlated changes of the two distortions (so-called biasing patterns), with stronger effects on f2-f1. For bias tone levels of about 105 dB SPL and current values of 1.3 microA, the effects are approximately of the same size. A change in the f2-f1 biasing pattern that can be found for increasing bias tone levels can also be seen for increasing primary levels. Changing the setpoint of the cochlear amplifier through the injection of DC current into the scala media during acoustical biasing of the cochlear partition produces the same changes of f2-f1 biasing patterns as increasing the primary levels. This indicates that the operating point of the outer hair cells that respond to the primary tones is not only influenced by low frequency biasing stimuli but also by shifts with increasing primary levels.
Article
The 2 f1-f2 distortion product otoacoustic emission (DP) was measured in 20 normal hearing subjects and 15 patients with moderate cochlear hearing loss and compared to the pure-tone hearing threshold, measured with the same probe system at the f2 frequencies. DPs were elicited over a wide primary tone level range between L2 = 20 and 65 dB SPL. With decreasing L2, the L1-L2 primary tone level difference was continuously increased according to L1 = 0.4L2 + 39 dB, to account for differences of the primary tone responses at the f2 place. Above 1.5 kHz, DPs were measurable with that paradigm on average within 10 dB of the average hearing threshold in both subject groups. The growth of the DP was compressive in normal hearing subjects, with strong saturation at moderate primary tone levels. In cases of cochlear impairment, reductions of the DP level were greatest at lowest, but smallest at highest stimulus levels, such that the growth of the DP became linearized. The correlation of the DP level to the hearing threshold was found to depend on the stimulus level. Maximal correlations were found in impaired ears at moderate primary tone levels around L2 = 45 dB SPL, but at lowest stimulus levels in normal hearing (L2 = 25 dB SPL). At these levels, 17/20 impaired ears and 14/15 normally hearing ears showed statistically significant correlations. It is concluded that for a clinical application and prediction of the hearing threshold, DPs should be measured not only at high, but also at lower primary tone levels.
Article
Low-frequency masking is a recent clinical procedure for the differential diagnosis of sensory hearing loss. Currently this requires the recording of the phase-dependent masked subjective threshold, which is time consuming and not always accurate. As an objective method, the recording of modulated distortion product otoacoustic emissions (DPOAEs) can be performed continuously, and with better frequency specificity. Results of measurements of the low-frequency modulated two-tone DPOAE 2f1-f2 in the human ear, and its dependence on various acoustic parameters, are presented here for the first time. Similar to the masked hearing threshold, the pattern of the phase-dependent modulated DPOAEs displayed two minima, at the phases of maximal rarefaction and condensation, respectively, with a latency of about 4 ms (suppressor frequency 32.8 Hz). The smaller dip, at maximal condensation, appeared only for a high suppressor level, and for a low level of the primary tone f2. The modulating effect measured for the primary frequencies f1 = 2.5 kHz and f2 = 3 kHz, decreased for 4 and 4.8 kHz, and vanished for 5 and 6 kHz. The results are discussed using a cubic distortion model based on the Boltzmann function for mechano-electrical transduction of the hair cells. The saturation behavior of the increase of the DPOAE level at different phases is compared with the growth rates of the DPOAE level in normal hearing and in sensory hearing loss.
Article
Previous studies described a systematic asymmetry of the level of the 2f(1)-f(2) distortion product otoacoustic emission (DP) in the space of the primary tones levels L(1) and L(2) in normal-hearing humans. Optimal primary tone level separations L(1)-L(2), which result in maximum DP levels, were close to L(1)=L(2) at high levels, but continuously increased with decreasing stimulus level towards L(1)>L(2) (Gaskill and Brown, 1990, J. Acoust. Soc. Am. 88, 821-839). At these optimal L(1)-L(2), however, not only DP levels in normal hearing were maximal, but also trauma-induced DP reductions. A linear equation that approximates optimal L(1)-L(2) level separations thus was suggested to be optimum for use in clinical applications (Whitehead et al., 1995, J. Acoust. Soc. Am. 97, 2359-2377). It was the aim of this study to extend the generality of optimal L(1)-L(2) separations to the typical human test frequency range for f(2) frequencies between 1 and 8 kHz. DPs were measured in 22 normal-hearing human ears at 61 primary tone level combinations, with L(2) between 5 and 65 dB SPL and L(1) between 30 and 70 dB SPL (f(2)/f(1)=1.2). It was found that the systematic dependence of the maximum DP level on the L(1)-L(2) separation is independent on frequency. Optimal L(1)-L(2) level separations may well be approximated by a linear equation L(1)=a L(2)+(1-a) b (after Whitehead et al., 1995) with parameters a=0.4 and b=70 dB SPL at f(2) frequencies between 1 and 8 kHz and L(2) levels between 20 and 65 dB SPL. Below L(2)=20 dB SPL, the optimal L(1) was found to be almost constant. Following previous notions (Gaskill and Brown, 1990), an analysis of basilar membrane response data in experimental animals (after Ruggero and Rich, 1991, Hear. Res. 51, 215-230) is further presented that relates optimal L(1)-L(2) separations to frequency-selective compression of the basilar membrane. 
Based on the assumption that optimal conditions for the DP generation are equal primary tone responses at the f(2) place, a linear increase of the optimal L(1)-L(2) level separation is graphically demonstrated, similar to our results in human ears.
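The linear level rule quoted in this abstract, L(1) = a·L(2) + (1-a)·b with a=0.4 and b=70 dB SPL, can be sketched directly. The clamp below L2 = 20 dB SPL is an interpretation of the reported near-constant optimal L1, not a parameter from the study:

```python
def optimal_l1(l2_db, a=0.4, b=70.0):
    """Optimal primary level L1 (dB SPL) for a given L2, per the linear rule
    L1 = a*L2 + (1-a)*b (after Whitehead et al., 1995) with a=0.4, b=70 dB SPL.
    Below L2 = 20 dB SPL the abstract reports an almost constant optimal L1,
    so the input is clamped there (an interpretation, not a fitted value)."""
    l2_db = max(l2_db, 20.0)  # clamp: optimal L1 ~constant below L2 = 20 dB SPL
    return a * l2_db + (1.0 - a) * b

# Example: L2 = 30 dB SPL -> L1 = 0.4*30 + 0.6*70 = 54 dB SPL
print(optimal_l1(30.0))  # 54.0
```

For comparison, the rule also reproduces the commonly used equal-level case at the top of the fitted range: at L2 = 70 dB SPL it yields L1 = 70 dB SPL.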
Article
Hypersensitivity to sound is a common description of distinct nosological phenomena of peripheral and central hearing disorders, which are characterized by intense suffering from the acoustic environment. One can distinguish between recruitment accompanying inner ear hearing loss, hyperacusis with a general hypersensitivity to sound of any frequency, and phonophobia as an anxious sensitivity towards specific sound largely independent of its volume. While recruitment can be described as a peripheral reaction caused by a lack of outer hair cell moderation, hyperacusis and phonophobia represent disturbances of central auditory processing without peripheral pathology, often combined with psychosomatic reactions. Due to insufficient efferent inhibition, hyperacusis often follows psychovegetative exhaustion. In cases of phonophobia, peripheral and efferent hearing functions are usually intact, but certain learning (conditioning) processes lead to development of specific reactions and avoidance patterns to certain content-related acoustic stimuli. This article describes those different phenomena with regard to their clinical appearance, diagnostics, and possibilities for therapy.
Article
DPOAE temporary level shift (TLS) at 2f(1)-f(2) and f(2)-f(1), ABR temporary threshold shift (TTS), and detailed histopathological findings were compared in three groups of chinchillas that were exposed for 24 h to an octave band of noise (OBN) centered at 4 kHz with a sound pressure level (SPL) of 80, 86 or 92 dB (n=3,4,6). DPOAE levels at 39 frequencies from f(1)=0.3 to 16 kHz (f(2)/f(1)=1.23; L(2) and L(1)=55, 65 and 75 dB, equal and differing by 10 dB) and ABR thresholds at 13 frequencies from 0.5 to 20 kHz were collected pre- and immediately post-exposure. The functional data were converted to pre- minus post-exposure shift and overlaid upon the cytocochleogram of cochlear damage using the frequency-place map for the chinchilla. The magnitude and frequency place of components in the 2f(1)-f(2) TLS patterns were determined and group averages for each OBN SPL and L(1), L(2) combination were calculated. The f(2)-f(1) TLS was also examined in ears with focal lesions equal to or greater than 0.4 mm. The 2f(1)-f(2) TLS (plotted at f(1)) and TTS aligned with the extent and location of damaged supporting cells. The TLS patterns over frequency had two features which were unexpected: (1) a peak at about a half octave above the center of the OBN with a valley just above and below it and (2) a peak (often showing enhancement) at the apical boundary of the supporting-cell damage. The magnitudes of the TLS and TTS generally increased with increasing SPL of the exposure. The peaks of the TLS and TTS, as well as the peaks and valleys of the TLS pattern moved apically as the SPL of the OBN was increased. However, there was little consistency in the pattern relations with differing L(1), L(2) combinations. In addition, neither the 2f(1)-f(2) nor f(2)-f(1) TLS for any L(1), L(2) combination reliably detected focal lesions (100% OHC loss) from 0.4 to 1.2 mm in size. Often, the TLS went in the opposite direction from what would be expected at focal lesions. 
Recovery from TLS and TTS was also examined in seven animals. Both TLS and TTS recovered partially or completely, the magnitude depending upon exposure SPL.
Article
Extrapolated DPOAE growth functions can be applied in ENT diagnostics for a specific assessment of cochlear dysfunction. In newborn hearing screening, they are able to detect transitory sound-conductive hearing loss and thus help to reduce the rate of false-positive TEOAE responses in the early postnatal period. Since DPOAE growth functions are correlated with loudness functions, DPOAEs offer the potential for basic hearing aid adjustment, especially in children. Extrapolated DPOAE I/O functions provide a tool for a fast, automated, frequency-specific and quantitative evaluation of hearing loss. However, DPOAE diagnostics is limited to hearing losses of up to 50 dB HL. Thus, a combined measurement of DPOAE and AMFR would be useful.
Article
The low-frequency modulation of distortion product otoacoustic emissions (DPOAEs) is an objective audiometric method that appears to be a useful tool for the diagnosis of endolymphatic hydrops (EH), e.g. in patients with Menière's disease, or in those who present only some of the symptoms of the disease. Low-frequency modulated DPOAEs were registered in 20 patients with unilateral Menière's disease (13 women and 7 men, aged 40-66 years) and were compared to a control group matched in age and gender. As a diagnostic parameter, the 'modulation index' MI = (1/2) MS/DM was used, where the modulation span MS is the difference between the maximal and the minimal DPOAE amplitude, and DM is the mean of the suppressed stationary DPOAE amplitude. In the patients with unilateral Menière's disease, MI was lower than in the control group. This difference was highly significant. In 56% of the patients' contralateral ears, MI was lower than the cut-off value and significantly lower than in the control group, but did not differ significantly from the patients' ipsilateral ears. The registration of low-frequency modulated DPOAEs is comparable to the generally applied transtympanic electrocochleography in its diagnostic validity. The method is fast and non-invasive and could be applied to monitor the course of the disease.
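The modulation index defined in this abstract, MI = (1/2)·MS/DM, is straightforward to compute from a sampled DPOAE amplitude trace. The function name and input convention below are illustrative, not the authors' implementation:

```python
def modulation_index(dpoae_amplitudes):
    """Modulation index MI = 0.5 * MS / DM, where MS (modulation span) is the
    difference between the maximal and minimal DPOAE amplitude and DM is the
    mean of the modulated DPOAE amplitude. The input is assumed to be a
    sequence of linear amplitudes sampled across one period of the biasing
    tone (an illustrative interface, not the authors' implementation)."""
    ms = max(dpoae_amplitudes) - min(dpoae_amplitudes)
    dm = sum(dpoae_amplitudes) / len(dpoae_amplitudes)
    return 0.5 * ms / dm

# Example: amplitudes swinging between 2 and 6 (arbitrary linear units),
# mean 4 -> MI = 0.5 * (6 - 2) / 4 = 0.5
print(modulation_index([2.0, 4.0, 6.0, 4.0]))  # 0.5
```

An unmodulated (constant) trace gives MI = 0; a trace whose amplitude is fully suppressed at one bias phase approaches the upper end of the range.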
Article
Distortion product otoacoustic emissions (DPOAEs) are generated by the nonlinear transduction in cochlear outer hair cells. The transducer function, demonstrating a compressive nonlinearity, can be estimated from low-frequency modulation of DPOAEs. Experimental results from gerbils showed that the magnitude of the quadratic difference tone (QDT, f2-f1) was either enhanced or suppressed depending on the phase of the low-frequency bias tone. Within one period of the bias tone, QDT magnitudes exhibited two similar modulation patterns, each resembling the absolute value of the second derivative of the transducer function. In the time domain, the center notches of the modulation patterns occurred around the zero crossings of the bias pressure, whereas peaks corresponded to the increase or decrease in bias pressure. Evaluated with respect to the bias pressure, the modulated QDT magnitude displayed a double-modulation pattern marked by a separation of the center notches. Loading/unloading of the cochlear transducer, or rise/fall in bias pressure, shifted the center notch to positive or negative sound pressures, indicating a mechanical hysteresis. These results suggest that the QDT arises from the compression that coexists with the active hysteresis in cochlear transduction. Modulation of the QDT magnitude reflects the dynamic regulation of cochlear transducer gain and compression.
Article
Based on a real case, the effects of long-term infrasound exposure on humans are outlined. Besides describing the background of the case and the health problems that occurred, the focus lies on the procedure for identifying the specific kind of exposure as well as its possible technical causes. A small heating plant was identified as the source of annoyance; it emitted very low-frequency airborne sound, far below common hearing thresholds, into the house of the exposed people. The case clearly shows the general deficit of research on the effects of low-level infrasound on humans.
Article
Distortion product otoacoustic emissions (DPOAEs) were recorded from guinea pigs in response to simultaneous increases in the levels of high frequency primary tones in the presence of a low frequency biasing tone of 30 Hz at 120 dB SPL. The DPOAE amplitudes plotted as functions of the biasing tone phase angle show distinctive repeatable minima, which are identical to the amplitude notches observed for the distortion products at the output of a single saturating non-linearity. The number of the amplitude minima grows with increasing order of the DPOAE, a feature that is also reproduced by the model. The model of DPOAE generation due to a single saturating non-linearity does not explain the experimentally observed asymmetry of the response of the DPOAEs to rising and falling half cycles of the biasing tone. This asymmetry is attributed to a hypothetical mechanism, which adjusts the operating point of the outer hair cell's mechanoelectrical transducer. Experimental data were consistent with a hypothesis that, for the parameters of stimulation used in this study, both lower and upper sideband DPOAEs are dominated by emission generated from a single and spatially localized place in the cochlea.
Article
Biasing of the cochlear partition with a low-frequency tone can produce an amplitude modulation of distortion product otoacoustic emissions (DPOAEs) in gerbils. In the time domain, odd- versus even-order DPOAEs demonstrated different modulation patterns depending on the bias tone phase. In the frequency domain, multiple sidebands are present on either side of each DPOAE component. These sidebands were located at harmonic multiples of the biasing frequency from the DPOAE component. For odd-order DPOAEs, sidebands at the even multiples of the biasing frequency were enhanced, while for even-order DPOAEs, the sidebands at the odd multiples were elevated. When a modulation in DPOAE magnitude was present, the magnitudes of the sidebands were enhanced and could even exceed those of the DPOAEs. The amplitudes of these sidebands varied with the levels of the bias tone and the two primary tones. The results indicate that the maximal amplitude modulations of DPOAEs occur in a confined bias and primary level space. This can provide a guide for the optimal selection of signal conditions for better recordings of low-frequency modulated DPOAEs in future research and applications. The spectral fine structure and its unique relation to the DPOAE modulation pattern may be useful for direct acquisition of cochlear transducer nonlinearity from a simple spectral analysis.
Article
Distortion product otoacoustic emission (DPOAE) growth functions reflect the active nonlinear cochlear sound processing when using a primary-tone setting which accounts for the different compressions of the two primaries at the DPOAE generation site, and hence provide a measure for objectively assessing cochlear sensitivity and compression. DPOAE thresholds can be derived from extrapolated DPOAE input/output (I/O) functions independently of the noise floor and consequently can serve as a unique measure for evaluating DPOAE measurements. The thus-estimated DPOAE thresholds exhibit a close correspondence to behavioral audiometric thresholds and thus can be used for reconstructing an audiogram, i.e., a DPOAE audiogram. The slope of DPOAE I/O functions increases with cochlear hearing loss and thus provides a measure for assessing recruitment. Hence, DPOAE I/O functions can give more information for diagnostic purposes than DP-grams, transiently evoked OAEs (TEOAEs), or auditory brain stem responses (ABRs). DPOAE audiograms can be applied in pediatric audiology to assess cochlear dysfunction in a couple of minutes. In newborn hearing screening, they are able to detect transitory sound-conductive hearing loss and thus can help to reduce the rate of false-positive TEOAE responses in the early postnatal period. Since DPOAE I/O functions are correlated with loudness functions, DPOAEs offer the possibility of basic hearing aid adjustments, especially in infants and children. Extrapolated DPOAE I/O functions provide a tool for a fast, automated, frequency-specific and quantitative evaluation of hearing loss.
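The threshold extrapolation described above can be sketched as a linear fit of DPOAE sound pressure (in linear units, e.g. µPa) against the primary level L2, extrapolated to zero pressure. This is a simplified sketch of the semi-logarithmic I/O scheme commonly used in the literature; it omits the fit-quality acceptance criteria applied in practice, and the function name is illustrative:

```python
def extrapolated_dpoae_threshold(l2_levels_db, dp_pressures_upa):
    """Estimate a DPOAE threshold by least-squares regression of DPOAE sound
    pressure (uPa, linear units) on primary level L2 (dB SPL), extrapolating
    the fitted line to zero pressure. A simplified sketch of the extrapolated
    I/O-function scheme; real implementations add fit-quality criteria."""
    n = len(l2_levels_db)
    mx = sum(l2_levels_db) / n
    my = sum(dp_pressures_upa) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(l2_levels_db, dp_pressures_upa))
    sxx = sum((x - mx) ** 2 for x in l2_levels_db)
    slope = sxy / sxx                 # uPa per dB of L2
    intercept = my - slope * mx
    return -intercept / slope         # L2 (dB SPL) where the fit crosses 0 uPa

# Perfectly linear toy data: pressure rises 1 uPa/dB and crosses 0 at 20 dB SPL
print(extrapolated_dpoae_threshold([30, 40, 50], [10.0, 20.0, 30.0]))  # 20.0
```

Because the fit is done on linear pressure rather than on the DPOAE level in dB, the extrapolation remains usable even when the lowest-level data points sit near the noise floor.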
Article
Infrasound (i.e., <20 Hz for humans; <100 Hz for chinchillas) is not audible, but exposure to high levels of infrasound will produce large movements of cochlear fluids. We speculated that high-level infrasound might bias the basilar membrane and perhaps be able to minimize noise-induced hearing loss. Chinchillas were simultaneously exposed to a 30 Hz tone at 100 dB SPL and a 4 kHz OBN at either 108 dB SPL for 1.75 h or 86 dB SPL for 24 h. For each animal, the tympanic membrane (TM) in one ear was perforated (approximately 1 mm(2)) prior to exposure to attenuate infrasound transmission to that cochlea by about 50 dB. Controls included animals that were exposed to the infrasound only or the 4 kHz OBN only. ABR threshold shifts (TSs) and DPOAE level shifts (LSs) were determined pre- and post-TM-perforation and immediately post-exposure, just before cochlear fixation. The cochleae were dehydrated, embedded in plastic, and dissected into flat preparations of the organ of Corti (OC). Each dissected segment was evaluated for losses of inner hair cells (IHCs) and outer hair cells (OHCs). For each chinchilla, the magnitude and pattern of functional and hair cell losses were compared between the right and left cochleae. The TM perforation produced no ABR TS across frequency but did produce a 10-21 dB DPOAE LS from 0.6 to 2 kHz. The infrasound exposure alone resulted in a 10-20 dB ABR TS at and below 2 kHz, no DPOAE LS, and no IHC or OHC losses. Exposure to the 4 kHz OBN alone at 108 dB produced a 10-50 dB ABR TS for 0.5-12 kHz, a 10-60 dB DPOAE LS for 0.6-16 kHz, and severe OHC loss in the middle of the first turn. When infrasound was present during exposure to the 4 kHz OBN at 108 dB, the functional losses and OHC losses extended much further toward the apical and basal tips of the OC than in cochleae exposed to the 4 kHz OBN alone.
Exposure to only the 4 kHz OBN at 86 dB produces a 10-40 dB ABR TS for 3-12 kHz and 10-30 dB DPOAE LS for 3-8 kHz but little or no OHC loss in the middle of the first turn. No differences were found in the functional and hair-cell losses from exposure to the 4 kHz OBN at 86 dB in the presence or absence of infrasound. We hypothesize that exposure to infrasound and an intense 4 kHz OBN increases cochlear damage because the large fluid movements from infrasound cause more intermixing of cochlear fluids through the damaged reticular lamina. Simultaneous infrasound and a moderate 4 kHz OBN did not increase cochlear damage because the reticular lamina rarely breaks down during this moderate level exposure.
Article
Previous physiological studies investigating the transfer of low-frequency sound into the cochlea have been invasive. Predictions about the human cochlea are based on anatomical similarities with animal cochleae but no direct comparison has been possible. This paper presents a noninvasive method of observing low frequency cochlear vibration using distortion product otoacoustic emissions (DPOAE) modulated by low-frequency tones. For various frequencies (15-480 Hz), the level was adjusted to maintain an equal DPOAE-modulation depth, interpreted as a constant basilar membrane displacement amplitude. The resulting modulator level curves from four human ears match equal-loudness contours (ISO226:2003) except for an irregularity consisting of a notch and a peak at 45 Hz and 60 Hz, respectively, suggesting a cochlear resonance. This resonator interacts with the middle ear stiffness. The irregularity separates two regions of the middle ear transfer function in humans: A slope of 12 dB/octave below the irregularity suggests mass-controlled impedance resulting from perilymph movement through the helicotrema; a 6-dB/octave slope above the irregularity suggests resistive cochlear impedance and the existence of a traveling wave. The results from four guinea pig ears showed a 6-dB/octave slope on either side of an irregularity around 120 Hz, and agree with published data.
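The dB-per-octave slopes reported above translate into level differences through the elementary relation delta_L = slope · log2(f / f_ref). A minimal sketch of that conversion (function name and reference values are illustrative):

```python
import math

def level_change_db(f_ref_hz, f_hz, slope_db_per_octave):
    """Level difference (dB) implied by a constant slope in dB/octave between
    a reference frequency f_ref_hz and a frequency f_hz:
    delta_L = slope * log2(f / f_ref)."""
    return slope_db_per_octave * math.log2(f_hz / f_ref_hz)

# With the -12 dB/octave behavior reported below the irregularity (level
# needed falls 12 dB per octave of increasing frequency), halving the bias
# frequency, e.g. 24 Hz -> 12 Hz, requires 12 dB more modulator level:
print(level_change_db(24.0, 12.0, -12.0))  # 12.0
```

The same function with a slope of -6 dB/octave reproduces the shallower, resistance-controlled region above the irregularity, where the same octave step costs only 6 dB.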
Tinnitus as a cause of low frequency noise complaints
  • Van den Berg, G.P.
Van den Berg, G.P., 2001. Tinnitus as a cause of low frequency noise complaints. In: Proc. Internoise, The Hague.