Literature Review

Vestibular Perception is Slow: A Review

Article · Literature Review in Multisensory Research 26(4):387-403 · December 2013
Abstract
Multisensory stimuli originating from the same event can be perceived asynchronously due to differential physical and neural delays. The transduction of and physiological responses to vestibular stimulation are extremely fast, suggesting that other stimuli need to be presented prior to vestibular stimulation in order to be perceived as simultaneous. There is, however, a recent and growing body of evidence which indicates that the perceived onset of vestibular stimulation is slow compared to the other senses, such that vestibular stimuli need to be presented prior to other sensory stimuli in order to be perceived synchronously. From a review of this literature it is speculated that this perceived latency of vestibular stimulation may reflect the fact that vestibular stimulation is most often associated with sensory events that occur following head movement, that the vestibular system rarely works alone, that additional computations are required for processing vestibular information, and that the brain prioritizes physiological response to vestibular stimulation over perceptual awareness of stimulation onset. Empirical investigation of these theoretical predictions is encouraged in order to fully understand this surprising result, its implications, and to advance the field.
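To make the central quantity of this review concrete, the sketch below shows one common way a point of subjective simultaneity (PSS) is estimated from temporal order judgment data: by fitting a cumulative Gaussian psychometric function and reading off its midpoint. The SOA values, response proportions, and function choice are illustrative assumptions, not data or methods taken from the studies reviewed.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical temporal order judgment (TOJ) data: stimulus onset asynchrony
# (SOA) in ms (negative = vestibular/motion onset leads the comparison sound)
# and the proportion of "sound first" responses at each SOA.
soa = np.array([-200.0, -150.0, -100.0, -50.0, 0.0, 50.0, 100.0, 150.0, 200.0])
p_sound_first = np.array([0.05, 0.10, 0.20, 0.45, 0.70, 0.85, 0.92, 0.97, 0.99])

def psychometric(soa, pss, sigma):
    """Cumulative Gaussian giving the probability of reporting 'sound first'."""
    return norm.cdf(soa, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(psychometric, soa, p_sound_first, p0=(0.0, 50.0))

# A negative PSS means the motion must start before the sound to be judged
# simultaneous, i.e., the perceived onset of vestibular stimulation lags.
print(f"PSS = {pss:.1f} ms, sigma (a JND-like measure) = {sigma:.1f} ms")
```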

  • Article
    Full-text available
This focused review is based on earlier studies which have shown that both children and adults diagnosed as having developmental coordination disorder (DCD) benefited from sensorimotor therapy according to the method Retraining for Balance (RB). Different approaches and assessments for children and adults in regard to DCD are scrutinized and discussed in comparison to RB, which mainly includes (a) vestibular assessment and stimulation, (b) assessment and integration of aberrant primary reflexes and (c) assessment and stimulation of auditory and visual perception. Earlier results indicate that the process of sensorimotor therapy using RB techniques could be described according to a conceptual Kinesthetic-Vestibular Developmental Model (KVDM), whereby the training elicited temporary physical and psychological regressions followed by transformations, i.e., positive physical and psychological development. We have also seen that this recurring pattern is similar for children and adults. In our conceptual model, vestibular stimulation (perceptual priming) stimulates the nervous system, which might enhance object-related priming. This perceptual priming will also assist the suppression of persistent aberrant primary reflexes. In order to develop effective methods for assessment and intervention for DCD over the life span, the importance of primary reflex inhibition and vestibular stimulation, as well as a combination of bottom-up and top-down approaches, has to be considered.
  • Article
    Full-text available
The present study aimed at investigating the consequences of a massive loss of somatosensory inputs on the perception of spatial orientation. The occurrence of possible compensatory processes for external (i.e., object) orientation perception and self-orientation perception was examined by manipulating visual and/or vestibular cues. To that aim, we compared the perceptual responses of a deafferented patient (GL) with those of age-matched Controls in two tasks involving gravity-related judgments. In the first task, subjects had to align a visual rod with the gravitational vertical (i.e., Subjective Visual Vertical: SVV) when facing a tilted visual frame in a classic Rod-and-Frame Test. In the second task, subjects had to report whether they felt tilted when facing different visuo-postural conditions which consisted of very slow pitch tilts of the body and/or visual surroundings away from vertical. Results showed that, much more than Controls, the deafferented patient was fully dependent on spatial cues issued from the visual frame when judging the SVV. On the other hand, the deafferented patient did not rely at all on visual cues for self-tilt detection. Moreover, the patient never reported any sensation of tilt up to 18°, contrary to Controls, hence showing that she did not rely on vestibular (i.e., otolith) signals for the detection of very slow body tilts either. Overall, this study demonstrates that a massive somatosensory deficit substantially impairs the perception of spatial orientation, and that the use of the remaining sensory inputs available to a deafferented patient differs depending on whether the judgment concerns external vs. self-orientation. © 2016 Bringoux, Scotto Di Cesare, Borel, Macaluso and Sarlegna.
  • Article
    Most existing models of driver steering control do not consider the driver's sensory dynamics, despite many aspects of human sensory perception having been researched extensively. The authors recently reported the development of a driver model that incorporates sensory transfer functions, noise and delays. The present paper reports the experimental identification and validation of this model. An experiment was carried out with five test subjects in a driving simulator, aiming to replicate a real-world driving scenario with no motion scaling. The results of this experiment are used to identify parameter values for the driver model, and the model is found to describe the results of the experiment well. Predicted steering angles match the linear component of measured results with an average ‘variance accounted for’ of 98% using separate parameter sets for each trial, and 93% with a single fixed parameter set. The identified parameter values are compared with results from the literature and are found to be physically plausible, supporting the hypothesis that driver steering control can be predicted using models of human perception and control mechanisms.
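The 'variance accounted for' figure quoted above is a standard goodness-of-fit measure for time-series model fits. A minimal sketch of one common definition (1 minus the residual variance over the measured variance, expressed as a percentage) is given below; the example signals are placeholders, not data from the study.

```python
import numpy as np

def variance_accounted_for(measured, predicted):
    """VAF (%) of a model prediction, defined here as
    100 * (1 - var(measured - predicted) / var(measured))."""
    residual = measured - predicted
    return 100.0 * (1.0 - np.var(residual) / np.var(measured))

# Hypothetical example: a measured steering-angle trace and a model prediction
# that captures most, but not all, of its variance.
t = np.linspace(0.0, 10.0, 1000)
measured = np.sin(2.0 * np.pi * 0.3 * t)
predicted = 0.95 * np.sin(2.0 * np.pi * 0.3 * t + 0.05)
print(f"VAF = {variance_accounted_for(measured, predicted):.1f} %")
```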
  • Article
    Full-text available
Several studies have investigated whether vestibular signals can be processed to determine the magnitude of passive body motions. Many of them required subjects to report their perceived displacements offline, i.e., after being submitted to passive displacements. Here, we used a protocol that allowed us to complement these results by asking subjects to report their introspective estimation of their displacement continuously, i.e., during the ongoing body rotation. To this end, participants rotated the handle of a manipulandum around a vertical axis to indicate their perceived change of angular position in space at the same time as they were passively rotated in the dark. The rotation acceleration (Acc) and deceleration (Dec) lasted either 1.5 s (peak of 60°/s², referred to as being "High") or 3 s (peak of 33°/s², referred to as being "Low"). The participants were rotated either counter-clockwise or clockwise, and all combinations of acceleration and deceleration were tested (i.e., AccLow-DecLow; AccLow-DecHigh; AccHigh-DecLow; AccHigh-DecHigh). The participants' perception of body rotation was assessed by computing the gain, i.e., ratio between the amplitude of the perceived rotations (as measured by the rotating manipulandum's handle) and the amplitude of the actual chair rotations. The gain was measured at the end of the rotations, and was also computed separately for the acceleration and deceleration phases. Three salient findings resulted from this experiment: (i) the gain was much greater during body acceleration than during body deceleration, (ii) the gain was greater during High compared to Low accelerations and (iii) the gain measured during the deceleration was influenced by the preceding acceleration (i.e., Low or High). These different effects of the angular stimuli on the perception of body motion can be interpreted in relation to the consequences of body acceleration and deceleration on the vestibular system and on higher-order cognitive processes.
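The gain measure described above is simply the ratio of perceived to actual rotation amplitude, evaluated over the whole trial or separately for the acceleration and deceleration phases. The sketch below illustrates that computation on made-up angle traces; the numbers and the assumed phase boundary are placeholders, not study data.

```python
import numpy as np

# Hypothetical angle traces (deg): actual chair rotation and the perceived
# rotation reported with the manipulandum handle.
t = np.linspace(0.0, 6.0, 601)                 # 6 s rotation, 10 ms samples
actual = 90.0 * (t / t[-1])                    # chair ends at 90 deg
perceived = 70.0 * (t / t[-1]) ** 0.8          # grows quickly early, lags later

acc_end = 300                                  # assumed index of the Acc/Dec boundary (3 s)

def gain(perceived_amplitude, actual_amplitude):
    """Ratio of perceived to actual rotation amplitude."""
    return perceived_amplitude / actual_amplitude

gain_total = gain(perceived[-1], actual[-1])
gain_acc = gain(perceived[acc_end], actual[acc_end])
gain_dec = gain(perceived[-1] - perceived[acc_end], actual[-1] - actual[acc_end])
print(f"total gain {gain_total:.2f}, acceleration-phase gain {gain_acc:.2f}, "
      f"deceleration-phase gain {gain_dec:.2f}")
```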
  • Article
    Bilateral intratympanic sodium arsenate injections (100 mg/ml in isotonic saline) in adult male Long Evans rats produced impairments in allocentric navigation using a 12-arm radial maze procedure as well as a motor test battery designed to evaluate vestibular function. In contrast, no impairments in the accuracy or precision of duration reproduction using 20-s and 80-s peak-interval procedures were observed when both target durations were associated with the same lever response, but distinguished by signal modality (e.g., light or sound). In contrast, an ordinal-reproduction procedure with 800, 3,200, and 12,800 ms standards requiring the timing of self-initiated movements during the production phase revealed large impairments in the accuracy and precision of timing for vestibular lesioned rats. These impairments were greater on trials in which self-initiated body movements (e.g., holding down the response lever for a fixed duration) were required without the support of external stimuli signaling the onset and offset of the reproduced duration in contrast to trials in which such external support was provided. The conclusion is that space and time are separable entities and not simply the product of a generalized system, but they can be integrated into a common metric using gravity and self-initiated movement as a reference.
  • Article
    Moving and interacting with the environment require a reference for orientation and a scale for calibration in space and time. There is a wide variety of environmental clues and calibrated frames at different locales, but the reference of gravity is ubiquitous on Earth. The pull of gravity on static objects provides a plummet which, together with the horizontal plane, defines a three-dimensional Cartesian frame for visual images. On the other hand, the gravitational acceleration of falling objects can provide a time-stamp on events, because the motion duration of an object accelerated by gravity over a given path is fixed. Indeed, since ancient times, man has been using plumb bobs for spatial surveying, and water clocks or pendulum clocks for time keeping. Here we review behavioral evidence in favor of the hypothesis that the brain is endowed with mechanisms that exploit the presence of gravity to estimate the spatial orientation and the passage of time. Several visual and non-visual (vestibular, haptic, visceral) cues are merged to estimate the orientation of the visual vertical. However, the relative weight of each cue is not fixed, but depends on the specific task. Next, we show that an internal model of the effects of gravity is combined with multisensory signals to time the interception of falling objects, to time the passage through spatial landmarks during virtual navigation, to assess the duration of a gravitational motion, and to judge the naturalness of periodic motion under gravity.
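The 'time-stamp' property mentioned above follows from the constancy of g: the duration of a free fall from rest depends only on the path length, t = sqrt(2h/g). A small worked example:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_duration(height_m):
    """Time for an object to fall a given height from rest under gravity:
    h = (1/2) * g * t^2  =>  t = sqrt(2 * h / g)."""
    return math.sqrt(2.0 * height_m / G)

# Because g is effectively constant on Earth, these durations are fixed,
# which is what makes gravitational motion usable as an internal time reference.
print(f"1 m drop: {fall_duration(1.0):.2f} s")   # ~0.45 s
print(f"2 m drop: {fall_duration(2.0):.2f} s")   # ~0.64 s
```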
  • Article
    Full-text available
Bodily self-consciousness is linked to multisensory integration and is particularly dependent on vestibular perception, which provides the brain with the main sensory cues about body motion and location in space. Vestibular and visual inputs are permanently balanced and integrated to encode the most optimal representation of the external world and of the observer in the central nervous system. Vection, an illusory self-motion experience induced only by visual stimuli, illustrates the fact that the visual and the vestibular systems share common neural underpinnings and a similar phenomenology. Optokinetic stimulation inducing vection and direct vestibular stimulation induce whole-body motion sensations that can be used to explore multisensory interactions. A failure in visuo-vestibular integration, artificially induced by the methods of cognitive psychology or occurring in pathological conditions, has also been reported to alter own-body perception and bodily self-consciousness. The respective contributions of the vestibular and visual systems to bodily self-consciousness amongst other polymodal sensory mechanisms, and the neural correlates of visuo-vestibular convergence, should be better understood. We first performed a neuroimaging study of brain regions where optokinetic and vestibular stimuli converge, using 7T functional magnetic resonance imaging in individual subjects. We identified three main regions of convergence: (1) the depth of the supramarginal gyrus or retroinsular cortex, (2) the surface of the supramarginal gyrus at the temporo-parietal junction, and (3) the posterior part of the middle temporal gyrus and superior temporal sulcus. Then, we aimed to induce the embodiment of an external fake rubber hand through visuo-tactile conflict - the so-called rubber hand illusion paradigm - and studied how this integration is modulated by vection. Subjects experiencing vection in the direction of the rubber hand mislocalised the position of their real hand towards the rubber hand, indicating that visuo-vestibular stimuli can enhance visuo-tactile integration. We also investigated whether visuo-proprioceptive and tactile integration in peripersonal space could be dynamically updated based on the congruency of visual and proprioceptive feedback. A pair of rubber hands or feet provided visual feedback. Fake and real limbs were crossed or uncrossed. We showed that sensory cues were integrated in peripersonal space and dynamically reshaped, but only for hands. Finally, we investigated a rare case of illusory own-body perception in an epileptic patient suffering from multiple daily disembodiments during seizures. Seizures were associated with a focal cortical microdysplasia juxtaposed to a developmental venous anomaly in the left angular gyrus, a brain region known to be important for visuo-vestibular integration and bodily self-consciousness. Our results characterize the inferior parietal lobule as a crucial structure in merging visual, vestibular, tactile and proprioceptive inputs, allowing the emergence of the global and unified experience of being "I." Multisensory body representation can be reshaped transiently using visual and vestibular signals or in relation to a medical condition affecting the temporo-parietal junction. The integration of visual and vestibular signals aims to dynamically adapt our internal representations to the constant changes occurring in our environment.
  • Article
    Full-text available
    In this paper, we show that differences in reaction times (RT) to self-motion depend not only on the duration of the profile, but also on the actual time course of the acceleration. We previously proposed models that described direction discrimination thresholds for rotational and translational motions based on the dynamics of the vestibular sensory organs (otoliths and semi-circular canals). As these models have the potential to describe RT for different motion profiles (e.g., trapezoidal versus triangular acceleration profiles or varying profile durations), we validated these models by measuring RTs in human observers for a direction discrimination task using both translational and rotational motions varying in amplitude, duration and acceleration profile shape in a within-subjects design. In agreement with previous studies, amplitude and duration were found to affect RT, and importantly, we found an influence of the profile shape on RT. The models are able to fit the measured RTs with an accuracy of around 5 ms, and the best-fitting parameters are similar to those found from identifying the models based on threshold measurements. This confirms the validity of the modeling approach and links perceptual thresholds to RT. By establishing a link between vestibular thresholds for self-motion and RT, we show for the first time that RTs to purely inertial motion stimuli can be used as an alternative to threshold measurements for identifying self-motion perception models. This is advantageous, since RT tasks are less challenging for participants and make assessment of vestibular function less fatiguing. Further, our results provide strong evidence that the perceived timing of self-motion stimulation is largely influenced by the response dynamics of the vestibular sensory organs.
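The models themselves are not reproduced in the abstract; the sketch below only illustrates the general idea of linking vestibular dynamics to reaction time: an internal velocity estimate is obtained by high-pass filtering the physical velocity (a crude stand-in for canal/otolith dynamics), and the RT is taken as the moment that estimate crosses an assumed detection threshold plus a fixed non-sensory delay. The profile shapes, time constant, threshold, and delay are illustrative assumptions, not the fitted values from this work.

```python
import numpy as np

dt = 0.001                                   # s, simulation step
T = 2.0                                      # s, profile duration
t = np.arange(0.0, T, dt)

def acceleration_profile(shape, peak=0.5):
    """Trapezoidal or triangular acceleration profile (m/s^2) with the same peak."""
    if shape == "triangular":
        return peak * (1.0 - np.abs(2.0 * t / T - 1.0))
    if shape == "trapezoidal":               # ramp up over T/4, hold, ramp down over T/4
        return peak * np.clip(np.minimum(t, T - t) / (T / 4.0), 0.0, 1.0)
    raise ValueError(shape)

def reaction_time(acc, tau=5.0, threshold=0.02, fixed_delay=0.25):
    """RT = first time a high-pass-filtered velocity estimate exceeds an
    assumed detection threshold, plus a fixed non-sensory delay (both assumed)."""
    vel = np.cumsum(acc) * dt                # physical velocity
    est = np.zeros_like(vel)                 # first-order high-pass estimate
    for i in range(1, len(vel)):
        est[i] = est[i - 1] + (vel[i] - vel[i - 1]) - dt / tau * est[i - 1]
    crossed = np.flatnonzero(np.abs(est) >= threshold)
    return t[crossed[0]] + fixed_delay if crossed.size else float("nan")

for shape in ("trapezoidal", "triangular"):
    print(f"{shape}: RT ≈ {reaction_time(acceleration_profile(shape)):.3f} s")
```

With these assumptions the trapezoidal profile, which builds up velocity faster, yields the shorter RT, mirroring the qualitative finding that the time course of acceleration (not only its duration) matters.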
  • Article
Hyper-gravity provides a unique environment to study human perception of orientation. We utilized a long-radius centrifuge to study perception of both static and dynamic whole-body roll tilt in hyper-gravity, across a range of angles, frequencies, and net gravito-inertial levels (referred to as G-levels). While studies of static tilt perception in hyper-gravity have been published, this is the first to measure dynamic tilt perception (i.e., with time-varying canal stimulation) in hyper-gravity using a continuous matching task. In complete darkness, subjects reported their orientation perception using a haptic task, whereby they attempted to align a hand-held bar with their perceived horizontal. Static roll tilt was overestimated in hyper-gravity, with more overestimation at larger angles and higher G-levels, across the conditions tested (overestimated by ~35% per additional G-level, p<0.001). As our primary contribution, we show that dynamic roll tilt was also consistently overestimated in hyper-gravity (p<0.001) at all angles and frequencies tested, again with more overestimation at higher G-levels. The overestimation was similar to that for static tilts at low angular velocities, but decreased at higher angular velocities (p=0.006), consistent with semicircular canal sensory integration. To match our findings, we propose a modification to a previous Observer-type canal-otolith interaction model. Specifically, our data were better modeled by including the hypothesis that the central nervous system treats otolith stimulation in the utricular plane differently than stimulation out of the utricular plane. This modified model was able to simulate quantitatively both the static and the dynamic roll tilt overestimation in hyper-gravity measured experimentally. Copyright © 2014, Journal of Neurophysiology.
  • Article
    Full-text available
Galvanic vestibular stimulation (GVS) evokes a perception of rotation; however, very little quantitative data exist on the matter. We performed psychophysical experiments on virtual rotations experienced when bipolar electrical stimulation is applied over the mastoids. We also performed analogous real whole-body yaw rotation experiments, allowing us to compare the frequency response of vestibular perception with (real) and without (virtual) natural mechanical stimulation of the semi-circular canals. To estimate the gain of vestibular perception, we measured direction discrimination thresholds for virtual and real rotations. Real direction discrimination thresholds decreased at higher frequencies, confirming multiple previous studies. Conversely, virtual direction discrimination thresholds increased at higher frequencies, implying low-pass filtering of the virtual perception process occurring potentially anywhere from afferent transduction to cortical responses. To estimate the phase of vestibular perception, participants manually tracked their perceived position during sinusoidal virtual and real kinetic stimulation. For real rotations, perceived velocity was approximately in phase with actual velocity across all frequencies. Perceived virtual velocity was in phase with the GVS waveform at low frequencies (0.05 and 0.1 Hz). As frequency was increased to 1 Hz, the phase of perceived velocity advanced relative to the GVS waveform. Therefore, at low frequencies GVS is interpreted as an angular velocity signal, and at higher frequencies GVS is increasingly interpreted as an angular position signal. These estimated gain and phase spectra for vestibular perception are a first step towards generating well-controlled virtual vestibular percepts, an endeavour that may reveal the usefulness of GVS in the areas of clinical assessment, neuroprosthetics, and virtual reality. Copyright © 2009, Journal of Neurophysiology.
  • Article
    Full-text available
    In comparison with the high level of knowledge about vehicle dynamics which exists nowadays, the role of the driver in the driver-vehicle system is still relatively poorly understood. A large variety of driver models exist for various applications; however, few of them take account of the driver's sensory dynamics, and those that do are limited in their scope and accuracy. A review of the literature has been carried out to consolidate information from previous studies which may be useful when incorporating human sensory systems into the design of a driver model. This includes information on sensory dynamics, delays, thresholds and integration of multiple sensory stimuli. This review should provide a basis for further study into sensory perception during driving.
  • Article
Falling down is a common event that threatens the survival of an organism. Simple, yet sophisticated neural mechanisms allow for rapid detection of a fall as well as the generation of compensatory reflexes designed to prevent a fall. Fall awareness and preventative alerting devices could potentially mitigate the likelihood of a fall; however, relatively little is known about the perceived timing of a fall. Common anecdotal reports suggest that humans often describe distortions in their perception of time with very little recollection of what occurred during the fall. Previous research has also found that the vestibular system is perceptually slow compared to the other senses (45-160 ms delay), indicating that vestibular stimuli must occur prior to other sensory stimuli in order to be perceived as synchronous. Here we examine whether fall perception is similarly slow. Participants made temporal order judgments identifying whether fall or sound onset came first to measure the point of subjective simultaneity. Results show that fall perception is slow: the onset of a perturbation has to precede an auditory stimulus by ∼44 ms to appear coincident with the fall. We suggest that the central nervous system's rapid detection and response capabilities are restricted to reflexive behaviour, with conscious awareness of a fall being given lower priority. The additional lead times for detecting perturbation onset constrain possible fall detection and alert systems that have been proposed to inform a user to prevent falls, and may also help explain the increased likelihood of falls in the elderly.
  • Article
    A recent review of the literature has indicated that sensory dynamics play an important role in the driver–vehicle steering task, motivating the design of a new driver model incorporating human sensory systems. This paper presents a full derivation of the linear driver model developed in previous work, and extends the model to control a vehicle with nonlinear tyres. Various nonlinear controllers and state estimators are compared with different approximations of the true system dynamics. The model simulation time is found to increase significantly with the complexity of the controller and state estimator. In general the more complex controllers perform best, although with certain vehicle and tyre models linearised controllers perform as well as a full nonlinear optimisation. Various extended Kalman filters give similar results, although the driver's sensory dynamics reduce control performance compared with full state feedback. The new model could be used to design vehicle systems which interact more naturally and safely with a human driver.
  • Article
    Full-text available
One single bout of exercise can be associated with positive effects on cognition, due to physiological changes associated with muscular activity, increased arousal, and training of cognitive skills during exercise. While the positive effects of life-long physical activity on cognitive ageing are well demonstrated, it is not well established whether one bout of exercise is sufficient to register such benefits in older adults. The aim of this study was to test the effect of one bout of exercise on two cognitive processes essential to daily life and known to decline with ageing: audio-visual perception and immediate memory. Fifty-eight older adults took part in a quasi-experimental design study and were divided into three groups based on their habitual activity (open skill exercise group: mean age = 69.65, SD = 5.64; closed skill exercise group: N = 18, 94% female; sedentary activity control group: N = 21, 62% female). They were then tested before and after their activity (duration between 60 and 80 minutes). Results showed improvement in sensitivity in audio-visual perception in the open skill group and improvements in one of the measures of immediate memory in both exercise groups, after controlling for baseline differences including global cognition and health. These findings indicate that immediate benefits for cross-modal perception and memory can be obtained after open skill exercise. However, improvements after closed skill exercise may be limited to memory benefits. Perceptual benefits are likely to be associated with arousal, while memory benefits may be due to the training effects provided by task requirements during exercise. The respective role of qualitative and quantitative differences between these activities in terms of immediate cognitive benefits should be further investigated. Importantly, the present results present the first evidence for a modulation of cross-modal perception by exercise, providing a plausible avenue for rehabilitation of cross-modal perception deficits, which are emerging as a significant contributor to functional decline in ageing.
  • Article
    Full-text available
Knowing when the head moves is crucial information for the central nervous system to maintain a veridical representation of the self in the world for perception and action. Previous studies have shown that active head movement onset has to precede a sound by approximately 80 ms to be perceived as simultaneous, suggesting that the perceived timing of head movement is slow. However, this research was conducted with closed eyes. Given that visual information is available for most natural head movements, could perceptual delays in head movement onset be related to removing vision? Here, we examined whether visual information affects the perceived timing of active head movement onset. Participants performed a series of temporal order judgment tasks between their active head movement and an auditory tone presented at various stimulus onset asynchronies. Visual information was either absent (eyes closed) or present while maintaining fixation on either an earth-fixed or head-fixed target, in the dark or in the light. Results show that head movement onset has to precede a sound by ~76 ms with eyes closed, confirming previous work. The results also suggest that head movement onset must still precede a sound when fixating targets in the dark, with a trend toward shorter lead times when visual information is available and whether the vestibulo-ocular reflex is active or suppressed (~70 to 48 ms). Together, these results suggest that the perception of head movement onset is persistently delayed and is not fully resolved even with full-field visual input.
  • Article
A single event can generate asynchronous sensory cues due to variable encoding, transmission, and processing delays. To be interpreted as being associated in time, these cues must occur within a limited time window, referred to as a "temporal binding window" (TBW). We investigated the hypothesis that vestibular deficits could disrupt temporal visual-vestibular integration by determining the relationships between vestibular threshold and TBW in participants with normal vestibular function and with vestibular hypofunction. Vestibular perceptual thresholds to yaw rotation were characterized and compared to the TBWs obtained from participants who judged whether a supra-threshold rotation occurred before or after a brief visual stimulus. Vestibular thresholds ranged from 0.7 to 16.5 deg/s and TBWs ranged from 13.8 to 395 ms. Among all participants, TBW and vestibular thresholds were well correlated (R² = 0.674; p < 0.001), with vestibular-deficient patients having higher thresholds and wider TBWs. Participants reported that the rotation onset needed to lead the light flash by an average of 80 ms for the visual and vestibular cues to be perceived as occurring simultaneously. The wide TBWs in vestibular-deficient participants compared to normal functioning participants indicate that peripheral sensory loss can lead to abnormal multisensory integration. A reduced ability to temporally combine sensory cues appropriately may provide a novel explanation for some symptoms reported by patients with vestibular deficits. Even among normal functioning participants, a high correlation between TBW and vestibular thresholds was observed, suggesting that these perceptual measures are sensitive to small differences in vestibular function.
  • Article
The central nervous system must determine which sensory events occur at the same time. Actively moving the head corresponds with large changes in the relationship between the observer and the environment, sensorimotor processing, and spatiotemporal perception. Active head movement perception has been shown to be dependent on head movement velocity, where participants who move their head fastest require the head to move earlier than comparison stimuli for perceived simultaneity more so than those who move their head slower. Such between-subject results cannot address whether active head movement perception changes with velocity. The present study used a within-subjects design to measure the point of subjective simultaneity (PSS) between active head movement speeds and a comparison sound stimulus to characterize the relationship between the velocity and perception of head movement onset. Our results clearly show that (i) head movement perception is faster with faster head movements within subjects, and (ii) active head movement onset must still precede the onset of other sensory events (average PSS: -123 ms to -52 ms; median PSS: -100 ms to -42 ms) in order to be perceived as occurring simultaneously, even at the fastest speeds (average peak velocity: 76°/s to 257°/s; median peak velocity: 72°/s to 257°/s). We conclude that head movement perception is slow, but that this delay is minimized with increased speed. These within-subject results are contrary to previous and present study between-subject results and are in agreement with literature where perception of auditory, visual and vestibular stimulus onset is less delayed with increased stimulus intensity.
  • Article
    In previous work, a new model of driver steering control incorporating sensory dynamics was derived and used to explain the performance of drivers in a simulator with full-scale motion feedback. This paper describes further experiments investigating how drivers steer with conflicts between their visual and vestibular measurements, caused by scaling or filtering the physical motion of the simulator relative to the virtual environment. The predictions of several variations of the new driver model are compared with the measurements to understand how drivers perceive sensory conflicts. Drivers are found to adapt well in general, unless the conflict is large, in which case they ignore the physical motion and rely on visual measurements. Drivers make greater use of physical motion which they rate as being more helpful, achieving a better tracking performance. Sensory measurement noise is shown to be signal-dependent, allowing a single set of parameters to be found to fit the results of all the trials. The model fits measured linear steering behavior with an average "variance accounted for (VAF)" of 86%.
  • Article
    Full-text available
    Integration of cues from multiple sensory channels improves our ability to sense and respond to stimuli. Cues arising from a single event may arrive at the brain asynchronously, requiring them to be "bound" in time. The perceptual asynchrony between vestibular and auditory stimuli has been reported to be several times greater than other stimulus pairs. However, these data were collected using electrically evoked vestibular stimuli, which may not provide similar results to those obtained using actual head rotations. Here, we tested whether auditory stimuli and vestibular stimuli consisting of physiologically relevant mechanical rotations are perceived with asynchronies consistent with other sensory systems. We rotated 14 normal subjects about the earth-vertical axis over a raised-cosine trajectory (0.5 Hz, peak velocity 10 deg/s) while isolated from external noise and light. This trajectory minimized any input from extravestibular sources such as proprioception. An 800-Hz, 10-ms auditory tone was presented at stimulus onset asynchronies ranging from 200 ms before to 700 ms after the onset of motion. After each trial, subjects reported whether the stimuli were "simultaneous" or "not simultaneous." The experiment was repeated, with subjects reporting whether the tone or rotation came first. After correction for the time the rotational stimulus took to reach vestibular perceptual threshold, asynchronies spanned from -41 ms (auditory stimulus leading vestibular) to 91 ms (vestibular stimulus leading auditory). These values are significantly lower than those previously reported for stimulus pairs involving electrically evoked vestibular stimuli and are more consistent with timing relationships between pairs of non-vestibular stimuli.
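For readers unfamiliar with the stimulus, the sketch below generates a raised-cosine motion profile with the stated parameters (0.5 Hz, 10 deg/s peak velocity). It assumes the angular velocity follows a single raised-cosine cycle; the exact parameterization in the study may differ, so treat this only as an illustration of why such a profile starts and ends smoothly (minimizing non-vestibular cues).

```python
import numpy as np

f = 0.5                      # Hz, stated stimulus frequency
v_peak = 10.0                # deg/s, stated peak velocity
T = 1.0 / f                  # one cycle = 2 s
t = np.linspace(0.0, T, 2001)

# Raised-cosine angular velocity: zero velocity and zero acceleration at
# onset and offset, so the movement builds up and dies away gradually.
velocity = 0.5 * v_peak * (1.0 - np.cos(2.0 * np.pi * f * t))   # deg/s
acceleration = np.gradient(velocity, t)                         # deg/s^2
displacement = np.trapz(velocity, t)                            # deg

print(f"peak velocity      = {velocity.max():.1f} deg/s")
print(f"peak acceleration  = {acceleration.max():.1f} deg/s^2")
print(f"total displacement = {displacement:.1f} deg")
```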
  • Article
    Full-text available
    In previous research, direction detection thresholds have been measured and successfully modeled by exposing participants to sinusoidal acceleration profiles of different durations. In this paper, we present measurements that reveal differences in thresholds depending not only on the duration of the profile, but also on the actual time course of the acceleration. The measurements are further explained by a model based on a transfer function, which is able to predict direction detection thresholds for all types of acceleration profiles. In order to quantify a participant's ability to detect the direction of motion in the horizontal plane, a four-alternative forced-choice task was implemented. Three types of acceleration profiles (sinusoidal, trapezoidal and triangular) were tested for three different durations (1.5, 2.36 and 5.86 s). To the best of our knowledge, this is the first study which varies both quantities (profile and duration) in a systematic way within a single experiment. The lowest thresholds were found for trapezoidal profiles and the highest for triangular profiles. Simulations for frequencies lower than the ones actually measured predict a change from this behavior: Sinusoidal profiles are predicted to yield the highest thresholds at low frequencies. This qualitative prediction is only possible with a model that is able to predict thresholds for different types of acceleration profiles. Our modeling approach represents an important advancement, because it allows for a more general and accurate description of perceptual thresholds for simple and complex translational motions.
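To make the three profile shapes concrete, the sketch below builds sinusoidal, trapezoidal and triangular acceleration profiles of equal duration and compares the peak acceleration each needs to reach the same peak velocity. The exact profile definitions (e.g., ramp lengths) are assumptions for illustration and need not match those used in the study.

```python
import numpy as np

T = 1.5                                    # s, one of the tested durations
t = np.linspace(0.0, T, 1501)
dt = t[1] - t[0]

def unit_profile(shape):
    """Unit-peak acceleration profile of duration T (shape only)."""
    if shape == "sinusoidal":
        return np.sin(np.pi * t / T)                      # single half-cycle
    if shape == "triangular":
        return 1.0 - np.abs(2.0 * t / T - 1.0)
    if shape == "trapezoidal":                            # assumed ramps of T/4
        return np.clip(np.minimum(t, T - t) / (T / 4.0), 0.0, 1.0)
    raise ValueError(shape)

for shape in ("sinusoidal", "trapezoidal", "triangular"):
    acc = unit_profile(shape)
    peak_velocity = np.max(np.cumsum(acc) * dt)
    # Peak acceleration required to reach a peak velocity of 1 deg/s:
    print(f"{shape:12s}: {1.0 / peak_velocity:.2f} deg/s^2 per deg/s of peak velocity")
```

Under these assumptions the trapezoidal profile needs the least peak acceleration for a given peak velocity and the triangular profile the most, which is one intuition for why measured thresholds can depend on profile shape.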
  • Book
Contents: 1. Introduction; 2. Peripheral Morphology; 3. Biophysics of the Peripheral End Organs; 4. Mechanoneural Transduction and the Primary Afferent Response; 5. Labyrinthine Input to the Vestibular Nuclei and Reticular Formation; 6. The Vestibular System and the Cerebellum; 7. The Vestibulospinal System; 8. The Vestibuloocular System; References.
  • Article
    Full-text available
This book presents an integrative look at the sense that Aristotle missed. The vestibular system plays a vital role in everyday life, contributing to a surprising range of functions from reflexes to the highest levels of perception and consciousness. This text not only offers a review of the basics: sensory transduction, the neurophysiology of peripheral and central pathways, and how vestibular signals are processed in the control of gaze and posture; it also moves the discussion forward with its attention to current research and the field's revolutionary advances, such as the understanding of neural correlates of self-motion and the basis of clinical disorders. In addition, the objective presentation of existing controversies is exciting reading and an extremely important contribution to the text's completeness.
  • Chapter
    During natural movement on the earth’s surface, we are seldom aware of vestibular sensations, and when we do become aware of them, they usually signify an unnatural stimulus or an abnormality of vestibular function. With specially contrived conditions of observation, relationships between acceleratory stimuli and vestibular sensations and perceptions can be described quantitatively and qualitatively. These relationships constitute the primary subject matter of this chapter, but to appreciate this material in relation to daily experiences, we must first consider why vestibular sensations do not typically achieve conscious awareness during natural voluntary head and body movement.
  • Article
Involuntary physical responses to vestibular stimulation are very fast. The vestibulo-ocular reflex, for example, occurs approximately 20 ms after the onset of vestibular stimulation (Lorente de No, 1933, Arch Neurol Psychiat). Despite these fast responses, reaction time (RT) to the perceived onset of vestibular stimulation occurs as late as 438 ms after galvanic vestibular stimulation, which is approximately 220 ms later than RTs to visual, somatosensory and auditory stimuli (Barnett-Cowan & Harris, 2009, Exp Brain Res). To determine whether RTs to natural vestibular stimulation are also slow, participants in the present study were passively moved forwards by 0.1178 m (single-cycle sinusoidal acceleration; 0.75 m/s² peak acceleration) using a Stewart motion platform and were asked to press a button relative to the onset of physical motion. RTs to auditory and visual stimuli were also collected. RTs to physical motion occurred significantly later (>100 ms) than RTs to auditory and visual stimuli. Event-related potentials (ERPs) were simultaneously recorded, where the onset of the vestibular ERP in both RT and non-RT trials occurred about 200 ms or more after stimulus onset while the onset of the auditory and visual ERPs occurred less than 100 ms after stimulus onset. ERPs for all stimuli occurred approximately 135 ms prior to RTs. These results provide further evidence that vestibular perception is slow compared to the other senses and that this perceptual latency may be related to latent cortical responses to physical motion.
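As a consistency check on the numbers quoted above: for a single-cycle sinusoidal acceleration a(t) = A·sin(2πt/T) starting from rest, the total displacement is A·T²/(2π). With the stated peak of 0.75 m/s², the quoted 0.1178 m displacement corresponds to a cycle of roughly 1 s; that duration is not stated in the abstract and is inferred here.

```python
import numpy as np

A = 0.75                    # m/s^2, stated peak acceleration
T = 1.0                     # s, assumed duration of the single-cycle sinusoid (inferred)
t = np.linspace(0.0, T, 100001)

acc = A * np.sin(2.0 * np.pi * t / T)      # single-cycle sinusoidal acceleration
vel = np.cumsum(acc) * (t[1] - t[0])       # starts and ends at rest
pos = np.cumsum(vel) * (t[1] - t[0])       # forward displacement

print(f"numerical displacement: {pos[-1]:.4f} m")          # ~0.119 m
print(f"analytic  A*T^2/(2*pi): {A * T**2 / (2.0 * np.pi):.4f} m")
```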
  • Article
An experiment was conducted on the University of Toronto Institute for Aerospace Studies Flight Research Simulator to determine the minimum phase lead of pitch motion cues relative to pitch visual cues that can be consistently detected by a human observer. The effects of pitch frequency and amplitude, motion gain, and visual scene complexity on the detection threshold of the phase error between the visual and motion cues were determined. The mean detection threshold of the phase error averaged across all subjects and conditions was 57 deg. Pitch amplitude significantly affected the detection threshold of the phase error. Higher amplitudes led to lower detection thresholds. Motion gain also had a significant effect on the detection threshold of the phase error when the frequency was 0.2 Hz or the visual complexity was low. Higher motion gains led to lower detection thresholds. The frequency had a significant effect on the detection threshold of the phase error when the motion gain was 0.5 or the visual complexity was low. Higher frequencies led to lower detection thresholds. The direction of the frequency effect suggests that subjects perform more like motion-visual phase detectors than motion-visual time delay detectors. The results of the experiment were used to analyze pitch high-pass washout filters. The analysis suggests that the break frequency for a second-order washout filter should be lower than 0.13 rad/s and the break frequency for a first-order filter should be lower than 0.2 rad/s to keep the motion-visual phase error below the measured human perception limit.
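The washout-filter conclusion can be illustrated with the standard high-pass forms H1(s) = s/(s + wb) and H2(s) = s²/(s + wb)², whose phase lead at angular frequency w is atan(wb/w) and 2·atan(wb/w) respectively. The sketch below compares that lead against the 57 deg perceptual threshold for a few frequencies; it is only an illustration under these assumed filter forms, not a reproduction of the authors' analysis.

```python
import numpy as np

def phase_lead_deg(omega, omega_break, order):
    """Phase lead (deg) of a high-pass washout filter (s / (s + wb))**order
    evaluated at angular frequency omega (rad/s)."""
    return order * np.degrees(np.arctan2(omega_break, omega))

threshold_deg = 57.0                         # mean detection threshold from the study
freqs_hz = np.array([0.05, 0.1, 0.2, 0.5])
omega = 2.0 * np.pi * freqs_hz

for wb in (0.13, 0.2):                       # candidate break frequencies, rad/s
    for order in (1, 2):
        lead = phase_lead_deg(omega, wb, order)
        pairs = ", ".join(f"{f:g} Hz: {p:.0f} deg" for f, p in zip(freqs_hz, lead))
        print(f"order {order}, wb = {wb:4.2f} rad/s -> {pairs} "
              f"(exceeds threshold: {(lead > threshold_deg).any()})")
```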
  • Article
    Vision is believed to dominate our multisensory perception of the world. Here we overturn this established view by showing that auditory information can qualitatively alter the perception of an unambiguous visual stimulus to create a striking visual illusion. Our findings indicate that visual perception can be manipulated by other sensory modalities.
  • Article
The problem of this study was to determine the reaction time of voluntary response to the perception of passive rotary motion of the body with visual and auditory cues either masked or removed. Results are summarized as follows: (1) The average vestibular reaction time to passive body motion was 0.598 seconds, with a range of 0.190 to 1.79 seconds. (2) The reaction time to successive oscillations was reliably slower than the reaction time to discrete movements, probably due to the peculiarities of the vestibular system. (3) The reaction time to the left was 0.002 seconds slower than the reaction time to the right (0.599 vs. 0.597), a difference that was not significant. (4) The reaction time for men appeared to be shorter than that for women, though the results are not statistically significant. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
  • Article
Background: While, optimally, activities are provided at those moments when the individual with profound intellectual and multiple disabilities (PIMD) is focused on the environment or 'alert', detailed information about the impact that the design and timing of the activity has on alertness is lacking. Therefore, the aim of the present study is to shed light on the sequential relationship between different stimuli and alertness levels in individuals with PIMD. Method: Video observations were conducted for 24 participants during one-on-one interactions with a direct support person in multisensory environments. Time-window sequential analyses were conducted for the 120 s following four different stimuli. Results: For the different stimuli, different patterns in terms of alertness became apparent. Following visual stimuli, the alertness levels of the individuals with PIMD changed in waves of about 20 s from 'active alert' to 'passive alert'. While auditory and tactile stimuli led to 'alert' reactions shortly after the stimulation, alertness levels decreased between seconds 20 and 120. Reactions to vestibular stimuli were only visible after 60 s; these were 'active alert' or 'withdrawn'. Conclusions: The results of the present study show that individuals with PIMD show their reactions to stimuli only slightly, so that 'waves' might reflect the optimal alertness pattern for learning and development. Consequently, it is especially important that direct support persons follow and stimulate these individual 'waves' in the activities they provide to their clients.
  • Article
    Full-text available
BACKGROUND: The rotational vestibulo-ocular reflex (rVOR) generates compensatory eye movements in response to rotational head accelerations. The velocity-storage mechanism (VSM), which is controlled by the vestibulo-cerebellar nodulus and uvula, determines the rVOR time constant. In healthy subjects, it has been suggested that self-motion perception in response to earth-vertical axis rotations depends on the VSM in a similar way as reflexive eye movements. We aimed at further investigating this hypothesis and speculated that if the rVOR and rotational self-motion perception share a common VSM, alterations in the latter, such as those occurring after a loss of the regulatory control by vestibulo-cerebellar structures, would result in similar reflexive and perceptual response changes. We therefore set out to explore both responses in patients with vestibulo-cerebellar degeneration. METHODOLOGY/PRINCIPAL FINDINGS: Reflexive eye movements and perceived rotational velocity were simultaneously recorded in 14 patients with chronic vestibulo-cerebellar degeneration (28-81 yrs) and 12 age-matched healthy subjects (30-72 yrs) after the sudden deceleration (90°/s²) from constant-velocity (90°/s) rotations about the earth-vertical yaw and pitch axes. rVOR and perceived rotational velocity data were analyzed using a two-exponential model with a direct pathway, representing semicircular canal activity, and an indirect pathway, implementing the VSM. We found that VSM time constants of rVOR and perceived rotational velocity co-varied in cerebellar patients and in healthy controls (Pearson correlation coefficient for yaw 0.95; for pitch 0.93, p0.8). CONCLUSIONS/SIGNIFICANCE: Our results confirm that self-motion perception in response to rotational velocity steps may be controlled by the same velocity storage network that controls reflexive eye movements and that no additional, e.g. cortical, mechanisms are required to explain perceptual dynamics.
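A minimal numerical sketch of the direct-plus-indirect (velocity storage) idea is given below: after a sudden stop from constant-velocity rotation, the canal signal decays with a short time constant, while a leaky integrator charged by that signal ('velocity storage') prolongs the decay of the combined estimate. The time constants and coupling gain are illustrative assumptions, not the values fitted in this study.

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 60.0, dt)

tau_canal = 6.0        # s, assumed cupula/afferent time constant
tau_storage = 15.0     # s, assumed velocity-storage time constant
k = 0.1                # assumed coupling gain into the storage integrator

# Direct pathway: after a sudden stop from a 90 deg/s rotation, the canal
# signal decays exponentially (and mis-signals rotation in the dark).
canal = 90.0 * np.exp(-t / tau_canal)                     # deg/s

# Indirect pathway: leaky integrator charged by the canal signal.
storage = np.zeros_like(t)
for i in range(1, len(t)):
    storage[i] = storage[i - 1] + dt * (-storage[i - 1] / tau_storage + k * canal[i - 1])

estimate = canal + storage        # stand-in for perceived velocity / rVOR drive

# Effective time constant: time for the combined estimate to fall to 1/e of its peak.
peak = estimate.max()
t_eff = t[np.flatnonzero(estimate <= peak / np.e)[0]]
print(f"effective decay time constant ≈ {t_eff:.1f} s (canal alone: {tau_canal:.0f} s)")
```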
  • Article
    Full-text available
    Understanding the dynamics of vestibular perception is important, for example, for improving the realism of motion simulation and virtual reality environments or for diagnosing patients suffering from vestibular problems. Previous research has found a dependence of direction discrimination thresholds for rotational motions on the period length (inverse frequency) of a transient (single cycle) sinusoidal acceleration stimulus. However, self-motion is seldom purely sinusoidal, and up to now, no models have been proposed that take into account non-sinusoidal stimuli for rotational motions. In this work, the influence of both the period length and the specific time course of an inertial stimulus is investigated. Thresholds for three acceleration profile shapes (triangular, sinusoidal, and trapezoidal) were measured for three period lengths (0.3, 1.4, and 6.7 s) in ten participants. A two-alternative forced-choice discrimination task was used where participants had to judge if a yaw rotation around an earth-vertical axis was leftward or rightward. The peak velocity of the stimulus was varied, and the threshold was defined as the stimulus yielding 75 % correct answers. In accordance with previous research, thresholds decreased with shortening period length (from ~2 deg/s for 6.7 s to ~0.8 deg/s for 0.3 s). The peak velocity was the determining factor for discrimination: Different profiles with the same period length have similar velocity thresholds. These measurements were used to fit a novel model based on a description of the firing rate of semi-circular canal neurons. In accordance with previous research, the estimates of the model parameters suggest that velocity storage does not influence perceptual thresholds.
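The threshold definition used above (the stimulus yielding 75% correct in a two-alternative task) is typically obtained by fitting a psychometric function to proportion-correct data and reading off its midpoint. A minimal sketch under that assumption is given below; the response data are hypothetical placeholders, not measurements from the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical 2AFC data: peak velocity of the yaw rotation (deg/s) and the
# proportion of correct leftward/rightward judgments at each level.
peak_velocity = np.array([0.2, 0.4, 0.8, 1.6, 3.2])
p_correct = np.array([0.52, 0.63, 0.78, 0.94, 0.99])

def pf(v, mu, sigma):
    """2AFC psychometric function: 50% guess rate rising to 100% correct,
    cumulative Gaussian in log(peak velocity)."""
    return 0.5 + 0.5 * norm.cdf(np.log(v), loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(pf, peak_velocity, p_correct, p0=(0.0, 1.0))

# 75% correct is reached where the cumulative Gaussian equals 0.5, i.e. at exp(mu).
print(f"75%-correct velocity threshold ≈ {np.exp(mu):.2f} deg/s")
```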
  • Article
    Full-text available
    The perception of simultaneity between auditory and vestibular information is crucially important for maintaining a coherent representation of the acoustic environment whenever the head moves. It has been recently reported, however, that despite having similar transduction latencies, vestibular stimuli are perceived significantly later than auditory stimuli when simultaneously generated. This suggests that perceptual latency of a head movement is longer than a co-occurring sound. However, these studies paired a vestibular stimulation of long duration (~1 s) and of a continuously changing temporal envelope with a brief (10-50 ms) sound pulse. In the present study, the stimuli were matched for temporal envelope duration and shape. Participants judged the temporal order of the two stimuli, the onset of an active head movement and the onset of brief (50 ms) or long (1,400 ms) sounds with a square- or raised-cosine-shaped envelope. Consistent with previous reports, head movement onset had to precede the onset of a brief sound by about 73 ms in order for the stimuli to be perceived as simultaneous. Head movements paired with long square sounds (~100 ms) were not significantly different than brief sounds. Surprisingly, head movements paired with long raised-cosine sound (~115 ms) had to be presented even earlier than brief stimuli. This additional lead time could not be accounted for by differences in the comparison stimulus characteristics (temporal envelope duration and shape). Rather, differences between sound conditions were found to be attributable to variability in the time for head movement to reach peak velocity: the head moved faster when paired with a brief sound. The persistent lead time required for vestibular stimulation provides further evidence that the perceptual latency of vestibular stimulation is greater than the other senses.
  • Conference Paper
    Full-text available
    The present study investigated the feasibility of acquiring electroencephalography (EEG) data during self-motion in human subjects. Subjects performed a visual oddball task - designed to evoke a P3 event-related potential - while being passively moved in the fore-aft direction on a Stewart platform. The results of this study indicate that reliable EEG data can be obtained during self-motion on a Stewart platform: this finding is important for the ecological validity of further research into human motion.
  • Article
    Integration of balance-related cues from the vestibular and other sensory systems requires that they be perceived simultaneously despite arriving asynchronously at the central nervous system. Failure to perform temporal integration of multiple sensory signals represents a novel mechanism to explain symptoms in patients with imbalance. This study tested the ability of normal observers to compensate for sensory asynchronies between vestibular and auditory inputs. Double-blinded experimental design. We performed whole-body rotations about the earth-vertical axis following a raised-cosine trajectory at 0.5 and 1.0 Hz to several peak velocities up to a maximum of 180°/s in five normal subjects. Headphones were used to present a diotic auditory stimulus at various times relative to the onset of the rotation. Subjects were required to indicate which cue occurred first. The vestibular stimulus needed to be presented 61 milliseconds (at a stimulus frequency of 0.5 Hz) and 19 milliseconds (at 1.0 Hz) before the auditory stimulus. Stimuli presented within a window of 300 milliseconds (at 0.5 Hz) to 200 milliseconds (at 1.0 Hz) were judged to be simultaneous. The central nervous system must accommodate for delays in perception of vestibular and other sensory cues. Inaccurate temporal integration of these inputs represents a novel explanation for symptoms of imbalance.
  • Article
    Full-text available
    Convergence of vestibular and visual motion information is important for self-motion perception. One cortical area that combines vestibular and optic flow signals is the ventral intraparietal area (VIP). We characterized unisensory and multisensory responses of macaque VIP neurons to translations and rotations in three dimensions. Approximately one-half of VIP cells show significant directional selectivity in response to optic flow, one-half show tuning to vestibular stimuli, and one-third show multisensory responses. Visual and vestibular direction preferences of multisensory VIP neurons could be congruent or opposite. When visual and vestibular stimuli were combined, VIP responses could be dominated by either input, unlike the medial superior temporal area (MSTd) where optic flow tuning typically dominates or the visual posterior sylvian area (VPS) where vestibular tuning dominates. Optic flow selectivity in VIP was weaker than in MSTd but stronger than in VPS. In contrast, vestibular tuning for translation was strongest in VPS, intermediate in VIP, and weakest in MSTd. To characterize response dynamics, direction-time data were fit with a spatiotemporal model in which temporal responses were modeled as weighted sums of velocity, acceleration, and position components. Vestibular responses in VIP reflected balanced contributions of velocity and acceleration, whereas visual responses were dominated by velocity. Timing of vestibular responses in VIP was significantly faster than in MSTd, whereas timing of optic flow responses did not differ significantly among areas. These findings suggest that VIP may be proximal to MSTd in terms of vestibular processing but hierarchically similar to MSTd in terms of optic flow processing.
  • Article
    Full-text available
    The brain can know about an active head movement even in advance of its execution by means of an efference copy signal. In fact, sensory correlates of active movements appear to be suppressed. Passive disturbances of the head, however, can be detected only by sensory feedback. Might the perceived timing of an active head movement be speeded relative to the perception of a passive movement due to the efferent copy (anticipation hypothesis) or delayed because of sensory suppression (suppression hypothesis)? We compared the perceived timing of active and passive head movement using other sensory events as temporal reference points. Participants made unspeeded temporal order and synchronicity judgments comparing the perceived onset of active and passive head movement with the onset of tactile, auditory and visual stimuli. The comparison stimuli had to be delayed by about 45 ms to appear coincident with passive head movement or by about 80 ms to appear aligned with an active head movement. The slow perceptual reaction to vestibular activation is compatible with our earlier study using galvanic stimulation (Barnett-Cowan and Harris 2009). The unexpected additional delay in processing the timing of an active head movement is compatible with the suppression hypothesis and is discussed in relation to suppression of vestibular signals during self-generated head movement.
  • Article
    Full-text available
    Vestibular responses have been reported in the parietoinsular vestibular cortex (PIVC), the ventral intraparietal area (VIP), and the dorsal medial superior temporal area (MSTd) of macaques. However, differences between areas remain largely unknown, and it is not clear whether there is a hierarchy in cortical vestibular processing. We examine the spatiotemporal characteristics of macaque vestibular responses to translational motion stimuli using both empirical and model-based analyses. Temporal dynamics of direction selectivity were similar across areas, although there was a gradual shift in the time of peak directional tuning, with responses in MSTd typically being delayed by 100-150 ms relative to responses in PIVC (VIP was intermediate). Responses as a function of both stimulus direction and time were fit with a spatiotemporal model consisting of separable spatial and temporal response profiles. Temporal responses were characterized by a Gaussian function of velocity, a weighted sum of velocity and acceleration, or a weighted sum of velocity, acceleration, and position. Velocity and acceleration components contributed most to response dynamics, with a gradual shift from acceleration dominance in PIVC to velocity dominance in MSTd. The position component contributed little to temporal responses overall, but was substantially larger in MSTd than PIVC or VIP. The overall temporal delay in model fits also increased substantially from PIVC to VIP to MSTd. This gradual transformation of temporal responses suggests a hierarchy in cortical vestibular processing, with PIVC being most proximal to the vestibular periphery and MSTd being most distal.
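The spatiotemporal model described above expresses a neuron's temporal response as a weighted combination of stimulus velocity, acceleration, and position. The sketch below shows the core of such a decomposition as a simple least-squares fit on simulated data; the real analysis also involves direction tuning, response delays, and nonlinearities, and the numbers here are illustrative, not recorded responses.

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 2.0, dt)

# Gaussian velocity profile of a translational motion stimulus, plus its
# derivative (acceleration) and integral (position).
velocity = np.exp(-0.5 * ((t - 1.0) / 0.25) ** 2)
acceleration = np.gradient(velocity, dt)
position = np.cumsum(velocity) * dt

# Simulated firing rate dominated by acceleration (as reported for PIVC),
# with a smaller velocity contribution, a baseline, and noise.
rng = np.random.default_rng(0)
firing_rate = 20.0 + 8.0 * velocity + 15.0 * acceleration + rng.normal(0.0, 1.0, t.size)

# Least-squares decomposition: design matrix [velocity, acceleration, position, constant].
X = np.column_stack([velocity, acceleration, position, np.ones_like(t)])
(w_v, w_a, w_p, baseline), *_ = np.linalg.lstsq(X, firing_rate, rcond=None)
print(f"velocity weight {w_v:.1f}, acceleration weight {w_a:.1f}, "
      f"position weight {w_p:.1f}, baseline {baseline:.1f}")
```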
  • Article
    Although falling is a significant problem for older persons, little is understood about its underlying causes. Spatial cognition and balance maintenance rely on the efficient integration of information across the main senses. We investigated general multisensory efficiency in older persons with a history of falls compared to age- and sensory-acuity-matched controls and younger adults using a sound-induced flash illusion. Older fallers were as susceptible to the illusion as age-matched non-fallers or younger adults at a short delay of 70 ms between the auditory and visual stimuli. Both older adult groups were more susceptible to the illusion at longer stimulus onset asynchronies (SOAs) than younger adults. However, with increasing delays between the visual and auditory stimuli, older fallers did not show a decline in the frequency at which the illusion was experienced, even with delays of up to 270 ms. We argue that this relatively higher susceptibility to the illusion reflects inefficient audio-visual processing in the central nervous system and has important implications for the diagnosis and rehabilitation of falling in older persons.
  • Article
    Full-text available
    Integration of cues from multiple sensory channels improves our ability to sense and respond to stimuli. Cues arising from a single event may arrive at the brain asynchronously, requiring them to be "bound" in time. The perceptual asynchrony between vestibular and auditory stimuli has been reported to be several times greater than other stimulus pairs. However, these data were collected using electrically evoked vestibular stimuli, which may not provide similar results to those obtained using actual head rotations. Here, we tested whether auditory stimuli and vestibular stimuli consisting of physiologically relevant mechanical rotations are perceived with asynchronies consistent with other sensory systems. We rotated 14 normal subjects about the earth-vertical axis over a raised-cosine trajectory (0.5 Hz, peak velocity 10 deg/s) while isolated from external noise and light. This trajectory minimized any input from extravestibular sources such as proprioception. An 800-Hz, 10-ms auditory tone was presented at stimulus onset asynchronies ranging from 200 ms before to 700 ms after the onset of motion. After each trial, subjects reported whether the stimuli were "simultaneous" or "not simultaneous." The experiment was repeated, with subjects reporting whether the tone or rotation came first. After correction for the time the rotational stimulus took to reach vestibular perceptual threshold, asynchronies spanned from -41 ms (auditory stimulus leading vestibular) to 91 ms (vestibular stimulus leading auditory). These values are significantly lower than those previously reported for stimulus pairs involving electrically evoked vestibular stimuli and are more consistent with timing relationships between pairs of non-vestibular stimuli.
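The raised-cosine trajectory used here has a simple closed form, sketched below with the parameters quoted in the abstract (0.5 Hz, 10 deg/s peak). The threshold velocity in the last lines is an assumed value, included only to show the kind of correction the authors describe.

```python
import numpy as np

f_hz, peak = 0.5, 10.0                        # frequency (Hz) and peak velocity (deg/s), from the abstract
t = np.linspace(0.0, 1.0 / f_hz, 1000)        # one full cycle (2 s)
velocity = 0.5 * peak * (1.0 - np.cos(2.0 * np.pi * f_hz * t))   # raised-cosine velocity, deg/s

# Time for the rotation to reach a putative perceptual threshold velocity
# (the 1 deg/s threshold is an assumption for illustration, not the paper's value).
threshold = 1.0                                # deg/s
t_threshold = t[np.argmax(velocity >= threshold)]
print(f"velocity reaches {threshold} deg/s about {1000 * t_threshold:.0f} ms after motion onset")
```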
  • Article
    Full-text available
    In everyday life, vestibular sensors are activated by both self-generated and externally applied head movements. The ability to distinguish inputs that are a consequence of our own actions (i.e., active motion) from those that result from changes in the external world (i.e., passive or unexpected motion) is essential for perceptual stability and accurate motor control. Recent work has made progress toward understanding how the brain distinguishes between these two kinds of sensory inputs. We have performed a series of experiments in which single-unit recordings were made from vestibular afferents and central neurons in alert macaque monkeys during rotation and translation. Vestibular afferents showed no differences in firing variability or sensitivity during active movements when compared to passive movements. In contrast, the analyses of neuronal firing rates revealed that neurons at the first central stage of vestibular processing (i.e., in the vestibular nuclei) were effectively less sensitive to active motion. Notably, however, this ability to distinguish between active and passive motion was not a general feature of early central processing, but rather was a characteristic of a distinct group of neurons known to contribute to postural control and spatial orientation. Our most recent studies have addressed how vestibular and proprioceptive inputs are integrated in the vestibular cerebellum, a region likely to be involved in generating an internal model of self-motion. We propose that this multimodal integration within the vestibular cerebellum is required for eliminating self-generated vestibular information from the subsequent computation of orientation and posture control at the first central stage of processing.
  • Article
    Full-text available
    Self-motion perception after a sudden stop from a sustained rotation in darkness lasts approximately as long as reflexive eye movements. We hypothesized that, after an angular velocity step, self-motion perception and reflexive eye movements are driven by the same vestibular pathways. In 16 healthy subjects (25-71 years of age), perceived rotational velocity (PRV) and the vestibulo-ocular reflex (rVOR) after sudden decelerations (90°/s²) from constant-velocity (90°/s) earth-vertical axis rotations were simultaneously measured (PRV reported by hand-lever turning; rVOR recorded by search coils). Subjects were upright (yaw) or 90° left-ear-down (pitch). After both yaw and pitch decelerations, PRV rose rapidly and showed a plateau before decaying. In contrast, slow-phase eye velocity (SPV) decayed immediately after the initial increase. SPV and PRV were fitted with the sum of two exponentials: one time constant accounting for the semicircular canal (SCC) dynamics and one accounting for a central process known as the velocity storage mechanism (VSM). Parameters were constrained by requiring equal SCC and VSM time constants for SPV and PRV; the gains weighting the two exponential functions were free to vary. Both SPV (variance-accounted-for: 0.85 ± 0.10) and PRV (variance-accounted-for: 0.86 ± 0.07) were accurately fitted, showing that the differences between the SPV and PRV curves can be explained by a greater relative weight of the VSM in PRV compared with SPV (twofold for yaw, threefold for pitch). These results support our hypothesis that self-motion perception after angular velocity steps is driven by the same central vestibular processes as reflexive eye movements and that no additional mechanisms are required to explain the perceptual dynamics.
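A minimal sketch of the two-exponential model described above: both slow-phase eye velocity and perceived rotational velocity are expressed as a weighted sum of a semicircular-canal exponential and a velocity-storage exponential with shared time constants, differing only in the relative weights. All numerical values below are illustrative, not the fitted parameters from the study.

```python
import numpy as np

t = np.linspace(0.0, 60.0, 600)        # time after the velocity step (s)
tau_scc, tau_vsm = 6.0, 15.0           # shared time constants (assumed values)

def two_exponential(g_scc, g_vsm):
    """Sum of a canal-dynamics exponential and a velocity-storage exponential."""
    return g_scc * np.exp(-t / tau_scc) + g_vsm * np.exp(-t / tau_vsm)

spv = two_exponential(g_scc=60.0, g_vsm=30.0)   # eye velocity: smaller relative VSM weight
prv = two_exponential(g_scc=60.0, g_vsm=60.0)   # perceived velocity: larger relative VSM weight
```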
  • Article
    Full-text available
    Different senses have different processing times. Here we measured the perceived timing of galvanic vestibular stimulation (GVS) relative to tactile, visual and auditory stimuli. Simple reaction times to perceived head movement (438 ± 49 ms) were significantly longer than to touches (245 ± 14 ms), lights (220 ± 13 ms), or sounds (197 ± 13 ms). Temporal order and simultaneity judgments both indicated that GVS had to occur about 160 ms before the other stimuli to be perceived as simultaneous with them. This lead was significantly smaller than that predicted by the reaction time differences, compatible with an incomplete tendency to compensate for differences in processing times.
  • Article
    Vertebrate hair cells, the primary receptors of auditory, vestibular and lateral-line organs, occur in epithelia which separate fluids of differing ionic composition. The apical surfaces of hair cells, on which the mechanosensitive hair bundles are situated, face a high-K+ fluid (termed endolymph in the inner ear); the basolateral surfaces instead contact fluid (perilymph or a related substance) of a composition similar to that of other extracellular fluids [1-3]. The universal occurrence of high-K+ fluid on the apical surfaces of hair cells in vertebrates has been taken as evidence that it is important for the transduction process, in particular that it relates to the ionic specificity [4] of the conductance change [5] underlying the receptor potential. There is, however, conflicting experimental evidence regarding this specificity. K+ has generally been thought to carry the receptor current, as replacement of endolymph with perilymph in the guinea pig cochlea abolishes the extracellularly recorded microphonic potential [6]. Yet microphonic potentials, as well as intracellular receptor potentials, have been recorded in other preparations when the apical surfaces of the hair cells faced instead a high-Na+ saline, and thus when the electrochemical gradient for K+ was near zero [5,7]. Ca2+ has also been proposed to carry the receptor current [8], but its concentration is quite low in endolymph [3], particularly that of the mammalian cochlea [9]. We present evidence here that the receptor current in a vertebrate hair cell is carried in vivo by K+, but that the transduction channel is in fact nonspecific, being permeable to Li+, Na+, K+, Rb+, Cs+, Ca2+, and at least one small organic cation.
  • Article
    Full-text available
    Human subjective thresholds and directional sensitivity were investigated as a function of vertical linear acceleration with head erect. A hyperbolic (r = 0.94) relation emerged between threshold latency and acceleration magnitude (range 0.005 to 0.06 g). This implies that detection was determined by attainment of a given velocity (21.6 ± 2.65 cm/s) rather than the acceleration magnitude per se. Re-analysis of previous data from horizontal accelerations conducted with head erect and supine revealed similar hyperbolic relations (r = 0.98 in both cases) with velocity constants of 22.6 ± 1.28 and 32.4 ± 1.96 cm/s, respectively. From these findings it is inferred that with head erect (i.e., the normal attitude with respect to gravity) the thresholds to predominantly utricular (horizontal acceleration) and saccular (vertical acceleration) stimulation were similar (P > 0.7). However, with head "supine" the saccular threshold was increased to approximately 1.5 times normal (P < 0.001). The results also confirmed a previously reported difficulty in the subjective detection of the direction of vertical movement.
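The hyperbolic relation described above amounts to saying that, under constant acceleration, detection occurs once a fixed velocity has been reached, so latency = velocity constant / acceleration. A short sketch using the velocity constant quoted for vertical motion with the head erect (the acceleration values are sample points from the stated range):

```python
G_CM_S2 = 981.0   # 1 g expressed in cm/s^2

def threshold_latency(acceleration_g, v_const_cm_s=21.6):
    """Predicted detection latency (s) if detection requires reaching a fixed velocity."""
    return v_const_cm_s / (acceleration_g * G_CM_S2)

for a_g in (0.005, 0.01, 0.03, 0.06):
    print(f"a = {a_g:.3f} g -> predicted latency ~ {threshold_latency(a_g):.2f} s")
```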
  • Article
    Full-text available
    1. The response to static tilts was studied in peripheral otolith neurons in the barbiturate-anesthetized squirrel monkey (Saimiri sciureus). Each unit was characterized by a functional polarization vector, which defines the axis of greatest sensitivity. A circumstantial criterion was used to assign units to the inferior (IN) or superior (SN) vestibular nerves. The former neurons should innervate the sacculus, the latter mainly the utriculus. Confirming past experiments, the polarization vectors for SN units lay near the plane of the utricular macula, those for IN units near the plane of the saccular macula. The polarization vectors for IN units were compared in two groups of animals. In one group, the vestibular nerve was intact; in the other, the superior nerve was sectioned. No differences were found, and this, together with other observations, demonstrates that the sacculus in mammals functions mainly (if not solely) as an equilibrium organ. 2. The resting discharge of otolith neurons averages some 60 spikes/s, the sensitivity some 30-40 spikes/s per g. IN units tend to have slightly lower sensitivities than do SN units. IN units with upwardly directed (+Z) vectors have substantially higher resting discharges than do units with downwardly directed (-Z) vectors. The +Z units are also characterized by more linear force-response relations. 3. There is a strong positive relation between the resting discharge and sensitivity of units characterized by regular steady-state discharge patterns. A weaker, but statistically significant, relation is demonstrable for irregular units. It is suggested that the relation seen in regular units is the result of the neurons differing from one another in terms of a receptor bias, a transduction gain, or both. Only the mechanism based on transduction gain is thought to be operative among the population of irregular units. 4. Centrifugal-force trapezoids were used to study the response adaptation to prolonged stimulation. Adaptation was more conspicuous in irregular units and was characterized by per-stimulus response declines and post-stimulus secondary responses. For regular units, adaptive properties were similar during excitatory and inhibitory responses. For irregular units, response declines were larger during excitatory stimuli, secondary responses larger following inhibitory stimuli. Typically, response declines were most rapid at the start of the force plateau. A few units, all of them irregular, exhibited a delayed adaptation with response declines beginning only after a constant force had been maintained for 10-20 s. 5. Excitatory responses of regular units are almost always larger than inhibitory responses. This is so during both the dynamic and static portions of force trapezoids. A similar asymmetry is seen in the dynamic response of irregular units; static response asymmetries of the latter units are more variable.
  • Article
    Full-text available
    The egocentric localization of objects in extrapersonal space requires that the retinal and extraretinal signals specifying the gaze direction be simultaneously processed. The question as to whether the extraretinal signal is of central or peripheral origin is still a matter of controversy, however. Three experiments were carried out to investigate the following hypotheses: 1) that the proprioceptive feedback originating in eye and neck muscles might provide the CNS with some indication about the gaze direction; and 2) that the retinal and proprioceptive extraretinal inputs might be jointly processed depending on whether they are of monocular or binocular origin. Application of low amplitude mechanical vibrations to either the extraocular or neck muscles (or both) of a subject looking monocularly at a small luminous target in darkness resulted in an illusory movement of the target, the direction of which depended on which muscle was stimulated. A slow upward target displacement occurred on vibrating the eye inferior rectus or the neck sterno-cleido-mastoidus muscles, whereas a downward shift was induced when the dorsal neck muscles (trapezius and splenius) were vibrated. The extent of the perceptual effects reported by subjects was measured in an open-loop pointing task in which they were asked to point at the perceived position of the target. These results extend to visually-oriented behavior the role of extraocular and neck proprioceptive inputs previously described in the case of postural regulation, since they clearly show that these messages contribute to specifying the gaze direction. This suggests that the extraretinal signal might include a proprioceptive component.
  • Article
    This is a review of selected aspects of the history of the vestibular system (J. E. Purkyne, E. Mach, A. Crum-Brown) and of our current understanding of vestibular malfunction in clinical vertigo syndromes. Evidence is presented for a preliminary classification of central vestibular brainstem syndromes according to the three major planes of action of the vestibulo-ocular reflex (VOR): (1) disorders of the VOR in the horizontal (yaw) plane (horizontal nystagmus, pseudo 'vestibular neuritis'); (2) disorders of the VOR in the sagittal (pitch) plane (downbeat nystagmus; upbeat nystagmus); (3) disorders of the VOR in the frontal (roll) plane (ocular tilt reaction; lateropulsion). The pathophysiology of peripheral vestibular disorders is discussed: a specific gravity differential between the cupula fluid and the endolymph (buoyancy mechanism) causes vertigo in benign paroxysmal positioning vertigo and positional alcohol nystagmus. Vestibular neuritis is probably a partial unilateral vestibular paralysis due to viral infection of the superior division of the nerve trunk. The common post-traumatic vertigo is explained by otolith dysfunction secondary to dislodged otoconia resulting in unequal loads on the macula beds and a tonus imbalance between the two otoliths.
  • Article
    Vibration of the posterior muscles of the neck in human subjects induces illusions of displacement and movement of a visual target when there is no visual reference (Biguer et al., 1988). Although illusions of head movement are rarely reported by subjects, when they point to the location of the nose they demonstrate an alteration of the perceived position of the head. The kinaesthetic illusion is in a direction consistent with the visual illusion but is of smaller magnitude.
  • Article
    Thresholds for the detection (at p = 0.75 correct) of the direction of discrete angular movements about a vertical Z axis, having a cosine bell velocity trajectory and a duration of 3.3 s, were determined using an adaptive psychophysical procedure. In 30 subjects the mean threshold for the detection of Z axis stimuli was 1.5 deg.s-1. X and Y axis thresholds of 20 subjects had mean values of 2.04 and 2.07 deg.s-1, respectively, and were significantly higher than Z axis thresholds. The mean Z axis threshold of 6 subjects, who viewed a visual target fixed to the turntable, was reduced by 8.6 dB over that obtained in darkness. Z axis thresholds were found to increase at 5.9 dB/decade as a function of stimulus duration over the range 0.9 to 20 s. The possible implication of this finding in relation to the dynamics of the sensory system mediating the perception of whole-body angular movement is discussed.
  • Article
    The retinal coordinates of an image are normally insufficient to define the direction of an object in body-centred visual space. Gaze direction, specified by information on the position of eye-in-head and on the position of head-on-torso, is also required. While the source of the eye-in-head signal is controversial, it is clear that proprioceptive signals from neck muscles are sufficient to provide head-on-torso information. Observations by Goodwin et al., beginning in 1972, that vibration of limb muscles modifies proprioception from them, and induces illusory motion and false perception of limb position, suggested this study of the effects of neck muscle vibration on the representation of visual space. Verbal reports, supported by objective measures, revealed that vibration of muscles on one side of the neck induces a visual illusion: contralateral displacement of a small visual target viewed in the dark. Pointing movements towards the target are similarly affected, confirming that the representation of directions in visual space is modified by neck muscle vibration. A second vibration-induced illusion was uncovered when apparent displacement ceased. This is an illusion of pure target motion in the same direction as the previously observed displacement. The magnitudes of both the displacement and pure motion illusions were dependent on vibration amplitude and were unrelated to real or apparent movements of eyes or head. Taken together these observations indicate that vibration of neck muscles can modify independently (1) the central representation of the instantaneous direction of gaze and (2) the signal of the velocity with which this direction is changing.
  • Article
    A projection of the vestibular nerve to the anterior bank of the central sulcus was identified. The zone is small and located within the arm field. Surface positive potentials and negative field potentials in deeper cortical layers were evoked within this field by isolated stimulation of the vestibular nerve. Field potentials after isolated stimulation of the facial and auditory nerves were recorded from distinct cortical locations clearly separate from the vestibular field. The tracks of electrodes which recorded the vestibular negative field potentials were histologically located within area 3a. This cytoarchitectonic area extends from the fundus of the central sulcus onto the cortical surface anterior to this sulcus.
  • Article
    We wish to describe a new perceptual constancy phenomenon which may be termed auditory distance constancy. If an observer views an event which produces a sound, it seems that some compensation is made for the fact that the event will stimulate the eye somewhat before it will stimulate the ear, depending on the distance of the event from the observer. For example, the sound of an event 35 m away will arrive at the ear about 100 ms after the light arrives at the eye.
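For reference, the ~100 ms figure follows directly from the travel time of sound; a trivial worked example (assuming a speed of sound of roughly 343 m/s and treating light as effectively instantaneous over such distances):

```python
SPEED_OF_SOUND_M_S = 343.0   # approximate speed of sound in air

def sound_lag_ms(distance_m):
    """Delay of the sound relative to the light from the same event, in ms."""
    return 1000.0 * distance_m / SPEED_OF_SOUND_M_S

print(f"35 m -> sound lags light by about {sound_lag_ms(35.0):.0f} ms")
```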
  • Article
    Summary: (1) Cortical potentials evoked by electrical stimulation of the vestibular nerve in the Rhesus monkey indicate that the primary receiving area for the vestibular nerve is located in the posterior part of the postcentral gyrus at the base of the intraparietal sulcus between the first and second somatosensory fields, probably in Brodmann's area 2. (2) An origin of the evoked cortical potential from other cranial nerves was excluded by extirpation of the cochlear, facial and intermedius nerves and by vestibular stimulation before and after section of the V, IX, X and XI nerve roots at the brainstem. Only sectioning the vestibular nerve abolished the response. (3) This field partially overlaps the SI cortical region responsive to electrical stimulation of the contralateral median nerve. There is interaction between vestibular and median nerve afferents within the overlap zone. (4) It is postulated that the primary cortical vestibular field contributes information for higher motor regulation and conscious spatial orientation.
  • Article
    Massachusetts Institute of Technology. Dept. of Aeronautics and Astronautics. Thesis. 1965. Sc.D.
  • Article
    In order to establish the relationship between the angular acceleration threshold and the duration of the stimulus presentation, oculogyral illusion thresholds for 10 men were determined on the Ames Man-Carrying Rotation Device. Thresholds were determined for stimulus durations of 0.5, 1, 1.5, 3, and 6 sec by a random, double-staircase psychophysical procedure. Mean oculogyral illusion thresholds ranged from 0.10°/sec² to 0.62°/sec² and varied inversely with the duration of the stimulus. The product of acceleration and duration of stimulus presentation (a·t) was found to be a significant increasing linear function of the duration of the stimulation, and not constant as earlier data tended to suggest. Differences between the ordinal presentation of the duration conditions were nonsignificant. The relation of the obtained threshold values and the a·t values to previous findings is discussed.
  • Article
    The threshold for rotation about the yaw axis was determined for constant acceleration stimuli as a function of their duration in the range from 3 to 25 s. From the torsion-swing model the following theoretical equation can be derived: $$a_{\text{thr}} = \frac{C}{1 - \exp(-t_s/\tau_1)}$$ (1), where $a_{\text{thr}}$ = acceleration amplitude at threshold, $t_s$ = duration of the acceleration, $\tau_1$ = time constant, and C = threshold for very long stimuli. According to this formula the Mulder product (i.e., the product of the threshold acceleration amplitude and the duration of the stimulus) is constant for durations up to 0.3 $\tau_1$. The best fit of this theoretical function to the somatosensory data is found for $\tau_1$ = 14.5 s and C = 0.22 deg/s². The time within which the Mulder product is constant (about 5 s) is doubtless due to the mechanics of the semicircular canals. For the oculogyral data a lower value of $\tau_1$ is found. We do not have any explanation for this lower value.
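A short sketch of the threshold-versus-duration function in Eq. (1), using the fitted values quoted for the somatosensory data (τ1 = 14.5 s, C ≈ 0.22 deg/s²); it also prints the Mulder product, which is approximately constant for short durations.

```python
import numpy as np

tau1, C = 14.5, 0.22    # time constant (s) and long-duration threshold (deg/s^2), from the abstract

def a_thr(t_s):
    """Threshold acceleration for a constant-acceleration stimulus of duration t_s (s)."""
    return C / (1.0 - np.exp(-t_s / tau1))

for t_s in (3.0, 5.0, 10.0, 25.0):
    print(f"t_s = {t_s:4.1f} s: a_thr = {a_thr(t_s):.3f} deg/s^2, "
          f"Mulder product = {a_thr(t_s) * t_s:.2f} deg/s")
```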
  • Article
    According to Einstein's equivalence principle, inertial accelerations during translational motion are physically indistinguishable from gravitational accelerations experienced during tilting movements. Nevertheless, despite ambiguous sensory representation of motion in primary otolith afferents, primate oculomotor responses are appropriately compensatory for the correct translational component of the head movement. The neural computational strategies used by the brain to discriminate the two and to reliably detect translational motion were investigated in the primate vestibulo-ocular system. The experimental protocols consisted of either lateral translations, roll tilts, or combined translation-tilt paradigms. Results using both steady-state sinusoidal and transient motion profiles in darkness or near target viewing demonstrated that semicircular canal signals are necessary sensory cues for the discrimination between different sources of linear acceleration. When the semicircular canals were inactivated, horizontal eye movements (appropriate for translational motion) could no longer be correlated with head translation. Instead, translational eye movements totally reflected the erroneous primary otolith afferent signals and were correlated with the resultant acceleration, regardless of whether it resulted from translation or tilt. Therefore, at least for frequencies in which the vestibulo-ocular reflex is important for gaze stabilization (>0.1 Hz), the oculomotor system discriminates between head translation and tilt primarily by sensory integration mechanisms rather than frequency segregation of otolith afferent information. Nonlinear neural computational schemes are proposed in which not only linear acceleration information from the otolith receptors but also angular velocity signals from the semicircular canals are simultaneously used by the brain to correctly estimate the source of linear acceleration and to elicit appropriate oculomotor responses.
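A minimal sketch (not the authors' model) of the kind of canal-otolith integration discussed above: canal-derived angular velocity rotates an internal gravity estimate, and the translational component is recovered as the difference between the sensed gravito-inertial acceleration and that estimate. Names, sign conventions, and the test scenario are assumptions made for illustration.

```python
import numpy as np

DT = 0.001                               # integration step (s)
G = np.array([0.0, 0.0, -9.81])          # gravity in head coordinates at the start (assumed)

def estimate_translation(omega_series, gia_series, g0=G):
    """omega_series: canal-derived angular velocity (rad/s, head frame), one row per step.
    gia_series: sensed gravito-inertial acceleration (gravity + translation, head frame).
    Returns the estimated translational acceleration at each step."""
    g_est = np.array(g0, dtype=float)
    estimates = []
    for omega, gia in zip(omega_series, gia_series):
        g_est = g_est + DT * (-np.cross(omega, g_est))   # dg/dt = -omega x g in the head frame
        estimates.append(gia - g_est)                    # whatever gravity cannot explain
    return np.array(estimates)

# Test scenario: a constant 10 deg/s roll with no translation. The sensed signal is then
# just gravity expressed in the rotating head frame, so the estimated translation stays ~0.
steps = 2000
omega = np.tile(np.deg2rad([10.0, 0.0, 0.0]), (steps, 1))
g_true, gia = np.array(G, dtype=float), []
for w in omega:
    g_true = g_true + DT * (-np.cross(w, g_true))
    gia.append(g_true.copy())
print(np.abs(estimate_translation(omega, np.array(gia))).max())   # ~0 m/s^2
```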
  • Article
    Full-text available
    The firing behavior of 51 non-eye movement related central vestibular neurons that were sensitive to passive head rotation in the plane of the horizontal semicircular canal was studied in three squirrel monkeys whose heads were free to move in the horizontal plane. Unit sensitivity to active head movements during spontaneous gaze saccades was compared with sensitivity to passive head rotation. Most units (29/35 tested) were activated at monosynaptic latencies following electrical stimulation of the ipsilateral vestibular nerve. Nine were vestibulo-spinal units that were antidromically activated following electrical stimulation of the ventromedial funiculi of the spinal cord at C1. All of the units were less sensitive to active head movements than to passive whole body rotation. In the majority of cells (37/51, 73%), including all nine identified vestibulo-spinal units, the vestibular signals related to active head movements were canceled. The remaining units (n = 14, 27%) were sensitive to active head movements, but their responses were attenuated by 20-75%. Most units were nearly as sensitive to passive head-on-trunk rotation as they were to whole body rotation; this suggests that vestibular signals related to active head movements were cancelled primarily by subtraction of a head movement efference copy signal. The sensitivity of most units to passive whole body rotation was unchanged during gaze saccades. A fundamental feature of sensory processing is the ability to distinguish between self-generated and externally induced sensory events. Our observations suggest that the distinction is made at an early stage of processing in the vestibular system.
  • Article
    Full-text available
    There is considerable evidence from studies on cats and monkeys that several cortical areas such as area 2v at the tip of the intraparietal sulcus, area 3av in the sulcus centralis, the parietoinsular vestibular cortex adjacent to the posterior insula (PIVC) and area 7 in the inferior parietal lobule are involved in the processing of vestibular information. Microelectrode recordings from these areas have shown that: (1) most of these cortical neurons are connected trisynaptically to the labyrinthine endorgans and (2) they receive converging vestibular, visual and somatosensory inputs. These data suggest that a multimodal cortical system is involved in postural and gaze control. In humans, recent positron emission tomography (PET) scans and functional magnetic resonance imaging (fMRI) studies have largely confirmed these data. However, because of the limited temporal resolution of these two methods, the minimum time of arrival of labyrinthine inputs from the vestibular hair cells to these cortical areas has not yet been determined. In this study, we used the evoked potential method to attempt to answer this question. Due to its excellent temporal resolution, this method is ideal for the investigation of the tri- or polysynaptic nature of the vestibulocortical pathways. Eleven volunteer patients, who underwent a vestibular neurectomy due to intractable Meniere's disease (MD) or acoustic neurinoma resection, were included in this experiment. Patients were anesthetized and the vestibular nerve was electrically stimulated. The evoked potentials were recorded by 30 subcutaneous active electrodes located on the scalp. The brain electrical source imaging (BESA) program (version 2.0, 1995) was used to calculate dipole sources. The latency period for the activation of five distinct cortical zones, including the prefrontal and/or the frontal lobe, the ipsilateral temporoparietal cortex, the anterior portion of the supplementary motor area (SMA) and the contralateral parietal cortex, was 6 ms. The short latency period recorded for each of these areas indicates that several trisynaptic pathways, passing through the vestibular nuclei and the thalamic neurons, link the primary vestibular afferents to the cortex. We suggest that all these areas, including the prefrontal area, process egomotion information and may be involved in planning motor synergies to counteract loss of equilibrium.