
Human Orientation Perception during Transitions in the Presence of Visual Cues

Jamie Voros
Ann & H.J. Smead Department of Aerospace Engineering Sciences
The University of Colorado, Boulder, CO
jamie.voros@colorado.edu

Torin K Clark
Ann & H.J. Smead Department of Aerospace Engineering Sciences
The University of Colorado, Boulder, CO
torin.clark@colorado.edu

978-1-6654-9032-0/23/$31.00 ©2023 IEEE
Abstract – Vestibular cues are critical for human perception of self-motion, and, when available, visual cues influence these perceptions. However, it is poorly understood how perception of
self-motion is affected by transitions in the presence of visual
cues (e.g., when visual cues become present, such as when a pilot
flies out of the clouds). To investigate this, 11 subjects (3 female, mean age 25 ± 4 years) were seated and rotated about an Earth-vertical yaw axis and asked to report their perception of
angular rotation by pressing a left/right button every time they
felt like they had rotated 90 degrees to the left/right. A head
mounted display provided visual rotation cues (specifically
angular velocity cues only). When present, visual cues were
always congruent with inertial rotation. We used 4 different
visual cue conditions: no visual cues, visual angular velocity
cues, visual angular velocity cues transitioning to no visual cues,
and no visual cues transitioning to visual angular velocity cues.
We experimented with 2 different rotation profiles per visual
cue condition. Based on the timing between subject button press
inputs, we inferred their perception of angular velocity.
During and immediately after a sudden loss of visual cues, we
found a transition in perception of angular velocity on the order
of 30 seconds. When visual cues appeared after vestibular cues
had deviated from reality, there was a 10 second delay before
angular velocity perception converged to that associated with
the provided visual angular velocity cues. Both of these time periods are longer than expected and longer than the time delay associated with the psychophysical task.
Our results indicate that the brain does not discard the influence
of past visual motion cues immediately after suddenly losing
visual cues. Suddenly gaining visual cues is also associated with
some time delay in integrating the cues into a central perception
of motion. By quantifying the time course of self-motion
perception following transitions in the presence of visual cues,
we can better understand pilot spatial disorientation. For
example, a terrestrial pilot flying out of clouds and an astronaut in the final stages of landing on the Moon amid dust blowback each experience a transition in the presence of visual cues affecting spatial orientation perception.
TABLE OF CONTENTS
1. INTRODUCTION
2. METHODS
3. DATA PROCESSING
4. RESULTS
5. DISCUSSION
6. CONTRIBUTIONS
ACKNOWLEDGEMENTS
REFERENCES
BIOGRAPHY
1. INTRODUCTION
Spatial orientation in humans refers to our understanding of
where we are in space and our motion [1], [2]. While humans
generally perceive orientation well [3], unusual motions [4],
[5] or environments [6]–[9] can cause misperception of
orientation, or spatial disorientation. Spatial disorientation in
pilots is considered a leading source of aviation mishaps in
both fixed-wing and rotary-wing aircraft [10], [11]. One example of spatial disorientation occurs during take-off on a commercial flight. During take-off, a commercial jet aircraft
must accelerate linearly. Such forward linear acceleration is
indistinguishable (to otoliths of the vestibular system and any
other graviceptors) from pitching backwards and can create a
sensation of pitch tilt while the aircraft is still level on the
runway. In such a scenario, the pilot must refrain from
pitching the aircraft downwards in order to counter the
spurious sensation of backwards tilt elicited by linear
acceleration.
We have a thorough understanding of the functionality and
limitations of the vestibular system [2], [3], [6], [12] and a
general understanding of typical spatial disorientation
illusions which occur when visual cues are unavailable [1],
[10], [13]–[15]. However, our existing understanding lacks
the ability to accurately predict spatial orientation in
scenarios that occur in more operational environments [16].
For example, aviation mishaps are often associated with
scenarios where visual cues rapidly degrade (e.g., flying into
a cloud) despite the presence of flight instruments. This
motivates studying orientation perception in scenarios where
the presence of visual cues suddenly changes. To our knowledge, no existing study has sought to quantify motion
perception in humans when visual cues are suddenly gained
or lost.
Although humans use cues from multiple sensing modalities to perceive orientation [17]–[21], a large body of prior work has focused on the integration of only visual and vestibular cues to predict orientation perception [20]–[23]. It has been possible to accurately predict orientation perception in many scenarios with only visual and vestibular cues [4], [6], [24]–[29]. Therefore, the current study quantifies motion perception during sudden transitions in the availability of visual cues, considering vestibular and visual cues only.
2. METHODS
Subjects were seated on a Rotochair, which rotates in Earth-vertical yaw motion only. Subjects wore a virtual reality head
mounted display (HMD), specifically an HTC Vive Pro with
Wireless Tracking. The experimental hardware is shown in
Figure 1. The Rotochair was set to rumble with a white noise
rumble profile in order to help mask potential tactile cueing.
Similarly, subjects heard auditory white noise via the HTC
Vive Pro’s inbuilt headphones to reduce external auditory
cues. The experiment occurred in a darkroom in order to mitigate light leak, which is a known limitation of the HTC Vive Pro HMD.
Figure 1: Model (not an actual subject) seated in the Rotochair wearing the HTC Vive Pro with Wireless Tracking HMD. The model holds the two HTC Vive controllers used for the button press task; a head restraint limits head movement during testing. Testing occurred in a darkroom; the lights are on (and the operator's chamber door open) for the purposes of taking the photograph.
Psychophysical Task
Subjects were tasked with pressing a button every time they
felt they had rotated 90 degrees in one direction. Subjects had
a controller (with button) in each hand and could thus also
indicate in which direction they felt they had rotated. If they
felt they had rotated 90 degrees to the right (left) after the last
button press, they were instructed to press the button in their
right (left) hand. This task has been used successfully in prior literature [30], [31]. In order to report
perceptions of starting or stopping rotation, subjects were
instructed to hold down the triggers on the back of the two
controllers when stationary and then to release the triggers
once they felt they were in (angular) motion.
Motion Profiles and Visual Cues
We limited motion to Earth-vertical yaw rotation in order to
isolate one sensory processing pathway. Subjects' heads were
fixed in place with a head restraint (as shown in Figure 1)
such that they were always facing the same direction as the
chair. Earth-vertical yaw motion primarily stimulates the
semicircular canals. We used several motion profiles and
ultimately present data from two motion profiles (shown in
Figure 2).
The motion profiles contained different accelerations rotating
to the left and to the right. Each subject (who completed testing) experienced each motion profile 4 times, each time
with different visual cues. The order in which each subject
experienced each trial was randomized prior to the start of
testing.
Figure 2: Plots to show the two motion profiles used in the
present study. Top: a motion profile designed to trigger a
decay in perception during the no visual cues condition.
Bottom: a motion profile designed to be unpredictable and bidirectional, requiring subjects to constantly consider their perception of motion (as opposed to falling into a pattern of pressing the button at a consistent rate).
The four different visual cue conditions were as follows:
1. Optical flow delivered by the head mounted display
for the duration of the motion profile
2. No visual cues at all (HMD displaying black)
3. Optical flow for the initial part of the motion and
none for the remainder
4. No visual cues for the initial part of the motion
(HMD displaying black) and optical flow for the
remainder
Optical flow was delivered in the form of dots on the inside of a virtual sphere within which the subject rotated (shown in Figure 3). The dots moved across the HMD in the opposite
direction that the subject was rotating in order to generate a
sensation of motion. The visual cues delivered were always
congruent with the physical rotation of the subject, as our
goal was not to confuse subjects with erroneous visual
information. We chose to use a large number of identical,
small dots in order to stimulate sensation of angular velocity
only (specifically, we did not provide any angular position
cues).
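To illustrate how such a velocity-only stimulus can be generated, below is a minimal sketch (Python/NumPy; all function and variable names are ours for illustration, not the study's actual rendering code). Identical dots uniformly distributed on a sphere are counter-rotated about the vertical axis, so the subject sees optical flow consistent with their own yaw rotation but no usable position landmarks.

```python
import numpy as np

def random_sphere_dots(n_dots: int, radius: float = 5.0, seed: int = 0) -> np.ndarray:
    """Place identical dots uniformly on the inside of a sphere.

    Identical, featureless dots provide angular velocity cues (optical flow)
    but no landmarks that could serve as angular position cues.
    """
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n_dots, 3))               # isotropic directions ...
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # ... normalized onto the sphere
    return radius * v

def update_dots(dots: np.ndarray, chair_yaw_rate_dps: float, dt: float) -> np.ndarray:
    """Rotate the dot field about the vertical (z) axis, opposite the chair.

    Rotating the world by -omega*dt is visually equivalent to the subject
    rotating by +omega*dt, keeping the flow congruent with inertial rotation.
    """
    theta = np.deg2rad(-chair_yaw_rate_dps * dt)
    c, s = np.cos(theta), np.sin(theta)
    rot_z = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
    return dots @ rot_z.T
```

Called once per frame with the commanded chair velocity, this yields a dot field whose only informative signal is its angular velocity.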
Figure 3: View inside HMD while visual cues were being
delivered.
Experimental Procedure
This research complied with the American Psychological
Association Code of Ethics and was approved by the
Institutional Review Board at The University of Colorado
Boulder under protocol #19-002. Informed consent was
obtained from each participant. Eleven unique subjects
participated in the study, but not all subjects completed
testing. Subjects were between the ages of 18 and 40 years
because age over 40 is associated with reduced vestibular
functioning [32], [33]. Subjects self-reported no known
history of vestibular dysfunction and self-reported 20/20 vision (corrective contact lenses were accepted, glasses were
not due to the physical constraints of the HMD). Subjects also
completed the motion sickness susceptibility questionnaire and would have been screened out had they scored above the 90th percentile. No subjects were screened out as a result of
their motion sickness susceptibility score. However, two
subjects did not complete testing as a result of cybersickness.
Subjects completed 4 practice trials prior to the beginning of
testing to become familiar with the visual scene and practice
performing the psychophysical task. Subjects then completed
up to 24 test trials (not all data is presented here). Subjects
were asked for their sleepiness level and if they were feeling
motion sick after each trial. If a subject reported feeling
motion sick for 3 trials in a row, their testing session ended.
Subjects were able to take a break (and remove the HMD) at
any point between trials and most did so at least once during
testing.
In order to account for potential differences in perception due
to rotation direction, positive angular velocity was randomly
assigned to represent either clockwise or anticlockwise
rotation. The random assignment of positive angular velocity
representing motion in the clockwise or anticlockwise
direction was done per motion profile (and not per subject or
per trial). This means that each time a subject performed the
task for the same motion profile (with different visual cue
conditions), they rotated in the same direction. However, a
subject may have had positive angular velocity assignment to
anticlockwise for all trials with one motion profile but
positive angular velocity assignment to clockwise for all the
trials with another motion profile.
3. DATA PROCESSING
Figure 4: Example plot showing each individual subject's
perception of angular motion (6 subjects). In green is the
average perception at each instant in time, with light
green as the standard error bounds.
In order to infer perception of angular velocity, we calculated
the time between each button press (or between trigger release, signaling the perceived start of the motion, and the following button press). We divided 90 degrees by
this time to obtain the average angular velocity perception in
degrees per second, over the time period between successive
button presses. We then averaged angular velocity perception
at each instant in time across all subject reports per motion
profile per visual cue condition as shown in Figure 4. A two-
second smoothing filter (Gaussian) was applied to the
resulting average perception. Each visual cue condition per
motion profile had at least six subjects (although we tested 11
unique subjects in total) because not all study participants
completed the full course of testing.
Approximately half of our data had positive angular velocity
associated with clockwise motion and the other half with
anticlockwise motion. In order to collate data where the
assignment of positive angular velocity was not necessarily
in the same direction, we multiplied inferred angular velocity
by -1 where positive angular velocity had been assigned
anticlockwise motion. Qualitatively, we did not see any
differences in subject responses associated with which
direction (clockwise vs anticlockwise) they had completed
the task in.
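As a concrete sketch of this pipeline (Python/NumPy with SciPy; names are illustrative, not the study's actual analysis code), the steps are: convert inter-press intervals to signed average velocities, flip trials where positive angular velocity was assigned anticlockwise, then average across subjects and apply the two-second Gaussian smoothing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def perceived_velocity(report_times_s, directions, t_grid_s):
    """Infer perceived yaw velocity (deg/s) from the 90-degree press task.

    report_times_s: sorted event times; the first is the trigger release
    (perceived motion onset), the rest are button presses.
    directions: +1 (right-hand press) or -1 (left-hand press) per press.
    """
    omega = np.full(t_grid_s.shape, np.nan)
    for t0, t1, d in zip(report_times_s[:-1], report_times_s[1:], directions):
        # 90 degrees perceived over (t1 - t0) seconds, signed by direction.
        omega[(t_grid_s >= t0) & (t_grid_s < t1)] = d * 90.0 / (t1 - t0)
    return omega

def pooled_perception(per_subject, clockwise_positive, dt_s, smooth_s=2.0):
    """Collate direction assignments, average across subjects, then smooth."""
    # Flip trials where positive angular velocity meant anticlockwise motion.
    signed = [w if cw else -w for w, cw in zip(per_subject, clockwise_positive)]
    mean = np.nanmean(np.vstack(signed), axis=0)  # instant-by-instant average
    # Two-second Gaussian filter (assumes reports span the full time grid).
    return gaussian_filter1d(mean, sigma=smooth_s / dt_s)
```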
4. RESULTS
Figure 5: Angular velocity and angular velocity
perception for the two control conditions (visual cues
throughout and no visual cues throughout) on top of
angular velocity perception during test condition (green).
6 subjects. The shaded part of graph indicates where no
visual cues were provided during the test condition. The
light part of the graph shows where visual cues were
provided during the test condition. The sudden transition
happens just before 50 seconds into the motion. Before the
transition, the test condition (green) tracks the with-visual-cues control condition (light yellow). Beyond the
sudden transition, we see the test condition (green) slowly
approach the perception of the no visual cue control
condition (dark blue) over the course of about 30 seconds.
As expected, motion perception in each profile consistently follows the corresponding control condition on either side of the visual transition. For example, if a test
condition started with visual cues and ended without them,
perception begins by following the “with visual cues”
perceptual pattern and ends by following the “without visual
cues” perceptual pattern. Our data quantifies the time taken
to transition from one perceptual pattern to another. The
length of the transition period differs between the with-to-without and without-to-with visual cue conditions.
Figure 5 shows what subjects thought their motion was, on
average across one test condition in green (with to without
visual cues) and both control conditions in yellow and blue
(with visual cues the entire time and without visual cues the
entire time respectively). During the visual cues to no visual
cue transition shown in Figure 5, it takes around 30 seconds
for the control condition (green) to shift from tracking the
with visual cues (yellow) to the without visual cues (blue)
control conditions. This is longer than the transition period
(or “perceptual decay”) [25] we would expect had visual cues
never been present in the first place.
During the no visual cues to visual cues transitions, we see an
approximately 10 second delay before motion perception moves from within the error bounds of the initial condition (no visual cues) to within those of the latter condition (visual cues). Based on angular velocities of 40–60 degrees per second across visual
transitions and one click every 90 degrees of perceived
rotation, we anticipated less than a 2.5 second delay as a
limitation of the psychophysical task. We note that 10
seconds is substantially longer than the 2.5 second delay we
might expect as a result of the psychophysical task.
Therefore, subjects' perception does not undergo a
discontinuous transition when accurate visual cues are
suddenly gained during a period of spatial disorientation.
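For concreteness, the task-resolution bound quoted above follows directly from the click spacing: at the slowest rotation speed spanning the transitions (40 degrees per second), one perceived 90-degree interval lasts

\[
\Delta t_{\max} = \frac{90^\circ}{40^\circ/\mathrm{s}} = 2.25\ \mathrm{s},
\]

so the task cannot resolve perceptual changes much finer than roughly 2.5 seconds, and the observed 10-second delay is about four times this measurement floor.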
Not all data that we collected is presented in this paper. We also collected data on several highly predictable motion profiles. These motion profiles (one particular profile is
shown in Figure 7) did not result in spatial disorientation
during the no visual cues control condition. This means that
there was no difference between subjects' orientation
perception in the two control conditions. The test condition
could not, therefore, transition between the two. We defined
no substantial difference in perception to be where average
perception of one condition remained within the error bounds
of another condition. For example, in Figure 7, the yellow
line (with visual cues control condition average) stays within
the bounds of the light blue shaded area (error bounds of the
no visual cues control condition). If a motion profile produces
no differences in perception between conditions preceding
and following the visual transition, we cannot expect a
transition in orientation perception for that motion.
Therefore, data where the control condition averages were
within the other’s error bounds were not used for this
analysis.
Figure 6: Two motion profiles and corresponding human
subject data collected (10 and 6 subjects respectively).
Both top and bottom are plots of the suddenly gaining
visual cues scenario. The top plot clearly shows the 10
second delay between gaining visual cues and orientation
perception matching orientation perception where the
visual cues had been available the whole time (the green
line takes 10 seconds to match the yellow line).
5. DISCUSSION
Based on the data from the present study, perception following a sudden gain of visual cues does not change instantaneously; we see a more gradual convergence towards true motion. Similarly, orientation perception does not decay as fast as we would expect [20], [25], [34] when visual cues have previously been present, indicating that past visual
orientation.
Existing models of orientation perception are not robust to sudden changes in visual cues (as far as we are aware). Future
work will focus on integrating our results into existing
models of motion perception which do not currently
accurately predict orientation perception during sudden
visual transitions. Sensory-conflict-based models of
orientation perception [20], [25], [35], [36] may be updated
with the addition of low pass filtering in order to reconcile
the sudden changes in sensory information (large error terms
within the model). Dynamic reweighting of sensory cues may
also be necessary for accurate modelling of perception when
unexpected cues become available [37], [38].
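As a minimal sketch of the proposed update (assuming first-order dynamics and an illustrative time constant; this is not a fitted model from the present study), low pass filtering of the visual angular velocity conflict term might look like:

```python
import numpy as np

def low_pass_conflict(conflict: np.ndarray, dt_s: float, tau_s: float) -> np.ndarray:
    """First-order low-pass filter of a sensory conflict signal.

    In an observer-style model, the visual angular velocity conflict (visual
    measurement minus the internal model's expected measurement) spikes when
    visual cues suddenly appear or vanish. Filtering that conflict with a
    long time constant tau_s makes the resulting perception shift gradually
    rather than stepwise.
    """
    alpha = dt_s / (tau_s + dt_s)  # discrete-time smoothing factor
    filtered = np.zeros_like(conflict, dtype=float)
    for k in range(1, len(conflict)):
        filtered[k] = filtered[k - 1] + alpha * (conflict[k] - filtered[k - 1])
    return filtered
```

Time constants on the order of the transitions measured here (roughly 10 seconds for gaining cues and 30 seconds for losing them) could reproduce the gradual convergence we observed, though fitting such parameters is left to future work.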
Figure 7: One of the highly consistent motion profiles we
collected data for (9 subjects). There is no substantial
difference in perception between the two control or test
conditions. We, therefore, do not see (and neither do we
expect to see) a change in perception following a visual
cue availability transition. We believe that there is no
perceptual decay in the no visual cue control condition
because the motion profile was too predictable and
subjects were simply pressing the button at a consistent
rate.
A limitation of our work, however, is that there is substantial
variation between subjects when visual cues are not
presented. As shown in Figure 5 and Figure 6, the
standard error tends to enlarge after visual cues are lost and
is larger when visual cues are never present, particularly for
consistent motion profiles. Further, we note that we have not
presented all of our experimental data: several of our motion
profiles appeared too predictable for our subjects.
Figure 7 shows a consistent motion profile where we would
expect to see signal decay when motion is perceived with
vestibular cues only. However, we note that our subjects were
able to fairly accurately continue to report a sensation of
motion. We hypothesize that unidirectional and consistent
motion profiles became predictable to subjects over the
course of their two-hour testing session. If a motion profile
was too consistent, we believe that subjects simply started
pressing the button (to indicate perception) at a constant rate
rather than at the rate they felt they were spinning. Notably,
during the highly variable and bidirectional motion profile
(see Figure 6, bottom), we do not see such predictive
behavior. Instead, we see evidence of velocity storage [39],
[40] towards the end of the profile. By the end of the
“multistep 3” motion profile (without visual cues) subjects
reported that they felt they were rotating in the opposite
direction from their true motion (shown in Figure 5 and
Figure 6). An additional limitation is the use of VR to deliver
visual cues. VR is an extremely powerful tool but is
associated with reduced vestibulo-ocular reflex (VOR) gain
for a given amount of rotational optical flow [41]. While VR
can deliver interpretable motion cues, altered VOR responses
suggest the VR cues may be processed by the brain
differently than naturalistic visual cues.
We have measured orientation perception during a transition
in the availability of visual cues, a scenario that occurs regularly in flight but has not been previously studied. We have
shown that the sudden gain of visual cues during a period of
misperception of rotation in the dark does not lead to
immediate correction of orientation perception. Similarly, we
have shown that, after suddenly losing visual cues, it takes many seconds before rotation perception converges to that observed when visual cues are unavailable throughout.
Our results are applicable to both space and aircraft piloting
tasks. Flying into and out of clouds is a common occurrence
in aviation and is associated with spatial disorientation.
Further, future space missions to other celestial bodies (such
as the Moon or Mars) will likely require human pilots who
may lose visual references from dust blowback as they are
landing. During flight (both air and space), transitions in the
presence of visual cues may also occur if the pilot is not
looking through the canopy and is instead directing their gaze
towards a distraction inside the cockpit. Our results will allow
for more robust design of flight operations or spatial
disorientation mitigation procedures. This is because we are
now aware of how long it takes the pilot’s perception to
change following restoration or sudden loss of visual cues.
6. CONTRIBUTIONS
We are the first to quantify motion perception in humans
during a sudden transition in the availability of visual cues.
By using motion profiles that result in spatial disorientation,
we have captured motion perception in both sudden gain of
visual cues and sudden loss of visual cues. Our dataset
indicates that there may be some processing of visual
information when visual cues are suddenly gained because
the perception changes gradually, even after accounting for
the physical limitations of the psychophysical task.
Perception during the opposite transition (suddenly losing
visual cues) slowly deviates from reality, indicating that there
may be sustained influence of recent visual information, even
after it is no longer present.
ACKNOWLEDGEMENTS
This work is supported by the Office of Naval Research under a Multidisciplinary University Research Initiative (MURI). PI: Daniel Merfeld.
REFERENCES
[1] A. J. Benson, "Spatial disorientation – general aspects," in Aviation Medicine, J. Ernsting, A. N. Nicholson, and D. J. Rainford, Eds. Oxford, England, UK: Butterworth, 1978.
[2] D. M. Merfeld, "Vestibular Sensation," in Sensation and Perception. Sunderland, MA: Sinauer Associates, Inc., 2017.
[3] F. E. Guedry, "Psychophysics of Vestibular Sensation," in Vestibular System Part 2: Psychophysics, Applied Aspects and General Interpretations, vol. 6/2, H. H. Kornhuber, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 1974, pp. 3–154. doi: 10.1007/978-3-642-65920-1_1.
[4] T. N. Clark, "Systematizing Global and Regional Creativity," in Handbook of Science and Technology Convergence, W. S. Bainbridge and M. C. Roco, Eds. Cham: Springer International Publishing, 2015, pp. 1–13. doi: 10.1007/978-3-319-04033-2_77-1.
[5] A. Tribukait, A. Ström, E. Bergsten, and O. Eiken, "Vestibular Stimulus and Perceived Roll Tilt During Coordinated Turns in Aircraft and Gondola Centrifuge," Aerosp. Med. Hum. Perform., vol. 87, no. 5, pp. 454–463, May 2016, doi: 10.3357/AMHP.4491.2016.
[6] T. K. Clark, "Effects of Spaceflight on the Vestibular System," in Handbook of Space Pharmaceuticals, Y. Pathak, M. Araújo dos Santos, and L. Zea, Eds. Cham: Springer International Publishing, 2019, pp. 1–39. doi: 10.1007/978-3-319-50909-9_2-1.
[7] G. Clément, P. Denise, M. F. Reschke, and S. J. Wood, "Human ocular counter-rolling and roll tilt perception during off-vertical axis rotation after spaceflight," J. Vestib. Res. Equilib. Orientat., vol. 17, no. 5–6, pp. 209–215, 2007.
[8] K. N. de Winkel, G. Clément, E. L. Groen, and P. J. Werkhoven, "The perception of verticality in lunar and Martian gravity conditions," Neurosci. Lett., vol. 529, no. 1, pp. 7–11, Oct. 2012, doi: 10.1016/j.neulet.2012.09.026.
[9] C. Oman, Spatial Orientation and Navigation in Microgravity. Boston, MA: Springer, 2007.
[10] A. Bellenkes, R. Bason, and D. W. Yacavone, "Spatial disorientation in naval aviation mishaps: a review of class A incidents from 1980 through 1989," Aviat. Space Environ. Med., vol. 63, no. 2, pp. 128–131, Feb. 1992.
[11] L. R. Young, K. H. Sienko, L. E. Lyne, H. Hecht, and A. Natapoff, "Adaptation of the vestibulo-ocular reflex, subjective tilt, and motion sickness to head movements during short-radius centrifugation," J. Vestib. Res. Equilib. Orientat., vol. 13, no. 2–3, pp. 65–77, 2003.
[12] NASA, "Human Vestibular System in Space." 2004. Accessed: Nov. 22, 2021. [Online]. Available: https://www.nasa.gov/audience/forstudents/9-12/features/F_Human_Vestibular_System_in_Space.html
[13] M. Braithwaite, S. Groh, and E. Alvarez, "Spatial Disorientation in U.S. Army Helicopter Accidents: An Update of the 1987-92 Survey to Include 1993-95." DTIC, 1997. [Online]. Available: https://apps.dtic.mil/sti/citations/ADA323898
[14] B. Cheung, K. Money, H. Wright, and W. Bateman, "Spatial disorientation-implicated accidents in Canadian forces, 1982-92," Aviat. Space Environ. Med., vol. 66, no. 6, pp. 579–585, Jun. 1995.
[15] M. Cohen, "Disorienting effects of aircraft catapult launchings: III. Cockpit displays and piloting performance," Aviat. Space Environ. Med., vol. 48, no. 9, pp. 797–804, Sep. 1977.
[16] J. B. Dixon, C. A. Etgen, D. S. Horning, T. K. Clark, and R. V. Folga, "Integration of a Vestibular Model for the Disorientation Research Device Motion Algorithm Application," Aerosp. Med. Hum. Perform., vol. 90, no. 10, pp. 901–907, Oct. 2019, doi: 10.3357/AMHP.5416.2019.
[17] M. J. Dai, I. S. Curthoys, and G. M. Halmagyi, "A model of otolith stimulation," Biol. Cybern., vol. 60, no. 3, Jan. 1989, doi: 10.1007/BF00207286.
[18] C. R. Fetsch, A. Pouget, G. C. DeAngelis, and D. E. Angelaki, "Neural correlates of reliability-based cue weighting during multisensory integration," Nat. Neurosci., vol. 15, no. 1, pp. 146–154, Jan. 2012, doi: 10.1038/nn.2983.
[19] F. Karmali, K. Lim, and D. M. Merfeld, "Visual and vestibular perceptual thresholds each demonstrate better precision at specific frequencies and also exhibit optimal integration," J. Neurophysiol., vol. 111, no. 12, pp. 2393–2403, Jun. 2014, doi: 10.1152/jn.00332.2013.
[20] M. C. Newman, "A multisensory observer model for human spatial orientation perception," Massachusetts Institute of Technology, Cambridge, MA, 2009. [Online]. Available: http://hdl.handle.net/1721.1/51636
[21] K. E. Cullen, "The neural encoding of self-motion," Curr. Opin. Neurobiol., vol. 21, no. 4, pp. 587–595, Aug. 2011, doi: 10.1016/j.conb.2011.05.022.
[22] J. Borah and L. Young, "Spatial Orientation and Motion Cue Environment Study in the Total In-Flight Simulator." DTIC, 1983. [Online]. Available: https://apps.dtic.mil/sti/citations/ADA129391
[23] J. Borah, L. Young, and R. E. Curry, Sensory Mechanism Modeling. Air Force Human Resources Laboratory, Air Force Systems Command, 1977.
[24] D. Merfeld, "Rotation otolith tilt-translation reinterpretation (ROTTR) hypothesis: a new hypothesis to explain neurovestibular spaceflight adaptation," J. Vestib. Res., vol. 13, no. 4–6, pp. 309–320, 2003.
[25] D. Merfeld, L. Young, C. Oman, and M. Shelhamer, "A multidimensional model of the effect of gravity on the spatial orientation of the monkey," J. Vestib. Res. Equilib. Orientat., vol. 3, no. 2, pp. 141–161, 1993.
[26] L. H. Zupan and D. M. Merfeld, "Neural Processing of Gravito-Inertial Cues in Humans. IV. Influence of Visual Rotational Cues During Roll Optokinetic Stimuli," J. Neurophysiol., vol. 89, no. 1, pp. 390–400, Jan. 2003, doi: 10.1152/jn.00513.2001.
[27] C. R. Fetsch, G. C. DeAngelis, and D. E. Angelaki, "Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory," Eur. J. Neurosci., vol. 31, no. 10, pp. 1721–1729, May 2010, doi: 10.1111/j.1460-9568.2010.07207.x.
[28] J. Laurens and D. E. Angelaki, "How the Vestibulocerebellum Builds an Internal Model of Self-motion," in The Neuronal Codes of the Cerebellum, Elsevier, 2016, pp. 97–115. doi: 10.1016/B978-0-12-801386-1.00004-6.
[29] C. Oman, "A heuristic mathematical model for dynamics of sensory conflict and motion sickness," Acta Otolaryngol. Suppl., vol. 392, pp. 1–44, 1982.
[30] J. J. Groen and L. B. W. Jongkees, "The turning test with small regulable stimuli; the cupulogram obtained by subjective angle estimation," J. Laryngol. Otol., vol. 62, no. 4, pp. 236–240, Apr. 1948, doi: 10.1017/s0022215100008926.
[31] F. E. Guedry and L. S. Lauver, "Vestibular reactions during prolonged constant angular acceleration," J. Appl. Physiol., vol. 16, no. 2, pp. 215–220, Mar. 1961, doi: 10.1152/jappl.1961.16.2.215.
[32] M. C. Bermúdez Rey, T. K. Clark, and D. M. Merfeld, "Balance Screening of Vestibular Function in Subjects Aged 4 Years and Older: A Living Laboratory Experience," Front. Neurol., vol. 8, p. 631, Nov. 2017, doi: 10.3389/fneur.2017.00631.
[33] M. C. Bermúdez Rey, T. K. Clark, W. Wang, T. Leeder, Y. Bian, and D. M. Merfeld, "Vestibular Perceptual Thresholds Increase above the Age of 40," Front. Neurol., vol. 7, Oct. 2016, doi: 10.3389/fneur.2016.00162.
[34] J. Laurens, D. Straumann, and B. J. M. Hess, "Processing of Angular Motion and Gravity Information Through an Internal Model," J. Neurophysiol., vol. 104, no. 3, pp. 1370–1381, Sep. 2010, doi: 10.1152/jn.00143.2010.
[35] T. K. Clark, M. C. Newman, F. Karmali, C. M. Oman, and D. M. Merfeld, "Mathematical models for dynamic, multisensory spatial orientation perception," in Progress in Brain Research, vol. 248, Elsevier, 2019, pp. 65–90. doi: 10.1016/bs.pbr.2019.04.014.
[36] H. P. Williams, J. L. Voros, D. M. Merfeld, and T. K. Clark, "Extending the Observer Model for Human Orientation Perception to Include In-Flight Perceptual Thresholds," Naval Medical Research Unit Dayton, 2021. [Online]. Available: https://apps.dtic.mil/sti/pdfs/AD1124219.pdf
[37] J. X. Brooks and K. E. Cullen, "The Primate Cerebellum Selectively Encodes Unexpected Self-Motion," Curr. Biol., vol. 23, no. 11, pp. 947–955, Jun. 2013, doi: 10.1016/j.cub.2013.04.029.
[38] C. R. Fetsch, A. H. Turner, G. C. DeAngelis, and D. E. Angelaki, "Dynamic Reweighting of Visual and Vestibular Cues during Self-Motion Perception," J. Neurosci., vol. 29, no. 49, pp. 15601–15612, Dec. 2009, doi: 10.1523/JNEUROSCI.2574-09.2009.
[39] J. Laurens and D. E. Angelaki, "The functional significance of velocity storage and its dependence on gravity," Exp. Brain Res., vol. 210, no. 3–4, pp. 407–422, May 2011, doi: 10.1007/s00221-011-2568-4.
[40] Th. Raphan, V. Matsuo, and B. Cohen, "Velocity storage in the vestibulo-ocular reflex arc (VOR)," Exp. Brain Res., vol. 35, no. 2, Apr. 1979, doi: 10.1007/BF00236613.
[41] S. Di Girolamo et al., "Vestibulo-Ocular Reflex Modification after Virtual Environment Exposure," Acta Otolaryngol. (Stockh.), vol. 121, no. 2, pp. 211–215, Jan. 2001, doi: 10.1080/000164801300043541.
BIOGRAPHY
Jamie L Voros is affiliated with the Ann
& H.J. Smead Department of Aerospace
Engineering Sciences, University of
Colorado Boulder. Highest degrees
obtained: M. S. Aerospace Engineering
Sciences, 2020, University of Colorado
Boulder. M. S. Computer Science, 2022,
University of Colorado Boulder. Prior to the University of
Colorado Boulder, Jamie completed her B. S. at the
Massachusetts Institute of Technology.
Torin K Clark is currently affiliated with
the Ann & H.J. Smead Department of
Aerospace Engineering Sciences,
University of Colorado Boulder. Highest
degree obtained: Ph. D. Humans in
Aerospace, 2013, Massachusetts Institute
of Technology. Prior to the
Massachusetts Institute of Technology,
Torin completed his B. S. at the University of Colorado
Boulder.