Full-Body Haptic and Spatial Audio Cueing Algorithms for Augmented Pilot
Perception
Michael T. Morcos
Graduate Research Assistant
Spencer M. Fishman
Graduate Research Assistant
Umberto Saetti
Assistant Professor
Department of Aerospace Engineering
University of Maryland
College Park, MD 20740
Edward N. Bachelder
Senior Research Engineer
San Jose State University
Moffett Field, CA 94035
Martine Godfroy-Cooper
Independent Scholar
ABSTRACT
This paper illustrates the development, implementation, and testing of full-body haptic and spatial audio cueing al-
gorithms for augmented pilot perception. Cueing algorithms are developed for roll-axis compensatory tracking tasks
where the pilot acts on the displayed error between a desired input and the comparable vehicle output motion to pro-
duce a control action. The error is displayed to the pilot using multiple cueing modalities: visual, haptic, audio, and
combinations of these. For the visual and combined visual haptic/audio modalities, visual cues are also considered in
degraded visual environments (DVE). Full-body haptic and spatial audio algorithms that are based on a proportional-
derivative (PD) compensation strategy on the tracking error are found to provide satisfactory pilot vehicle system
(PVS) performance for the task in consideration in the absence of visual cueing, and to improve PVS performance in
DVE when used in combination with visual feedback. These results are consistent with previous studies on the use of
secondary perceptual cues for augmentation of human perception. The combination of these results indicates that the
use of secondary sensory cues such as full-body haptics and spatial audio to augment pilot perception can lead to
improved/partially-restored PVS performance when primary sensory cues like vision are impaired or denied.
INTRODUCTION
In symbiotic piloting, a human pilot shares perception and
control authority with an artificial intelligence (AI) agent to
control the motion of a vehicle, acting as a symbiotic organ-
ism (Fig. 1). Here, a vehicle is intended as any generic n-
Degrees Of Freedom (DOF) machine (e.g., airplanes and he-
licopters, drones, etc.) that can move across regions of phys-
ical space. The vehicle may physically host or be teleoper-
ated by a human pilot, and its dynamics are augmented by an
AI agent with human-like perception and actuation dynam-
ics. Current perception models for symbiotic human-machine
piloting of vehicles are based on the dominant visual (i.e.,
sight) and vestibular (i.e., equilibrium) cues, but neglect less
dominant cues, such as somatosensory (e.g., full-body hap-
tics) or auditory (e.g., 3D audio), and interactions between
primary and secondary sensory channels. As such, there is
limited understanding of shared human-machine perception
solutions for vehicle piloting when the symbiotic system oper-
ates with denied/impaired sensory channels (e.g., degraded vi-
sual environment), denied/impaired interactions between sen-
Presented at the VFS International 80th Annual Forum &
Technology Display, Montreal, Québec, Canada, May 7–9, 2024.
Copyright © 2024 by the Vertical Flight Society. All rights reserved.
Distribution Statement A. Approved for public release; distribution
is unlimited.
sory channels, or with primary sensory channels augmented
with traditionally secondary perception cues.
Denial or impairment of perceptual channels may arise from
environmental conditions (e.g., weather), factors limiting machine
abilities (e.g., sensor failure), or human pilot abilities
(e.g., temporary or permanent disabilities, fatigue, or
workload). Within the context of manned piloting of aircraft,
modeling of secondary sensory cues could help develop cue-
ing strategies to enable emergent Pilot-Vehicle System (PVS)
performance, or to mitigate pilot spatial disorientation,
which may be caused by false perception of one's own orientation
and relative movement, leading to loss of control and
Controlled Flight Into Terrain (CFIT). This is motivated by
principles of cognitive load theory, suggesting that the appro-
priate distribution of sensory information between different
channels can facilitate the skill level in performing a partic-
ular task (Ref. 1). Furthermore, information redundancy, in
particular when the information in the most precise modal-
ity, typically vision, is degraded, can lead to multisensory en-
hancement where the output of the combined sensory modali-
ties, for example visual plus auditory, exceeds that of the most
reliable input according to the principle of inverse effective-
ness (Refs. 2, 3).
A significant body of work exists on the use of multimodal
cueing for enhanced Situational Awareness (SA) (Refs. 4–10),
Figure 1: Symbiotic pilot-vehicle system (PVS).
pilot training (Refs. 11–19), and flight envelope protection
(Refs. 20–24). Concerning pilot training, the use of haptic
feedback (in the form of a force-feel pilot stick) as a tool for
skill training has shown benefits for the formation of motor-
memory and support for certain temporal tasks. However,
none of these studies investigated (i) full-body haptics, or (ii)
the use of haptics and spatialized (3D) audio cues for en-
hanced pilot-vehicle performance, handling qualities, or for
pilot modeling. Within the context of somatosensory cueing
as an assistive aid, previous studies have shown that cueing
the relative motion of a hovering helicopter with respect to
a target fixed in space through full-body tactile cueing is an
effective strategy to increase SA and handling qualities, and
decrease pilot workload in and out of Degraded Visual Envi-
ronment (DVE). In these studies, the relative linear and angu-
lar positions, velocities, and accelerations were cued to the pi-
lot through an array of tactors, and using tactor pulse patterns
with varying amplitude, frequency, and waveform to convey
the relevant information (Refs. 25–28).
Recently, full-body haptics in the form of Electrical Muscu-
lar Stimulation (EMS) were investigated for augmented pilot
perception in degraded/denied visual environments (Ref. 29).
Contrary to previous studies, these haptic cueing algorithms
are based on a continuous cueing strategy rather than on a
binning strategy. In a subsequent study, spatial audio cueing
algorithms were also investigated for augmented pilot percep-
tion (Ref. 30). The findings of these two studies were con-
sistent in that the use of both full-body haptics and spatial
audio cueing led to improved/partially-restored PVS perfor-
mance when primary sensory cues like vision are impaired or
denied. These results are consistent with the predictions made
by the multisensory response enhancement and inverse effec-
tiveness principles in the framework of Bayesian models of
multisensory integration (Refs. 31, 32).
The current paper summarizes and builds upon (Refs. 29,30),
which investigated haptic and spatial audio cueing separately,
to investigate the use of combined haptic and spatial audio
cueing for augmented pilot perception. Specific objectives of
this paper involve: (i) extending the crossover model (Ref. 33)
to full-body haptic and spatial audio cueing, (ii) identifying
the human pilot and PVS dynamics for these secondary sen-
sory channels, and (iii) comparing the combined haptic and
audio cueing modality to individual haptic and audio cue-
ing modalities and assessing any potential multimodal perfor-
mance enhancement.
The paper begins with a discussion of the overall methodology
adopted in this investigation, including an overview of pilot
modeling. This is followed by a description of the equipment
adopted for the experimental studies. Next, the development
and implementation of full-body haptic and spatial audio cue-
ing strategies that make use of proportional-derivative com-
pensation are described in detail. Discussions on experimen-
tal design and parametric identification from input-output data
follows. Results feature both time- and frequency-domain
analyses of the experimental data. Quantitative data from the
experiments is compared to qualitative data from pilot feed-
back. Final remarks summarize the overall findings of the
study and future developments are identified.
METHODOLOGY
Overview
The present effort focuses on closed-loop compensatory
tracking tasks in which the pilot acts on the displayed error
e between a desired command input i and the comparable vehicle
output motion m to produce a control action c. Historically,
information about the error e was presented to the pilot
through visual displays, examples of which are shown in Fig.
2 (Ref. 34). In Fig. 2a, the green bar with the inner and outer
reticles is the aircraft pitch attitude indicator m, whereas the
orange bar represents the commanded attitude i. This kind of
display is used for pursuit tracking tasks (Ref. 35). The difference
between the desired command input and the vehicle
output motion is the error that the pilot attempts to minimize
within the defined desired and adequate performance con-
straints. These constraints are given by the inner and outer ret-
icles, respectively. Figure 2b shows another kind of visual dis-
play used for compensatory tracking tasks referred to as bow
tie display. This particular display is used to cue roll and pitch
attitude errors simultaneously. For pitch-axis evaluations with
this display, the objective is to capture and hold the green dot
within the circular bonds for each commanded pitch attitude.
For roll-axis evaluations with the same display, the objective
is to capture and hold the green line within the bow tie bounds
for each commanded bank angle. The idea behind this re-
search is to replace and/or augment these visual displays with
full-body haptic and/or spatial audio displays. More specif-
ically, spatial audio displays simulate the binaural signals in
various acoustic environments and then create desired spatial
auditory events via headphone or loudspeaker presentations.
In the present experiment, headphones have been used to dis-
play the spatial auditory information with non-individualized
Head-Related Transfer Functions (HRTFs) provided by the
slab3D software (Ref. 36).
(a) Pitch axis pursuit tracking display.
(b) Bow tie display for roll and pitch compensatory tracking tasks.
Figure 2: Visual displays.
Pilot Vehicle System Modeling
As the base for the prediction of PVS performance and iden-
tification of the PVS dynamics, a pilot model structure needs
to be specified. Due to the preliminary nature of the investiga-
tion, a crossover model (Ref. 37) is assumed as it constitutes
a simple yet powerful way to represent the combined PVS dy-
namics. The PVS dynamics in a compensatory tracking task
is shown qualitatively in Fig. 3.
In this study, the vehicle and display dynamics are all com-
bined into the controlled element with a transfer function
Yc(s). The portion of the pilot’s control action linearly cor-
related with the input is represented by the quasi-linear de-
scribing function Yp(s), which also includes the effects of the
manipulator (human arm) characteristics. In a compensatory
control task, it was shown through extensive research (see,
e.g., Ref. 33) that the human pilot adapts his or her dynamics
so that near the crossover frequency ωc (i.e., where |YpYc| = 1, or
0 dB) the open-loop dynamics are given by:

YOL(s) = YpYc = (ωc/s) e^(−sτe)    (1)

where τe is the effective time delay, including transport delays
and neuromuscular lags. It is worth noting that the crossover
frequency is equivalent to the loop gain and accounts for the
pilot's adaptive compensation for the controlled element gain.
The aircraft dynamics is assumed to have the following general
form:

Yc(s) = Kc / [s(TIs + 1)]    (2)

and is representative of the roll attitude response to the pilot
lateral stick. Note that any delays in the aircraft dynamics are
omitted here, as the aircraft response is assumed ideal. The
simplest pilot describing function form for this particular aircraft
dynamics, which corresponds to the open-loop crossover
model, is:
Yp(s) = Kp(TLs + 1) e^(−sτe)    (3)

where Kp is the pilot static gain and TL is the pilot lead time
constant. It is worth mentioning that (Ref. 33) suggests the
pilot lead time constant to be approximately equal to the lag time
constant TI of the aircraft dynamics. This way, the pilot can adjust
his or her gain Kp to place the crossover frequency where
required to complete the task. Thus, given known aircraft dynamics,
the human pilot parameters to be identified for each
cueing modality are Kp and τe. Moreover, particular focus is
placed on estimating and comparing ωc for each cueing
modality.
If a sensory modality and/or vehicle configuration yields a
high crossover frequency, i.e., ωc > 4 rad/s (high performance),
the pilot's neuromuscular mode will start to influence
the open-loop response, such that the latter can become much
flatter than what the crossover model predicts. In this case, the
modified pilot structural model proposed in (Ref. 38) should
be used, which accounts for the neuromuscular dynamics.
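As a numerical sanity check on the crossover model, the following sketch (illustrative only; the crossover frequency and delay values are assumed for the example, not identified from the experiments) evaluates YOL(jω) from Eq. (1) and confirms that its magnitude is exactly 0 dB at ω = ωc, with a phase margin of 90 deg minus ωcτe expressed in degrees.

```python
import cmath
import math

def Y_OL(w, wc, tau_e):
    """Open-loop crossover model of Eq. (1): Y_OL(s) = (wc/s) exp(-s*tau_e), s = jw."""
    s = 1j * w
    return (wc / s) * cmath.exp(-s * tau_e)

wc, tau_e = 2.0, 0.3  # assumed crossover frequency [rad/s] and effective delay [s]

# Magnitude at the crossover frequency is unity (0 dB) by construction.
mag_at_wc = abs(Y_OL(wc, wc, tau_e))

# Phase margin: 180 deg plus the open-loop phase at crossover.
pm_deg = 180.0 + math.degrees(cmath.phase(Y_OL(wc, wc, tau_e)))

# Analytically, the margin is 90 deg minus wc*tau_e (converted to degrees).
pm_analytic = 90.0 - math.degrees(wc * tau_e)
```

The delay term only rotates phase, so increasing τe erodes the phase margin without moving the crossover frequency, which is why the effective delay is a key identified parameter.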
EQUIPMENT
Full-Body Haptics
Full-body haptic feedback is achieved through full-body haptic
suits (TESLASUITs), shown in Fig. 4a. These suits provide
a number of functions: (i) full-body haptics via localized
electrical muscle stimulation (EMS) achieved through
Figure 3: Pilot vehicle system in a compensatory tracking task.
80 EMS channels and 114 electrodes that can operate at frequencies
of 1–150 Hz and currents of 1–60 mA, (ii) biometry
through heart rate (HR), arterial oxygen saturation (SpO2),
and pulse rate variability (PRV) monitoring, and (iii) full-body
motion tracking through 6/9-axis Inertial Measurement
Units (IMUs) and 10 internal motion capture sensors
operating at a sampling rate of 100 Hz. The distribution of
the electrodes on the suit is anatomically informed such that
each major muscle has an electrode (Fig. 4b). EMS is con-
trolled through frequency, amplitude, and pulse width inputs.
Note that the electrode pulse width is defined as the sequence
of turning the electrode on and off, which is different from the
carrier frequency (i.e., the frequency of the electrode when the
electrode is on).
(a) TESLASUIT. (b) Electrodes map.
Figure 4: Full-body haptic suit.
Spatial Audio
Spatial audio cueing is achieved through a closed-ear gaming
headset (Logitech Pro-G 50 mm), shown in Fig. 5. This head-
set is equipped with a hybrid mesh PRO-G driver measuring
1.97 inches (50 mm), ensuring good sound quality across a
frequency range of 20 Hz to 20 kHz (human perceptible range
for sounds).
Control Inceptor
The control inceptor used for the tracking task is a Logitech X52
Pro joystick, shown in Fig. 6. The joystick has three degrees
of freedom: left/right, fore/aft, and twist. The joystick adopts
a base spring with constant stiffness.
Figure 5: Logitech Pro Gaming Headset
(a) Front. (b) Side.
Figure 6: Logitech X52 Pro joystick.
Foggles
To simulate DVE, commercial off-the-shelf glass lenses were
used that combine two filters: low-contrast and blurred-glare
(cataract) effects (Fig. 7a). The simulated pilot vision in DVE
is shown in Fig. 7b.
MULTIMODAL CUEING
Haptic and audio cueing strategies are based on those of
(Refs. 29, 30). The general form of these strategies is a
proportional-derivative (PD) compensation on the roll attitude
tracking error:
M = KPeφ + KD ėφ + b    (4)

where M is a parameter representing the cued error, eφ is the
roll attitude tracking error, and b is a bias. The attitude tracking
error is defined as the difference between the commanded
and cued roll angles, eφ = φcmd − φdis. Within the context of
(a) Lenses.
(b) Simulated DVE.
Figure 7: Foggles.
Fig. 3, the cued error e corresponds to eφ, the desired command
input i is φcmd, and the vehicle output motion m is represented
by φ. Note that in this compensatory tracking task
φcmd = 0, such that effectively eφ = −φdis. The cued bank angle
is given by the difference between the actual bank attitude
and some disturbance, such that φdis = φ − φSOS. The disturbance
is given by a sum-of-sines (SOS) signal and will be
discussed later in the paper. Additionally, KP and KD are the
proportional and derivative gains, respectively.
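The cueing law of Eq. (4) can be sketched as follows (a minimal illustration; the function name and signature are hypothetical, not the authors' implementation, and the error rate is passed in directly rather than differentiated):

```python
def cued_error(phi, phi_sos, kp, kd, edot_phi, b=0.0, phi_cmd=0.0):
    """PD compensation on the roll-attitude tracking error, Eq. (4):
    M = KP*e_phi + KD*edot_phi + b, with e_phi = phi_cmd - phi_dis
    and phi_dis = phi - phi_SOS (the cued bank angle)."""
    phi_dis = phi - phi_sos
    e_phi = phi_cmd - phi_dis
    return kp * e_phi + kd * edot_phi + b

# Worked example from the paper: phi_dis = 5 deg, zero error rate,
# KP = 9 %/deg  ->  M = -45 %.
M = cued_error(phi=5.0, phi_sos=0.0, kp=9.0, kd=4.0, edot_phi=0.0)
```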
Haptic Cueing Algorithms
Continuous cueing of the roll attitude tracking error and/or
its time derivative was found to be an effective haptic cueing
strategy. The general form of this strategy is a proportional-
derivative (PD) compensation on the roll attitude tracking er-
ror:
POW = |KPeφ + KD ėφ| + POWmin    (5)

where POW is the TESLASUIT power and POWmin is an arbitrary
power offset. It is worth noting that the power is proportional
to the EMS channel amplitude and can be adjusted
for each subject based on their perception of EMS. Minimum
and maximum power are calibrated for each subject such that
minimum power corresponds to a barely perceivable stimulus,
and maximum power corresponds to the maximum stimulus
that is perceived by the pilot as not annoying and does not activate
muscle contraction. The proportional gain is tuned based on
the error bounds for Adequate performance (discussed later in
the paper), such that:
KP = (POWmax − POWmin) / eφAdq    (6)

where POWmax = 100% and POWmin = 30%. This way, the
TESLASUIT power will reach 100% for eφ = eφAdq. The
derivative gain is tuned by trial and error. An integral gain
was considered in the early stages of development of the al-
gorithm, but was eventually discarded as it did not appear to
yield PVS performance improvements, nor seemed to be liked
by the test subjects. The location and number of the electrodes
to be activated depend on the sign of KPeφ + KD ėφ
and on the magnitude of POW. If KPeφ + KD ėφ > 0 and
|KPeφ + KD ėφ| + POWmin ≤ 100%, then a single electrode
on the right shoulder is activated (Fig. 8a). In the case where
|KPeφ + KD ėφ| + POWmin > 100%, a second tactor on
the upper right arm is activated in addition to that on the
right shoulder (Fig. 8b). Vice versa, if KPeφ + KD ėφ < 0 and
|KPeφ + KD ėφ| + POWmin ≤ 100%, then a single electrode on
the left shoulder is activated (Fig. 8c). A second electrode on the
upper left arm is activated in case |KPeφ + KD ėφ| + POWmin >
100% (Fig. 8d). Based on this strategy, the pilot will perceive a
haptic feedback in the direction of the error or, equivalently,
in the direction of the control action to be taken to reduce the
error. For instance, assume the actual roll attitude to be φ > 0
(right wing down) and the commanded roll attitude to be wings
level, i.e., φcmd = 0. Additionally, assume that ėφ = 0. Then,
eφ < 0, which activates the electrodes on the left side of the
upper body. As such, the pilot feels they should make a corrective
action to the left to bring the aircraft back to level. This
cueing strategy is summarized in Table 1.
It is worth noting that this cueing strategy differs from that
in (Refs. 25–28) in that these algorithms: (i) were based on
cueing of translational position, velocity, and acceleration; (ii)
cued position, velocity, and acceleration separately; and (iii)
adopted a binning, non-continuous strategy to cue the magni-
tude of position, velocity, and acceleration.
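A minimal sketch of the haptic cueing logic of Eqs. (5)–(6) and Table 1 follows. The function name, the returned data structure, and the clamping of power to 100% in the saturated case are illustrative assumptions; gains match the values used in this study.

```python
def haptic_cue(e_phi, edot_phi, e_adq=10.0, pow_min=30.0, pow_max=100.0, kd=5.0):
    """Haptic cueing per Eqs. (5)-(6) and Table 1 (illustrative sketch).
    Returns (list of electrode locations, commanded power in %)."""
    kp = (pow_max - pow_min) / e_adq        # Eq. (6): 7 %/deg for these bounds
    m = kp * e_phi + kd * edot_phi          # PD-compensated error
    pow_cmd = abs(m) + pow_min              # Eq. (5)
    side = "right" if m > 0 else "left"     # sign picks the cueing side
    if pow_cmd <= 100.0:
        return [f"{side} shoulder"], pow_cmd
    # Saturated: add the upper-arm electrode, power held at 100 %.
    return [f"{side} shoulder", f"upper {side} arm"], 100.0

# e_phi = 5 deg, zero rate: 7*5 + 30 = 65 % on the right shoulder.
locs_a, pow_a = haptic_cue(e_phi=5.0, edot_phi=0.0)

# Large negative error saturates the channel and recruits the upper left arm.
locs_b, pow_b = haptic_cue(e_phi=-12.0, edot_phi=0.0)
```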
Spatial Audio Cueing Algorithms
A hybrid continuous-binning cueing of the roll attitude track-
ing error and/or its time derivative is found to be an effective
audio cueing strategy. The general form of this strategy is a
proportional-derivative (PD) compensation on the roll attitude
tracking error:
M = KPeφ + KD ėφ    (7)

It is worth noting that the audio gain is adjusted for each subject
based on their perception of sound. The maximum gain
corresponds to the maximum stimulus that is perceived by the
pilot as neither annoying nor harmful to the ear. The proportional
gain is tuned based on the error bounds for Desired and Adequate
performance (discussed later in the paper). The proportional
gain KP is chosen to give M = 50% at the Desired error
threshold of 5 deg and M = 100% at the Adequate error tolerance
threshold of 10 deg. This implies KP = 10 %/deg and is
(a) Right shoulder electrode. (b) Left shoulder + upper right
arm electrodes.
(c) Left shoulder electrode. (d) Left shoulder + upper left arm
electrodes.
Figure 8: Haptic cueing algorithm using a TESLASUIT.
Table 1: Haptic cueing strategy.

Location                         | Power [%]                | Condition 1 | Condition 2
Right shoulder                   | |KPeφ + KD ėφ| + POWmin  | POW ≤ 100%  | KPeφ + KD ėφ > 0
Right shoulder + upper right arm | 100%                     | POW > 100%  | KPeφ + KD ėφ > 0
Left shoulder                    | |KPeφ + KD ėφ| + POWmin  | POW ≤ 100%  | KPeφ + KD ėφ < 0
Left shoulder + upper left arm   | 100%                     | POW > 100%  | KPeφ + KD ėφ < 0
found by:

KP = (SRmax − SRmin) / (eφAdq − eφDes)    (8)

where SRmax = 100% and SRmin = 50%. This way, the displayed
error M will reach 100% for eφ = eφAdq. The derivative
gain is tuned by trial and error. An integral gain was not
found to yield any particular benefits to the cueing strategy
and was therefore discarded. The azimuth location and sound
wave frequency to be generated depend, respectively, on the
sign and the magnitude of M. When M > 0, a sound signal
is played at the right ear, whereas for M < 0 a sound signal
is played at the left ear. The parameter M is then used to
calculate the frequency of the sound generated by the slab3D
software (Ref. 36). The sample rate is 44100 Hz and the pulse
duration is set to 0.03 sec. Based on this strategy, the pilot will
perceive a sound in the direction of the error or, equivalently,
in the direction of the control action to be taken to reduce the
error. The sound signal amplitude is given by:

A = sin(2πωt)    (9)

where ω is the sound signal frequency. The sound frequency
is chosen via a binning strategy consisting of five frequency
bins. These bins are chosen so as to make the jump between
each bin easily perceivable by the pilot. The relation between
the displayed frequency and M is exponential for the three
middle levels and constant for the lower and upper levels, as
described in Table 2. Moreover, sound signals are cued in
pulses for low and mid values of M with a fixed pulse width
and a linearly varying inter-pulse interval. For higher values
of M, the inter-pulse interval is zero (i.e., no time gap between
pulses). As such, the refresh rate of the audio signal is
the sum of the pulse width and the inter-pulse interval,
resulting in an adaptive proportional relation with respect to
the value of the error and error rate.
A bow tie display for the compensatory tracking task is used in
this research. Three visual indicators are used as symbology
and are shown in Fig. 9. The green line corresponds to the
displayed bank angle. The orange line represents the desired
bank angle. The magenta lines indicate the boundaries for Desired
and Adequate performance and are fixed to, and move with,
the orange line. More specifically, the inner portion
of the magenta indicator corresponds to the Desired performance
boundary, whereas the outer portion corresponds to
the Adequate performance threshold. The displayed bank attitude
is the difference between the actual roll attitude and the
disturbance from the sum-of-sines (SOS) forcing function described
in the next section:

φdis = φ − φSOS    (10)

Note that the command indicator value corresponds to the
fixed horizon reference.
Consider the case shown in Fig. 9, where the displayed bank
angle is φdis = 5 deg. Also, assume zero roll rate error, i.e.,
ėφ = 0. Then, a proportional gain KP = 9 yields M = −45%.
According to the binning strategy, this results in a Level 3 frequency
range, with ω = 1231.14 Hz and an inter-pulse interval
equal to 0.11 sec. Since M < 0, the sound signal is cued
to the pilot in their left ear (i.e., with an azimuth ψ = −90
deg). This prompts the pilot to make a left stick correction to
compensate for the error and bring the aircraft back to level.
This cueing strategy is summarized in Table 2. This audio
cueing strategy builds on the work in (Ref. 39).
Table 2: Audio cueing strategy.

Threshold | Frequency [Hz]          | Condition on Frequency [%] | Pulse Width [sec] | Inter-Pulse Interval [sec] | Condition on Inter-Pulse Interval [%]
Level 1   | 110                     | 0 < |M| < 5                | 0.03              | −0.002|M| + 0.2            | 0 < |M| < 65
Level 2   | 220 · 2^(|M|/50)        | 5 ⩽ |M| < 30               | 0.03              | −0.002|M| + 0.2            | 0 < |M| < 65
Level 3   | 1000 · 2^((|M|−30)/50)  | 30 ⩽ |M| < 100             | 0.03              | 0                          | |M| ≥ 65
Level 4   | 2880 · 2^((|M|−100)/50) | 100 ⩽ |M| < 150            | 0.03              | 0                          | |M| ≥ 65
Level 5   | 6880                    | |M| ≥ 150                  | 0.03              | 0                          | |M| ≥ 65
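The binning of Table 2 can be sketched as follows. This is an illustrative reimplementation (function name and return structure are assumptions); it reproduces the worked example of Fig. 9, where M = −45% maps to a Level 3 tone near 1231.14 Hz with a 0.11 s inter-pulse interval in the left ear.

```python
def audio_bin(M):
    """Sound frequency [Hz], inter-pulse interval [s], and cued ear per Table 2."""
    a = abs(M)
    if a < 5:
        f = 110.0                               # Level 1 (constant)
    elif a < 30:
        f = 220.0 * 2 ** (a / 50.0)             # Level 2 (exponential)
    elif a < 100:
        f = 1000.0 * 2 ** ((a - 30.0) / 50.0)   # Level 3 (exponential)
    elif a < 150:
        f = 2880.0 * 2 ** ((a - 100.0) / 50.0)  # Level 4 (exponential)
    else:
        f = 6880.0                              # Level 5 (constant)
    # Inter-pulse interval shrinks linearly with |M|, then pulses merge.
    ipi = -0.002 * a + 0.2 if a < 65 else 0.0
    ear = "right" if M > 0 else "left"          # sign of M picks the azimuth
    return f, ipi, ear

f, ipi, ear = audio_bin(-45.0)  # worked example from the paper
```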
Figure 9: Bow tie display.
EXPERIMENTAL DESIGN
This study’s experiments involve a precision, non-aggressive,
closed-loop tracking task in the roll axis only where the forc-
ing function (command input) is chosen to be a sum of sines
(SOS), as per (Ref. 34). The SOS forcing function is used to
drive the compensatory tracking task through which the pilot
attempts to minimize the displayed error within desired per-
formance constraints. A Fibonacci series-based SOS input is
designed to emphasize the frequency range that encompasses
key vehicle dynamics and typical human pilot control action,
ranging from 0.3 to approximately 6 rad/s. The length for
scoring time is 60 seconds, with a 10 second ramp in/ramp
out period on each end of the 60 second “for score” window.
Five trials are conducted for each cueing modality. Each of
these five trials used a different sum of sines, where the sum
of sines are based on randomly-generated phasing angles and
stored offline. The five sum of sines are the same across each
cueing modality and are presented to the test subject in the
same order (Ref. 34). Experiments make use of four subjects,
one of which is a test pilot. Desired performance is defined
such that the tracking error on the roll attitude be less than
5 deg for at least 50% of the scoring time. Adequate perfor-
mance requires the roll attitude tracking error to be less than
10 deg for at least 75% of the scoring time. These require-
ments are summarized in Table 3.
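The exact Fibonacci-series-based frequency placement and stored phase sets are not listed here, so the following sketch shows only a generic sum-of-sines forcing function of the kind described above; the frequencies (chosen in the 0.3–6 rad/s band with Fibonacci-like spacing), amplitudes, and random phases are illustrative assumptions.

```python
import math
import random

def sum_of_sines(t, freqs, amps, phases):
    """Evaluate phi_SOS(t) = sum_i A_i * sin(w_i * t + p_i)."""
    return sum(a * math.sin(w * t + p) for w, a, p in zip(freqs, amps, phases))

# Assumed frequency placement spanning the task's 0.3-6 rad/s band.
freqs = [0.3, 0.5, 0.8, 1.3, 2.1, 3.4, 5.5]        # rad/s
amps = [5.0 / (i + 1) for i in range(len(freqs))]  # deg, assumed roll-off

# Phases are randomly generated and stored offline, as in the experiments;
# seeding makes one stored realization reproducible.
random.seed(0)
phases = [random.uniform(0.0, 2.0 * math.pi) for _ in freqs]

phi_sos_start = sum_of_sines(0.0, freqs, amps, phases)
```

Each of the five trials would use a different stored phase set, giving unpredictable but repeatable disturbances across cueing modalities.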
The joystick is positioned between the legs of the test subject,
who is seated with their eyes at approximately 1 meter from
the computer screen. The central positioning of the joystick
is justified by the intent to replicate the use of a cyclic stick
in a conventional utility helicopter. By the same rationale, the
vertical position of the joystick is adjusted to have the test
subjects rest their elbow on their thigh when operating the
joystick. This setup is shown in Fig. 10.
Figure 10: Experimental setup.
The following eleven cueing modalities are tested:
1. Visual: The pilot performs the compensatory tracking
task by looking at the visual display only, where the vi-
sual display is the bow tie display shown in Fig. 9.
2. Visual DVE: Here, visual cues are degraded by the use
of foggles to simulate a DVE.
3. Visual + Audio: Information about the tracking error is
provided to the pilot via the visual (bow tie) display and
Table 3: Desired and Adequate performance metrics.

Performance | Tracking Error Bounds [deg] | Scoring Time [%]
Desired     | |eφ| < 5                    | ≥ 50
Adequate    | |eφ| < 10                   | ≥ 75
via stereo audio cueing.
4. Visual DVE + Audio: This is similar to the cueing
modality above, but visual cues are degraded by the use
of foggles.
5. Visual + Haptic: Information about the tracking error is
provided to the pilot via the visual (bow tie) display and
via full-body haptics.
6. Visual DVE + Haptic: This is similar to the cueing
modality above, but visual cues are degraded by the use
of foggles.
7. Visual + Audio + Haptic: Information about the track-
ing error is provided to the pilot via: (i) the visual (bow
tie) display, (ii) spatial audio cueing, and (iii) full-body
haptics.
8. Visual DVE + Audio + Haptic: This is similar to the
cueing modality above, but visual cues are degraded by
the use of foggles.
9. Audio: Information about the tracking error is provided
to the pilot via spatial audio cueing only while the pilot
is blindfolded (Fig. 10).
10. Haptic: Information about the tracking error is provided
to the pilot via full-body haptics only while the pilot is
blindfolded.
11. Audio + Haptic: Information about the tracking error is
provided to the pilot via combined spatialized audio and
full-body haptics while the pilot is blindfolded.
This test matrix is summarized in Table 4. It is worth noting
that derivative-only compensation was discarded a priori,
as test subjects incurred pilot-induced oscillations (PIOs).
Additionally, proportional-only compensation was also discarded
a priori in light of the results of (Ref. 29). In that
study, PD compensation was shown to yield higher performance
than P-only compensation. The aircraft dynamics is
chosen to be representative of the roll dynamics of a conventional
utility helicopter similar to a UH-60 with Level 1
handling qualities. The specific transfer function used in this
study is:
Yc(s) = (φ/δlat)(s) = Lδlat / [s(s − Lp)]    (11)

where Lp = −3.5 1/s is the roll acceleration due to roll
rate and Lδlat = 0.147 rad/(s²·%) is the roll acceleration due
to a lateral stick displacement. The SOS code and visual cue-
ing interface of (Ref. 34) are developed in C/C++, along with
the interfaces between these two, the spatial audio slab3D software
(Ref. 36), and the aircraft dynamics. The aircraft dynamics
is implemented in MATLAB®/Simulink and subsequently
compiled to C/C++ code.
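Equation (11) can be simulated in the time domain as follows (a forward-Euler sketch, not the paper's MATLAB/Simulink implementation; step size and duration are arbitrary). For a constant stick input δlat, the roll rate settles toward −Lδlat·δlat/Lp, which gives 0.147·10/3.5 = 0.42 rad/s for a 10% step.

```python
def simulate_roll(delta_lat, dt=0.001, t_end=2.0, Lp=-3.5, Ldlat=0.147):
    """Forward-Euler simulation of Eq. (11) as phi_ddot = Lp*phi_dot + Ldlat*delta_lat.
    delta_lat is a function of time returning lateral stick displacement [%]."""
    phi, p = 0.0, 0.0  # roll attitude [rad], roll rate [rad/s]
    t = 0.0
    while t < t_end:
        p_dot = Lp * p + Ldlat * delta_lat(t)  # roll acceleration
        phi += p * dt                           # integrate attitude
        p += p_dot * dt                         # integrate rate
        t += dt
    return phi, p

# 10 % lateral stick step: rate approaches 0.42 rad/s with a ~0.29 s time constant.
phi_end, p_end = simulate_roll(lambda t: 10.0)
```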
PARAMETRIC IDENTIFICATION
Parametric identification studies make use of the CIFER®
(Ref. 40) software tool. The identification procedure is based
on a two-step process. First, frequency responses of the air-
craft output to the tracking error are extracted from piloted
simulation data. Next, state-space models are identified from
the frequency response data.
Pilot Vehicle System Dynamics
Consider the PVS dynamics of Eq. (1):

YOL(s) = (φ/eφ)(s) = (ωc/s) e^(−sτe)    (12)

To be suitable for state-space parametric identification, these
dynamics are transformed into state-space form:

φ̇ = 0·φ + ωc eφ(t − τe)    (13)

As such, the parameters to be identified from input-output
data are ωc and τe.
Pilot Dynamics
Consider now expressing the PVS dynamics of Eq. (1) in
terms of the pilot and aircraft dynamics parameters:

YOL(s) = (φ/eφ)(s) = YpYc = [Lδlat Kp(TLs + 1) / (−Lp s(TIs + 1))] e^(−sτe)    (14)
       ≈ [Lδlat Kp / (−Lp s)] e^(−sτe)    (15)

since TL is approximately equal to TI and TI ≈ −1/Lp (Ref. 33).
It is worth noting that ωc = −Lδlat Kp/Lp, such that the pilot gain
Kp can be identified based on the crossover frequency ωc identified
with the model of Eq. (13).
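Given the relation ωc = −Lδlat Kp/Lp, the pilot gain follows directly from an identified crossover frequency. The sketch below is illustrative (the chosen ωc is an assumed example value, not an identified result):

```python
def pilot_gain(wc, Lp=-3.5, Ldlat=0.147):
    """Recover the pilot static gain Kp from the identified crossover
    frequency wc, inverting wc = -Ldlat*Kp/Lp from Eq. (15)."""
    return -wc * Lp / Ldlat

# Assumed wc = 2 rad/s: Kp = 2 * 3.5 / 0.147, roughly 47.6 %/rad.
Kp = pilot_gain(2.0)
```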
RESULTS
Pilot-Vehicle System Performance
After tuning, the PD gains used for the set of experiments
considered in this analysis are KP = 9 %/deg and KD = 4 %/(deg-s)
Table 4: Test matrix.

Experiment # | Cueing Modality             | Audio KP [%/deg] | Audio KD [%-s/deg] | Haptic KP [%/deg] | Haptic KD [%-s/deg]
1            | Visual                      | -                | -                  | -                 | -
2            | Visual DVE                  | -                | -                  | -                 | -
3            | Visual + Audio              | 9                | 4                  | -                 | -
4            | Visual DVE + Audio          | 9                | 4                  | -                 | -
5            | Visual + Haptic             | -                | -                  | 7                 | 5
6            | Visual DVE + Haptic         | -                | -                  | 7                 | 5
7            | Visual + Audio + Haptic     | 9                | 4                  | 7                 | 5
8            | Visual DVE + Audio + Haptic | 9                | 4                  | 7                 | 5
9            | Audio                       | 9                | 4                  | -                 | -
10           | Haptic                      | -                | -                  | 7                 | 5
11           | Audio + Haptic              | 9                | 4                  | 7                 | 5
Figure 11: Example time history for a non-aggressive compensatory tracking task with audio-only cueing and PD compensation.
for spatial audio cueing, and KP = 7 %/deg and KD = 5 %/(deg-s)
for haptics. Figure 11 shows an example time history for
a non-aggressive compensatory tracking task with an audio-only
cueing modality and PD compensation.
The Desired performance success rate for each cueing modal-
ity is shown in Fig. 12. More specifically, Fig. 12a shows
the success rate for each test subject, whereas Fig. 12b shows
the average success rate across all test subjects. Various ob-
servations are made. First, visual cueing in DVE is associated
with a lower success rate compared to non-DVE visual cue-
ing. This is indicative of the fact that foggles are an effective
way to decrease PVS performance due to the degraded visual
perception of the pilot. Second, bimodal cueing, i.e., when
visual cueing is augmented with either audio or haptic cue-
ing, yields higher performance than visual-only cueing. This
is true for both DVE and non-DVE visual cueing. Third, tri-
modal cueing, i.e., when visual cueing is augmented with both
audio and haptic cueing, shows higher success rate than bi-
modal cueing. This is also true for both DVE and non-DVE
visual cueing. This indicates that, for the task in consider-
ation, augmenting vision with both audio and haptic cueing
is preferable to augmenting vision with audio or haptic cue-
ing. Further, these results suggest that the performance lost
from degraded vision is partially recovered through the use
of audio and/or haptic cueing. Fourth, audio- and haptic-only cueing provides satisfactory performance on average, although trials from some test subjects resulted in scores below the Desired success rate. This constitutes a significant result in that it suggests that the task in consideration can be flown without the primary sense of vision, relying instead on a secondary perceptual channel like spatial audition or haptics. Fifth, the combination of audio and haptic feedback in the case of no visual cues yields a higher success rate than the audio- and haptic-only modalities.
Figure 13 shows the Adequate performance success rate for
each test subject and cueing modality. This figure shows that
most cueing modalities yield a success rate close to 100%,
except for audio only, haptic only, and combined audio and
haptic cueing. Still, these three modalities show success rates
higher than the Adequate performance threshold and further substantiate the conclusion that the task in consideration can be flown without the primary sense of vision, using secondary sensory cues alone.
Frequency-Domain Analysis
While time-domain Desired and Adequate metrics for task
performance might be representative of the performance of
the haptic and spatial audio cueing strategies developed
herein, it is important to also perform a frequency-domain
analysis to obtain a better understanding of these haptic and
spatial audio cueing laws. As such, this section focuses on the
frequency responses identified from input-output data. The
frequency responses shown are those for the open-loop pilot
vehicle dynamics of Eq. (1), i.e., for φ(s)/e_φ(s).

Figure 12: Desired performance success rates: (a) success rate for each test subject; (b) success rate averaged across all test subjects.

These frequency
responses are averaged across all pilots to be representative
of the mean behavior of all test subjects. Figure 14 shows
the open-loop PVS response for different cueing modalities.
Figure 14a shows the frequency response of all cueing modal-
ities. First, those modalities associated with the lower static
gain are audio-only, haptic-only, and combined audio and hap-
tic cueing. On the other hand, those frequency responses with
the higher static gain are those associated with multimodal
cueing. Gain differences between different cueing modalities
appear to remain approximately constant up to the crossover
frequency of each modality. The same applies for phase lag,
except for visual cueing only. In fact, while the phase lag
across most cueing modalities is similar, visual cueing only
yields a significantly lower phase at low frequencies (up to
approximately 1 rad/s). Consider now only those frequency
responses corresponding to visual only, and visual cueing aug-
mented with both audio and haptic cues, as shown in Fig. 14b.
The frequency response corresponding to visual-only cues in
DVE shows a significantly lower gain than that for visual-only
cueing in non-DVE conditions. Again, this is an indication
that simulating DVE with foggles is an effective strategy to
degrade PVS performance. Augmenting the pilot’s percep-
tion with audio and haptic cues in DVE restores the pilot gain
to a gain approximately corresponding to that of visual-only
cueing in non-DVE. This is consistent with the previous per-
formance analysis, suggesting that augmenting the pilot’s vi-
sual perception in DVE with secondary sensory cues helps
recuperate that part of the performance lost because of DVE.
Lastly, combined visual, audio, and haptic cueing in non-DVE
conditions shows an overall higher gain than visual-only cue-
ing, indicating that, for the task in consideration, multimodal
cues enhance the PVS performance compared to visual-only
cueing. These observations are in line with previous studies
on multimodal cueing (Ref. 41).
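As an illustration of the kind of input-output identification underlying these frequency responses, the sketch below estimates a frequency response from simulated sum-of-sines data by taking the ratio of the output and input Fourier coefficients at the excitation frequencies. The first-order plant and all numerical values are stand-ins for illustration; they are not the actual pilot-vehicle dynamics or the identification software used in the study.

```python
import numpy as np

# Estimate a frequency response from input-output data via the ratio of
# Fourier coefficients at the excitation frequencies. The plant below is
# an illustrative stand-in, not the actual pilot-vehicle dynamics.
dt, T, settle = 0.005, 100.0, 20.0
t = np.arange(0.0, T + settle, dt)
k_bins = np.array([16, 50, 120])           # FFT bins used for excitation
omegas = 2.0 * np.pi * k_bins / T          # excitation frequencies [rad/s]
u = sum(np.sin(w * t) for w in omegas)     # sum-of-sines (SOS) input

a, K = 2.0, 3.0                            # stand-in plant: y' = -a*y + K*u
y = np.zeros_like(u)
for i in range(1, len(t)):                 # forward-Euler simulation
    y[i] = y[i - 1] + dt * (-a * y[i - 1] + K * u[i - 1])

n = int(round(T / dt))                     # analyze the last T seconds only,
U = np.fft.rfft(u[-n:])                    # so the transient has decayed
Y = np.fft.rfft(y[-n:])
H_est = Y[k_bins] / U[k_bins]              # estimated frequency response
H_true = K / (1j * omegas + a)             # analytic response, for comparison
```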
Identified Pilot-Vehicle System Dynamics
Next, frequency-domain identification is performed on both the open-loop pilot-vehicle dynamics and the pilot dynamics. The frequency range used for the identification process is 0.5 ≤ ω ≤ 4 rad/s for non-DVE visual-only cueing tasks, 0.5 ≤ ω ≤ 3 rad/s for DVE visual and combined visual and audio cueing tasks, and 0.5 ≤ ω ≤ 2 rad/s for haptic-only, audio-only, and combined haptic and audio cueing tasks. The average cost functions associated with the identification of each cueing modality for each subject were always less than J = 100 and typically less than J = 50. An average cost function of J ≤ 100 reflects an acceptable level of accuracy for flight dynamics modeling, whereas a cost function of J ≤ 50 can be expected to produce a model that is nearly indistinguishable from the original in both the frequency and time domains (Ref. 40). However, some of the individual cost functions can reach values of J ≈ 200 without resulting in a noticeable loss
of overall predictive accuracy.

Figure 13: Adequate performance success rates.
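The crossover-model fit behind these cost values can be sketched as follows. The grid search and the simplified CIFER-style cost below (magnitude error in dB, phase error in deg) are illustrative stand-ins for the actual identification procedure, and the "identified" response is synthetic, generated from known parameters purely to demonstrate the mechanics.

```python
import numpy as np

# Grid-search fit of crossover-model parameters (omega_c, tau_e) to an
# identified frequency response, with a simplified CIFER-style cost.
w = np.linspace(0.5, 4.0, 30)                 # identification range [rad/s]

def crossover_model(w, wc, tau):
    """Open-loop crossover model wc * exp(-s*tau) / s, evaluated at s = jw."""
    s = 1j * w
    return wc * np.exp(-s * tau) / s

H_id = crossover_model(w, 2.5, 0.35)          # synthetic "identified" data

def cost_J(H_model, H_data):
    """Frequency-response mismatch: magnitude in dB, phase in deg."""
    d_mag = 20.0 * np.log10(np.abs(H_model) / np.abs(H_data))
    d_ph = np.degrees(np.angle(H_model) - np.angle(H_data))
    return 20.0 / len(w) * np.sum(d_mag**2 + 0.01745 * d_ph**2)

candidates = [(wc, tau) for wc in np.arange(1.0, 4.001, 0.05)
                        for tau in np.arange(0.2, 0.601, 0.01)]
wc_fit, tau_fit = min(candidates,
                      key=lambda p: cost_J(crossover_model(w, *p), H_id))
```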
Figure 15 shows the PVS averaged (across all pilots) identi-
fied crossover frequency and effective time delay, along with
their relation to Desired performance success rate, for all cue-
ing modalities. Figure 15a shows the relationship between
Desired performance success rate and effective time delay.
This figure shows that those modalities yielding lower success
rates are associated with higher effective time delays. Simi-
larly, modalities that give higher success rates are associated
with lower effective time delays. For instance, low success rates and high time delays are associated with the use of secondary cues only. On the other hand, high success rates are achieved by augmenting visual cues with haptic and/or audio cues. No-
tably, combined visual, audio, and haptic cueing gives sig-
nificantly lower effective time delay and higher success rate
than visual-only cueing, indicating that the adoption of sec-
ondary sensory cues is beneficial toward pilot-vehicle perfor-
mance. Figure 15b shows the relationship between Desired
performance success rate and crossover frequency. Similarly
to effective time delay, there is a distinct relationship between
success rate and crossover frequency (approximately linear
in the log10 of the frequency) where the use of secondary
cues only yields low crossover frequencies and success rates,
while multimodal cueing is associated with high crossover
frequencies and success rates. Lastly, Fig. 15c shows effective
time delay versus crossover frequency. Here, secondary sen-
sory cues yield high effective time delays and low crossover
frequencies whereas multimodal cueing results in low effec-
tive time delays and high crossover frequencies. Worthy of
note is the fact that augmenting visual cueing with audio and
haptic cues yields significantly lower time delay and higher
crossover frequency than visual cueing alone. Moreover, the
use of audio and haptic cueing in DVE restores the crossover
frequency lost because of degraded vision. This indicates that
the augmentation of vision with secondary sensory cues might
be particularly beneficial in DVE conditions in that it com-
pensates for degraded vision. While plots of the pilot gain are not shown here, the pilot gain is linearly related to the crossover frequency in the assumed pilot model via K_p = −ω_c L_p / L_δlat. This result is obtained by equating Eq. (1) with the product of Eqs. (2) and (3). As such, the pilot gain can easily be reconstructed from the data presented.
Next, the tracking error normalized by the standard deviation
of the SOS input is plotted against crossover frequency and
effective time delay, as shown in Fig. 16. Figure 16a shows
how the tracking error generally decreases and crossover fre-
quency increases with increasing multimodal cues. In fact,
the combination of visual, audio, and haptic feedback yields
the lowest tracking error and highest crossover frequency. It
is also worth noting that augmenting the pilot's vision with auditory and haptic cues in DVE leads to roughly the same tracking error and crossover frequency as visual-only cueing. This further suggests that pilot-vehicle performance
in DVE can be restored via multimodal cueing. Interestingly,
tracking error appears to decrease approximately linearly with
crossover frequency where crossover frequency is plotted on
a log-10 scale. Figure 16b shows how both tracking error and
effective time delay decrease for increasing multimodal cues.
This suggests that haptic and spatial audio cues help reduce the pilot's effective time delay which, in turn, yields lower tracking error. A somewhat linear trend can also be seen for tracking error versus effective time delay, although this trend is not as pronounced as that for the crossover frequency.
These results, and in particular the inverse relationship between the crossover frequency and the normalized error (Fig. 16a), agree with the results in Ref. 42, which shows that the crossover frequency (for a given task) is proportional to the ratio of error rate to error; this ratio is independent of time delay, since time delay has the same relative effect on the error and on its rate. Figure 15 shows a similar inverse relationship to that between crossover and error: each phase data point is associated with the crossover of a given test configuration, which indicates that the crossover for a given sensory combination is inversely proportional to the phase lag at that crossover. In short, a sensory combination associated with a higher crossover reduces the phase loss at that frequency relative to a sensory combination associated with a lower crossover. This is confirmed in Fig. 14 for frequencies ω > 1.5 rad/s. The clue as to why lies in the fact that the pilot must provide frequency compensation for the visual channel, whereas both the audio and haptic channels provide quickened displays. Looking at Fig. 14, these displays flatten out the open-loop response prior to
crossover (the haptic break frequency is 1.4 rad/s and the audio break frequency is 2.25 rad/s, as obtained from K_P and K_D).

Figure 14: Open-loop PVS frequency responses (magnitude, phase, and coherence) averaged across all pilots: (a) all cueing modalities; (b) visual and combined visual, audio, and haptic cueing modalities in both GVE and DVE.

Each of the two channels creates lead at these break frequencies in order to extend their crossovers (otherwise, the phase loss due to their large central nervous system (CNS) processing time delays, relative to the visual channel, would constrain their crossovers to much lower frequencies). One can assert that their process-
ing times are larger than the visual’s by comparing the phase
losses of the three channels above 1.5 rad/s. The injection
of lead by the non-visual channels when used with visual ap-
pears to reduce the visual processing time, despite their in-
dividual larger processing times. The decrement in effective
time delay going from V to V+H, added to the time delay decrement going from V to V+A, approximately equals
the time delay decrement going from V to V+H+A. This is a
notable result, in that each of the non-visual channels’ contri-
butions to the visual channel’s processing delay reduction are
additive when used together. This also serves to explain the
behavior observed when non-visual cues are used with DVE.
The reduced contrast and blurring associated with the DVE
condition introduce significant thresholding effects (which re-
duces gain, hence crossover, and increases processing time),
but the non-visual cues reduce phase loss thus allowing the
pilot to increase his gain (crossover). The non-visual cues
are so effective as to restore DVE performance to Visual-only
performance. It is worth noting that the CNS processing time
being referred to is associated with the modified Structural
Model (SM) of the pilot (Ref. 38); the Crossover Model's effective time delay lumps the SM's processing time delay and the neuromuscular phase-loss effects into a single time delay at crossover.
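The break frequencies cited for the haptic and audio channels follow directly from the PD gains, since K_P e + K_D ė = K_P (1 + (K_D/K_P) s) e acts as a first-order lead with break frequency K_P/K_D. A quick check with the Table 4 gains:

```python
def lead_break_frequency(K_P, K_D):
    """Break frequency [rad/s] of the first-order lead implied by the
    PD cue: K_P*e + K_D*e_dot = K_P*(1 + (K_D/K_P)*s)*e."""
    return K_P / K_D

haptic_break = lead_break_frequency(K_P=7.0, K_D=5.0)   # 1.4 rad/s
audio_break = lead_break_frequency(K_P=9.0, K_D=4.0)    # 2.25 rad/s
```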
The PVS stability margins are calculated from the identified
open-loop PVS crossover model. Figures 17a and 17b show
the gain and phase margins of the open-loop PVS system for
the different cueing methods, respectively. The mean stability
margins are similar for all cases (Gm ≈5 dB and Pm ≈35),
which are typical numbers for PVS (Ref. 43), and similar
to the assumed values used in the ADS-33 (Ref. 44) piloted
bandwidth requirements (6 dB and 45 deg). It is clear that the
subjects adjust their gain (PVS crossover frequqency) based
on the equivalent loop delay such that they operate with simi-
lar stability margins for all cases. Increasing their gain further
to try to achieve better performance will result in a reduction
in stability margins, a more oscillatory closed-loop responses,
and ultimately worse performance.
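For the pure crossover-model form of the open loop, L(s) = ω_c e^{−sτ_e}/s, the stability margins have closed-form expressions: PM = 90° − ω_c τ_e (in degrees) and GM = 20 log10(ω_180/ω_c), with ω_180 = π/(2τ_e). The sketch below uses illustrative numbers of the same order as those reported, not identified values:

```python
import math

def margins(wc, tau):
    """Gain margin [dB] and phase margin [deg] of the open-loop
    crossover model L(s) = wc * exp(-s*tau) / s."""
    pm = 90.0 - math.degrees(wc * tau)   # phase margin at gain crossover wc
    w180 = 0.5 * math.pi / tau           # frequency where phase reaches -180 deg
    gm = 20.0 * math.log10(w180 / wc)    # gain margin, since |L(j*w180)| = wc/w180
    return gm, pm

gm, pm = margins(wc=2.5, tau=0.35)       # roughly 5.1 dB and 39.9 deg
```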
Figure 15: PVS averaged crossover frequency, effective time delay, and Desired performance success rate: (a) Desired success rate vs. effective time delay; (b) Desired success rate vs. crossover frequency; (c) effective time delay vs. crossover frequency.
TEST PILOT QUALITATIVE FEEDBACK
This section reports the feedback from the test subject who is
a test pilot.
Full-Body Haptics
The test pilot comments on the full-body haptics algorithms
are articulated below:
Figure 16: PVS averaged crossover frequency, effective time delay, and tracking error: (a) normalized tracking error vs. crossover frequency; (b) normalized tracking error vs. effective time delay.

1. The vision-restricting devices (foggles) were mission-representative in that they gave the perception of flying
in and out of the clouds and contributed to spatial disori-
entation. It was significantly harder to track commanded
roll attitude with visual feedback only. Often it was nec-
essary for the attitude to exceed the bow tie boundary
before a deviation was visually apparent, requiring large
roll corrections in the correct direction. However, large
and rapid control inputs are not a recommended control
strategy in degraded visual environments.
2. PD haptic compensation coupled with visual feedback was initially confusing because the haptic signal might not switch signs when the visual attitude indicator crossed the commanded attitude line, depending on the sign of the tracking-error derivative. This contributed to initial mistrust of, and hesitation in responding to, the haptic cueing algorithm. However, with some practice, it was favored over proportional- and derivative-only compensation, and performance improved.
3. Regarding the perceived workload, the pilot reported that
combined visual and haptic cueing required more “men-
tal reserve” in order to decide which one to trust. This
could be attributed to several factors. The first possibil-
ity is mistrust in the novel cueing method presented to the
pilot, possibly curable with practice. The second factor
could be non-congruent haptic and visual cueing, per-
haps due to different delays between visual and haptic
systems. A third explanation is that the accuracy and precision of visual perception in DVE may be reduced to a point where they are comparable with those of haptic
cueing, so there is no longer a dominant cue between vi-
sion and haptics. This would force the pilot to choose
which one to trust, adding to cognitive workload, espe-
cially if the cues were conflicting or incongruent.
4. The addition of haptic cueing was received favorably by
the pilot, particularly in DVE. Haptic cueing helped re-
store the perception of both the tracking error and its rate
of change.
5. The test pilot reported an increased sensitivity to auditory
noise when using haptic-only feedback. The amount of
mental reserve that could be applied to the haptic feed-
back probably increased, however.
6. When flying with haptic cueing only, good performance
(as evaluated with Adequate and Desired performance
metrics) came with a feeling of “having done terribly”
and of being in a “massive pilot-induced oscillation
(PIO)” while poor performance was accompanied by a
feeling of having done well. This will be investigated
further in subsequent experiments.
7. At first the electro-stimulation feedback felt awkward to
the pilot, but the system made significantly more sense
when flying completely blind. The activation of one
shoulder muscle group by the suit easily triggered the
brain to activate the right arm and to move laterally in
that direction. The association was logical and somewhat easily executed with minimal training.

Figure 17: PVS stability margins for different cueing modalities: (a) gain margin; (b) phase margin.
Spatial Audio
The test pilot added the comments below about the use of spa-
tialized audio cueing:
1. The sounds were slightly confusing at the beginning, but after a while they became understandable.
2. The frequency levels were distinguishable against the attitude error and error rate. Also, the decrease in the inter-pulse interval was noticeable as the error increased rapidly. The high pitch was very helpful, as an alert after a warning, when exceeding the Adequate error tolerance.
3. The 3-D audio was very helpful in degraded vision with foggles for knowing the initial direction of the error and its rate. With spatial audio, one can immediately move the stick based on the direction and frequency heard when the visuals are not clear enough to trust.
4. “Sometimes the direction switched very quickly between
left and right and accompanied with high frequency giv-
ing the impression of doing bad tracking but in fact that
is not necessarily correct.” That is because the rate feedback can change very quickly according to the sum-of-sines signal.
5. The sound that occurs at zero or low errors (when the indicator is at or close to the reference) changes slowly, and the bass (low-frequency) tone is not motivating enough to move the stick, so the pilot may wait and focus on larger error signals. This harms the Desired success rate. However, if the sound were quicker and higher-pitched, the pilot might overshoot the needed stick action.
Combined Haptic and Spatial Audio Cueing
1. Full-body haptics and spatial audio cues complement
each other well. Right and left signals from audio and
haptic cues were synchronized well.
2. Adding haptic cues to audio makes it easier to determine
the direction of the correction compared to audio only.
3. “Mental reserve felt less than for bimodal cueing, i.e.,
combined visual and audio, or combined visual and hap-
tic cues. I did not feel overloaded with combined visual,
haptic, and audio cues. Hearing and feeling would match
with what I was supposed to do as a corrective action.”
4. One may not pay attention to secondary cues in a non-
DVE condition. However, attention to secondary cues
grows as visual cues are degraded or denied.
5. Significantly lower mental reserve was needed for combined audio and haptic cues in denied visual environments. One seems able to focus more without vision, which makes the task less demanding to perform from a mental-capacity standpoint. However, the task in consideration is not representative of the radio environment in a helicopter.
6. Denying primary perception seemed to improve sec-
ondary cueing modalities.
CONCLUSIONS
Full-body haptic and spatial audio cueing algorithms were de-
veloped, implemented, and tested for augmented pilot percep-
tion. Full-body haptic cueing algorithms are based on lo-
calized electrical muscle stimulation (EMS) obtained via a
commercial, off-the-shelf full-body haptic suit. Spatial au-
dio algorithms are based on 3-D audio obtained via a com-
mercial, off-the-shelf closed-ear headset and the slab3D soft-
ware (Ref. 36). Cueing algorithms were developed for roll-
axis compensatory tracking tasks where the pilot acts on the
displayed error between a desired input and the comparable
vehicle output motion to produce a control action. The track-
ing error was displayed to the pilot using different cueing
modalities which are a combination of: visual, audio, and
haptic cues. Additionally, visual cues were considered in both
good and degraded visual environments (GVE/DVE). Experi-
ments that involved four test subjects, one of which was a test
pilot, were conducted to gather quantitative data and qualita-
tive feedback for analyzing the performance of the audio cue-
ing algorithms. Time- and frequency-domain analyses on the
test data were performed.
Based on this work, full-body haptic and spatial audio cueing
algorithms that are based on a proportional-derivative (PD)
compensation strategy on the tracking error were found to:
(i) provide satisfactory PVS performance for the task in con-
sideration when using haptic feedback only, audio feedback
only, and combined audio + haptic feedback (no visual cues),
and (ii) improve PVS performance, especially in degraded visual environments (DVE), when using combined visual, audio, and/or haptic feedback. These results indicate that the use of secondary sensory cues such as full-body haptics and 3-D sounds to augment the pilot's perception can lead to improved/partially-restored PVS performance when primary sensory cues like vision are impaired or denied. These results are congruent with the principles of multisensory enhancement and inverse effectiveness, i.e., the magnitude of the multisensory enhancement is inversely proportional to the reliability of the visual modality. Still, qualitative feedback
from the test pilot suggested that cognitive workload may vary
depending on the cueing modality. This will be the object of
future investigations.
ACKNOWLEDGEMENTS
This research was partially funded by the U.S. Government
under agreement no. N000142312067. The views and con-
clusions contained in this document are those of the authors
and should not be interpreted as representing the official poli-
cies, either expressed or implied, of the U.S. Government.
REFERENCES
1. Sweller, J., “Cognitive load during problem solving: Ef-
fects on learning,” Cognitive Science, Vol. 12, (2), 1988,
pp. 257–285. DOI: 10.1016/0364-0213(88)90023-7
2. Colonius, H., and Diederich, A., “A Maximum-
Likelihood Approach to Modeling Multisensory En-
hancement,” Advances in Neural Information Process-
ing Systems 14, Vancouver, Canada, Dec 3-8, 2001.
DOI: 10.7551/mitpress/1120.001.0001
3. Meredith, M. A., and Stein, B. E., “Spatial factors de-
termine the activity of multisensory neurons in cat su-
perior colliculus,” Brain Research, Vol. 365, (2), 1986.
DOI: 10.1016/0006-8993(86)91648-3
4. Reynal, M., Imbert, J. P., Arico, P., Toupillier, J., Borgh-
ini, G., and Hurter, C., “Audio Focus: Interactive spa-
tial sound coupled with haptics to improve sound source
location in poor visibility,” International Journal of
Human-Computer Studies, Vol. 129, 2019, pp. 116–128.
DOI: 10.1016/j.ijhcs.2019.04.001
5. de Stigter, S., Mulder, M., and van Paassen, M. M., “De-
sign and Evaluation of a Haptic Flight Director,” Journal
of Guidance, Control, and Dynamics, Vol. 30, (1), 2007,
pp. 35–46. DOI: 10.2514/1.20593
6. Brill, J. C., Lawson, B. D., and Rupert, A., “Tac-
tile Situation Awareness System (TSAS) as a Com-
pensatory Aid for Sensory Loss,” Proceedings of the
58th Annual Meeting of the Human Factors and Er-
gonomics Society, Chicago, IL, Oct 27–31, 2014.
DOI: 10.1177/154193121458121
7. Tzafestas, S. T., Birbas, K., Koumpouros, Y.,
and Christopoulos, D., “Pilot Evaluation Study of
a Virtual Paracentesis Simulator for Skill Training
and Assessment: The Beneficial Effect of Hap-
tic Display,” Presence: Teleoperators and Virtual
Environments, Vol. 17, (2), 2008, pp. 212–229.
DOI: 10.1162/pres.17.2.212
8. Miller, J. D., Godfroy-Cooper, M., and Szoboszlay,
Z. P., “Augmented-Reality Multimodal Cueing for Ob-
stacle Awareness: Towards a New Topology for Threat-
Level Presentation,” Proceedings of the 75th Annual Fo-
rum of the Vertical Flight Society, Philadelphia, PA,
May 13-16, 2019. DOI: 10.4050/F-0075-2019-14562
9. Begault, D., Wenzel, E. M., Godfroy-Cooper, M.,
Miller, J. D., and Anderson, M. R., “Applying Spatial
Audio to Human Interfaces: 25 Years of NASA Expe-
rience,” Proceedings of the 40th International AES Con-
ference, Tokyo, Japan, Oct 8-10, 2010.
10. Wenzel, E. M., and Godfroy-Cooper, M., “Advanced
Multimodal Solutions for Information Presentation,”
Technical report, NASA TM–20210017507, Ames Re-
search Center, Moffett Field, CA, 2021.
11. Deldycke, P. J., Van Baelen, D., Pool, D. M., van
Paassen, M. M., and Mulder, M., “Design and Eval-
uation of a Haptic Aid for Training of the Manual
Flare Manoeuvre,” AIAA 2018-0113, Proceedings of
the AIAA SciTech Forum, Kissimmee, FL, Jan 8–12,
2018. DOI: 10.2514/6.2018-0113
12. Fabbroni, D., Design and Evaluation of an Adaptive He-
licopter Trainer with Haptic Force-Feedback for Inex-
perienced Pilots, Ph.D. thesis, University of Pisa, May,
2017.
13. D’Intino, G., Olivari, M., Geluardi, S., Fabbroni, D.,
Buelthoff, H., and Pollini, L., “A Pilot Intent Estima-
tor for Haptic Support Systems in Helicopter Maneu-
vering Tasks,” AIAA 2018-0116, Proceedings of the
AIAA SciTech Forum, Kissimmee, FL, Jan 8–12, 2018.
DOI: 10.2514/6.2018-0116
14. Olivari, M., Nieuwenhuizen, F. M., Bülthoff, H. H., and
Pollini, L., “Pilot Adaptation to Different Classes of
Haptic Aids in Tracking Tasks,” Journal of Guidance,
Control, and Dynamics, Vol. 37, (6), 2014, pp. 1741–
1753. DOI: 10.2514/1.G000534
15. D’Intino, G., Olivari, M., Buelthoff, H. H., and Pollini,
L., “Haptic Assistance for Helicopter Control Based
on Pilot Intent Estimation,” Journal of Aerospace In-
formation Systems, Vol. 17, (4), 2020, pp. 193–203.
DOI: 10.2514/1.I010773
16. Olivari, M., Nieuwenhuizen, F. M., Bülthoff, H. H., and
Pollini, L., “An Experimental Comparison of Haptic and
Automated Pilot Support Systems,” AIAA 2014-0809,
Proceedings of the AIAA SciTech Forum, National Har-
bor, MD, Jan 13-17, 2014. DOI: 10.2514/6.2014-0809
17. Fabbroni, D., Geluardi, S., Gerboni, C. A., Olivari, M.,
Pollini, L., and Buelthoff, H. H., “Quasi-Transfer of Helicopter Training from Fixed- to Motion-Base Simulator,” Proceedings of the
43rd European Rotorcraft Forum, Milano, Italy, Sep 12-
15, 2017.
18. D’Intino, G., Pollini, L., and Buelthoff, H. H., “A 2-DoF
Helicopter Haptic Support System based on Pilot Intent
Estimation with Neural Networks,” AIAA 2020-0408,
Proceedings of the AIAA SciTech Forum, National Har-
bor, MD, Jan 6-10, 2020. DOI: 10.2514/6.2020-0408
19. Fabbroni, D., Geluardi, S., Gerboni, C. A., D’Intino, G.,
Pollini, L., and Buelthoff, H. H., “Design of a Haptic
Helicopter Trainer for Inexperienced Pilots,” Proceed-
ings of the 73rd Annual Forum of the Vertical Flight So-
ciety, Fort Worth, TX, May 9-11, 2017.
20. Van Baelen, D., Ellerbroek, J., van Passen, M. M., and
Mulder, M., “Design of a Haptic Feedback System for
Flight Envelope Protection,” Journal of Guidance, Con-
trol, and Dynamics, Vol. 43, (4), 2020, pp. 700–714.
DOI: 10.2514/1.G004596
21. Müllhäuser, M., and Leißling, D., “Development and In-Flight Evaluation of a Haptic Torque Protection,” Journal of the American Helicopter Society, Vol. 64, 012003 (2019). DOI: 10.4050/JAHS.64.012003
22. Müllhäuser, M., and Lusardi, J., “US-German Joint In-Flight and Simulator Evaluation of Collective Tactile Cueing for Torque Limit Avoidance: Shaker Versus Soft Stop,” Journal of the American Helicopter Society, Vol. 67, 032006 (2022). DOI: 10.4050/JAHS.67.032006
23. Sahasrabudhe, V., Horn, J. F., Sahani, N., Faynberg, A.,
and Spaulding, R., “Simulation Investigation of a Com-
prehensive Collective-Axis Tactile Cueing System,”
Journal of the American Helicopter Society, Vol. 51, (3),
2006, pp. 215–224. DOI: 10.4050/1.3092883
24. Jeram, G. J., and Prasad, J. V. R., “Open Architec-
ture for Helicopter Tactile Cueing Systems,” Journal
of the American Helicopter Society, Vol. 50, (3), 2005,
pp. 238–248. DOI: 10.4050/1.3092860
25. McGrath, B. J., “Tactile Instrument for Aviation,” Tech-
nical report, NAMRL Monograph 49, Naval Aerospace
Medical Research Laboratory, Pensacola, FL, 2000.
26. McGrath, B. J., Estrada, A., Braithwaite, M. G., Raj,
A. K., and Rupert, A. H., “Tactile Situation Aware-
ness System Flight Demonstration,” Technical report,
Army Aeromedical Research Laboratory, Fort Rucker,
AL, 2004.
27. Wolf, F., and Kuber, R., “Developing a head-
mounted tactile prototype to support situational
awareness,” International Journal of Human-
Computer Studies, Vol. 109, 2018, pp. 54–67.
DOI: 10.1016/j.ijhcs.2017.08.002
28. Jennings, S., Cheung, B., Rupert, A., Schultz, K., and
Craig, G., “Flight-Test of a Tactile Situational Aware-
ness System in a Land-based Deck Landing Task,” Pro-
ceedings of the Human Factors and Ergonomics Society
48th Annual Meeting, New Orleans, LA, September 20-
24, 2004. DOI: 10.1177/154193120404800131
29. Morcos, M. T., Fishman, S. M., Cocco, A., Saetti, U.,
Berger, T., Godfroy-Cooper, M., and Bachelder, E. N.,
“Full-Body Haptic Cueing Algorithms for Augmented
Pilot Perception in Degraded/Denied Visual Environ-
ments,” Proceedings of the 79th Annual Forum of the
Vertical Flight Society, West Palm Beach, May 15-18,
2023. DOI: 10.4050/F-0079-2023-18072
30. Morcos, M. T., Fishman, S. M., Saetti, U., Berger, T.,
Godfroy-Cooper, M., and Bachelder, E. N., “Spatial Au-
dio Cueing Algorithms for Augmented Pilot Perception
in Degraded/Denied Visual Environments,” Proceedings
of the 49th European Rotorcraft Forum, Bückeburg,
Germany, September 5–7, 2023.
31. Deneve, S., and Pouget, A., “Bayesian multisen-
sory integration and cross-modal spatial links,” Jour-
nal of Physiology, Vol. 98, (1-3), 2004, pp. 249–258.
DOI: 10.1016/j.jphysparis.2004.03.011
32. Angelaki, D. E., Gu, Y., and DeAngelis, G. C.,
“Multisensory integration: psychophysics, neuro-
physiology, and computation,” Current Opinion in
Neurobiology, Vol. 19, (4), 2009, pp. 452–458.
DOI: 10.1016/j.conb.2009.06.008
33. McRuer, D. T., Clement, W. F., Thompson, P. M., and
Magdaleno, R. E., “Minimum Flying Qualities. Volume
2: Pilot Modeling for Flying Qualities Applications,”
Technical report, Systems Technology, Inc., Hawthorne,
CA, 1990.
34. Klyde, D. H., Ruckel, P., Pitoniak, S. P., Schulze,
P. C., Rigsby, J., Xin, H., Brewer, R., Horn, J., Fegely,
C. E., Conway, F., Ott, C. R., Fell, W. C., Mulato, R.,
and Blanken, C. L., “Piloted Simulation Evaluation of
Tracking MTEs for the Assessment of High-Speed Han-
dling Qualities,” Proceedings of the 74th Annual Forum
of the American Helicopter Society, Phoenix, AZ, May
14-17, 2018.
35. Hess, R. A., “Simplified approach for modelling pi-
lot pursuit control behaviour in multi-loop flight con-
trol tasks,” Proceedings of the Institution of Me-
chanical Engineers, Part G: Journal of Aerospace
Engineering., Vol. 220, (2), 2006, pp. 85–102.
DOI: 10.1243/09544100JAERO33
36. Miller, J. D., Godfroy-Cooper, M., and Wenzel, E. M.,
“Using Published HRTFS with Slab3D: Metric-Based
Database Selection and Phenomena Observed,” Pro-
ceedings of the 20th International Conference on Audi-
tory Display (ICAD), New York, NY, June 22-25, 2014.
DOI: 10.13140/2.1.1379.6489
37. McRuer, D. T., and Jex, H. R., “A Review of Quasi-
Linear Pilot Models,” IEEE Transactions on Human
Factors in Electronics, Vol. 8, (3), 1967, pp. 231–249.
DOI: 10.1109/THFE.1967.234304
38. Bachelder, E., Berger, T., Aponso, B., and Lusardi, J.,
“Method for Predicting Multi-Axis Task Performance
and Handling Qualities Rating for a Coaxial-Compound
and Tiltrotor Rotorcraft with Validating Data,” Pro-
ceedings of the 79th Annual Forum of the Vertical
Flight Society, West Palm Beach, FL, May 16-18, 2023.
DOI: 10.4050/F-0079-2023-18069
39. Godfroy-Cooper, M., Miller, J. D., Bachelder, E. N., and
Wenzel, E. M., “Isomorphic Spatial Visual-Auditory
Displays for Operations in DVE for Obstacle Avoid-
ance,” Proceedings of the 44th European Rotorcraft
Forum, Delft, Netherlands, September 19-20, 2018.
DOI: 10.4050/F-0075-2019-14563
40. Tischler, M. B., and Remple, R. K., Aircraft and Rotor-
craft System Identification: Engineering Methods and
Flight Test Examples, Second Edition, AIAA, 2012.
41. Godfroy-Cooper, M., Sandor, P. M. B., Miller, J. D., and
Welch, R. B., “The interaction of vision and audition
in two-dimensional space,” Frontiers in Neuroscience,
Vol. 9, 2015. DOI: 10.3389/fnins.2015.00311
42. Bachelder, E., Lusardi, J., and Aponso, B., “Neu-
romuscular Response Comparison for Center and Side
Stick Positions,” Proceedings of the 78th Annual Forum
of the Vertical Flight Society, Fort Worth, TX, May 10-
12, 2022. DOI: 10.4050/F-0078-2022-17548
43. McRuer, D., Graham, D., and Reisener, W., “Human
Pilot Dynamics in Compensatory Systems,” Technical
report, AFFDL-TR-65-15, Air Force Flight Dynam-
ics Laboratory, Air Force Systems Command, Wright-
Patterson Air Force Base, OH, 1965.
44. Anon., “Aeronautical Design Standard Performance
Specification, Handling Qualities Requirements for Mil-
itary Rotorcraft,” Technical report, U.S. Army Aviation
and Missile Command, Aviation Engineering Direc-
torate, 2000.