Spatial Audio Cueing Algorithms for Augmented Pilot Perception in
Degraded/Denied Visual Environments
Michael T. Morcos
Graduate Research Assistant
Spencer M. Fishman
Graduate Research Assistant
Umberto Saetti
Assistant Professor
Department of Aerospace Engineering
University of Maryland
College Park, MD 20740
Tom Berger
Flight Controls Group Lead
U.S. Army Combat Capabilities Development
Command Aviation & Missile
Moffett Field, CA 94035
Martine Godfroy-Cooper
Senior Research Psychologist
Edward N. Bachelder
Senior Research Engineer
San Jose State University
Moffett Field, CA 94035
ABSTRACT
This paper demonstrates the development, implementation, and testing of spatial audio cueing algorithms for
augmented pilot perception. Cueing algorithms are developed for roll-axis compensatory tracking tasks where
the pilot acts on the displayed error between a desired input and the comparable vehicle output motion to pro-
duce a control action. The error is displayed to the pilot using three different cueing modalities: visual, audio,
and combined visual and audio. For the visual and combined visual and audio modalities, visual cues are also
considered in degraded visual environments (DVE). Spatial audio algorithms that are based on a proportional-
derivative (PD) compensation strategy on the tracking error are found to provide satisfactory pilot vehicle
system (PVS) performance for the task in consideration when using audio feedback only (no visual cues) and to
improve PVS performance in DVE when using combined visual and audio feedback. These results are in line
with previous studies on the use of full-body haptics for augmented pilot perception. The combination of these
results indicates that the use of secondary sensory cues such as full-body haptics and spatial audio to augment pilot perception can lead to improved/partially-restored PVS performance when primary sensory cues like
vision are impaired or denied.
INTRODUCTION
In symbiotic piloting, a human pilot shares perception and
control authority with an artificial intelligence (AI) agent to
control the motion of a vehicle, acting as a symbiotic organ-
ism (Fig. 1). Here, a vehicle is understood as any generic n-DOF
machine (e.g., airplanes and helicopters, drones, etc.) that
can move across regions of physical space. The vehicle may
physically host or be teleoperated by a human pilot, and its
dynamics are augmented by an AI agent with human-like per-
ception and actuation dynamics. Current perception models
for symbiotic human-machine piloting of vehicles are based
on the dominant visual (i.e., sight) and vestibular (i.e., equi-
librium) cues, but neglect less dominant perception cues, such
Presented at the 49th European Rotorcraft Forum, Bückeburg, Germany, September 5–7, 2023.
as somatosensory cues (e.g., haptics) or auditory cues (e.g.,
3-D audio), and interactions between primary and secondary
sensory channels. As such, there is limited understanding on
shared human-machine perception solutions for vehicle pilot-
ing when the symbiotic system operates with denied/impaired
sensory channels (e.g., degraded visual environment), de-
nied/impaired interactions between sensory channels, or with
primary sensory channels augmented with traditionally sec-
ondary perception cues.
Denial or impairment of perceptual channels may arise from
environmental conditions (e.g., weather), factors limiting ma-
chine abilities (e.g., sensor failure), or human pilot abili-
ties (e.g., temporary or permanent disabilities, or fatigue).
Within the context of manned piloting of aircraft, modeling of
secondary sensory cues could help develop cueing strategies
to enable emergent pilot vehicle system (PVS) performance,
Fig. 1: Symbiotic pilot vehicle system.
or to compensate for pilot spatial disorientation, which may be
caused by temporary or permanent loss/malfunction of their
vestibular and/or visual systems.
Within the context of secondary sensory cueing, previous
studies have shown that full-body tactile cueing of the relative
motion of a hovering helicopter with respect to a target fixed
in space is an effective strategy to increase spatial awareness
and handling qualities, and decrease pilot workload in and out
of degraded visual environment (DVE). In these studies, the
relative linear and angular positions, velocities, and accelera-
tions were cued to the pilot through an array of tactors, and
using tactor pulse patterns with varying amplitude, frequency,
and waveform (Refs. 1–4). Recently, full-body haptics in the
form of electrical muscular stimulation (EMS) was investi-
gated for augmented pilot perception in degraded/denied vi-
sual environments (Ref. 5). Contrary to previous studies, the haptic cueing algorithms in that work were based on a continuous cueing strategy rather than on a binning strategy. Similar
strategies to those of Refs. 1,2,5 could be used in the proposed
investigation and adapted through spatial audio cueing.
As such, the objectives of the present investigation are three-
fold. The first objective is to extend the crossover model (Ref.
6) to secondary sensory cueing paths like spatial audio cue-
ing. The second objective is to develop efficient spatial audio
cueing strategies that can be used in compensatory tracking
experiments (Refs. 6, 7) in place of, or in combination with,
visual cues. The third objective is to test spatial audio cue-
ing strategies and identify the corresponding PVS dynamics.
Comparisons between visual, audio, and combined visual and
audio cueing will help understand the potential differences
in pilot equalization when leveraging this secondary sensory
path to control the motion of an aircraft. Moreover, these ex-
periments will shed light on potential benefits of spatial audio
cueing.
The paper begins with a discussion of the overall methodol-
ogy adopted in this investigation, including an overview of pi-
lot modeling. This is followed by a description of the equip-
ment adopted for the experimental studies. Next, the devel-
opment and implementation of spatial audio cueing strategies
that make use of proportional-derivative compensation are de-
scribed in detail. Discussions on experiment design and para-
metric identification from input-output experimental data fol-
lows. Results feature both time- and frequency-domain anal-
yses of the experimental data. Quantitative data from the ex-
periments is compared to qualitative data from pilot feedback.
Final remarks summarize the overall findings of the study and
future developments are identified.
METHODOLOGY
Overview
The present effort focuses on closed-loop compensatory
tracking tasks in which the pilot acts on the displayed error
e between a desired command input i and the comparable vehicle output motion m to produce a control action c. Historically, information about the error e was presented to the pilot through visual displays, an example of which is shown in Fig. 2 (Ref. 7). In Fig. 2a, the green bar with the inner and outer
reticles is the aircraft pitch attitude indicator m, whereas the orange bar represents the commanded attitude i. This kind of display is used for pursuit tracking tasks (Ref. 8). The difference between these two is the error that the pilot attempts to minimize within the defined desired and adequate performance
constraints. These constraints are given by the inner and outer
reticles, respectively. Figure 2b shows another kind of visual
display used for compensatory tracking tasks, referred to as the bow tie display. This particular display is used to cue roll and pitch attitude errors simultaneously. For pitch-axis eval-
uations with this display, the objective is to capture and hold
the green dot within the magenta circles for each commanded
pitch attitude. For roll-axis evaluations with the same display,
the objective is to capture and hold the green line within the
bow tie bounds for each commanded bank angle. The idea
behind this research is to replace and/or augment these vi-
sual displays with spatial audio displays. More specifically,
the auditory display will make use of 3-D spatialized sound
where headphones cue stereo sound making use of right and
left channels.
(a) Pitch axis pursuit tracking display.
(b) Bow tie display for roll and pitch compensatory tracking tasks.
Fig. 2: Visual displays.
Pilot Vehicle System Modeling
As the basis for the prediction of PVS performance and iden-
tification of the PVS dynamics, a pilot model structure needs
to be specified. Due to the preliminary nature of the investi-
gation, a crossover model (Ref. 9) is assumed as it constitutes
a simple yet powerful way to represent the combined PVS dy-
namics. The PVS dynamics in a compensatory tracking task
is shown qualitatively in Fig. 3.
In this study, the vehicle and display dynamics are all com-
bined into the controlled element with a transfer function
Yc(s). The portion of the pilot’s control action linearly cor-
related with the input is represented by the quasi-linear de-
scribing function Yp(s), which also includes the effects of the
manipulator characteristics. In a compensatory control task,
it was shown through extensive research (see, e.g., Ref. 6)
that the human pilot adapts his or her dynamics so that near
the crossover frequency ω_c (i.e., where |Y_p Y_c| = 1, or 0 dB) the open-loop dynamics are given by:

Y_OL(s) = Y_p Y_c = (ω_c/s) e^(-s τ_e)    (1)

where τ_e is the effective time delay, including transport delays
and neuromuscular lags. It is worth noting that the crossover
frequency is equivalent to the loop gain and accounts for the
pilot’s adaptive compensation for the controlled element gain.
The aircraft dynamics is assumed to have the following gen-
eral form:
Y_c(s) = K_c / (s(T_I s + 1))    (2)
and is representative of the roll attitude response to the pi-
lot lateral stick. The simplest pilot describing function form
for this particular aircraft dynamics, which corresponds to the
open-loop crossover model, is:
Y_p(s) = K_p (T_L s + 1) e^(-s τ_e)    (3)

where K_p is the pilot static gain and T_L is the pilot lead time constant. It is worth mentioning that Ref. 6 suggests the pilot lead time constant to be approximately equal to the lag time constant T_I of the aircraft dynamics. This way, the pilot can adjust his or her gain K_p to place the crossover frequency where re-
quired to complete the task. Thus, given known aircraft dy-
namics, the human pilot parameters to be identified for each
cueing modality are K_p and τ_e. Moreover, particular focus is placed on estimating and comparing ω_c for each cueing
modality.
If a sensory modality and/or vehicle configuration yields a high crossover frequency, i.e., ω_c > 4 rad/s (high performance), the pilot's neuromuscular mode will start to influence
the open-loop response such that the latter can become much
flatter than what the crossover model predicts. In this case,
the pilot model proposed in Ref. 10 should be used, which
accounts for the neuromuscular dynamics.
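As a concrete illustration of Eqs. (1)–(3), the open-loop crossover model can be evaluated on the imaginary axis. The Python sketch below uses illustrative values ω_c = 2 rad/s and τ_e = 0.35 s (not identified results from this study):

```python
import cmath
import math

def crossover_model(w, wc=2.0, tau_e=0.35):
    """Open-loop crossover model Y_OL(jw) = (wc/s) e^(-tau_e s), s = jw (Eq. 1)."""
    s = 1j * w
    return (wc / s) * cmath.exp(-tau_e * s)

# At w = wc the magnitude is 1 (0 dB) by definition of crossover; the phase
# is the -90 deg integrator lag plus the delay contribution -tau_e*wc.
Y = crossover_model(2.0)
print(abs(Y))                         # about 1.0
print(math.degrees(cmath.phase(Y)))   # about -130.1 deg
```

The pure delay lowers the phase margin without affecting the gain, which is why higher identified τ_e values correspond to a less responsive pilot vehicle system.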
EQUIPMENT
Spatial audio cueing is achieved through a closed-ear gaming
headset (Logitech Pro-G 50 mm), shown in Fig. 4. This head-
set is equipped with a hybrid mesh PRO-G driver measuring
1.97 inches (50 mm), ensuring good sound quality across a
frequency range of 20 Hz to 20 kHz. With an impedance of
35 Ohms and a sensitivity of 91.7 dB SPL at 1 mW and 1 cm,
this headset provides a balanced and immersive listening experience.
The controller used for the tracking task is a Logitech X52 Pro
joystick, shown in Fig. 5. The joystick has three degrees of
freedom: left/right, fore/aft, and twist. The joystick adopts a
base spring with constant stiffness.
AUDIO CUEING STRATEGY
A hybrid continuous-binning cueing of the roll attitude track-
ing error and/or its time derivative is found to be an effective
audio cueing strategy. The general form of this strategy is a
Fig. 3: Pilot vehicle system in a compensatory tracking task.
Fig. 4: Logitech Pro Gaming Headset
(a) Front. (b) Side.
Fig. 5: Logitech X52 Pro joystick.
proportional-derivative (PD) compensation on the roll attitude
tracking error:
M = K_P e_φ + K_D ė_φ    (4)

where M is a parameter representing the displayed error and e_φ is the roll attitude tracking error. The attitude tracking error is defined as the difference between the commanded and displayed roll angles, e_φ = φ_cmd - φ_dis. Within the context of Fig. 3, the displayed error e corresponds to e_φ, the desired command input i is φ_cmd, and the vehicle output motion m is represented by φ. Note that in this compensatory tracking task φ_cmd = 0, such that effectively e_φ = -φ_dis. The displayed bank angle is given by the difference between the actual bank attitude and some disturbance, such that φ_dis = φ - φ_SOS. The
disturbance is given by a sum of sines (SOS) signal and will
be discussed later in the paper. Additionally, KPand KDare
the proportional and derivative gains, respectively. It is worth
noting that the audio gain is adjusted for each subject based on
their perception of sound. The maximum gain corresponds to the maximum stimulus that the pilot perceives as neither annoying nor harmful to the ear. The proportional gain is tuned
based on the error bounds for Desired and Adequate perfor-
mance (discussed later in the paper). The proportional gain
K_P is chosen to give M = 50% at the Desired error threshold of 5 deg and M = 100% at the Adequate error tolerance threshold of 10 deg. This implies K_P = 10 %/deg and is found by:

K_P = (SR_max - SR_min) / (e_φ,Ade - e_φ,Des)    (5)

where SR_max = 100% and SR_min = 50%. This way, the displayed error M will reach 100% for e_φ = e_φ,Ade. The deriva-
tive gain is tuned by trial and error. An integral gain was not
found to yield any particular benefits to the cueing strategy
and was therefore discarded. The azimuth location and sound
wave frequency to be generated depend, respectively, on the
sign and the magnitude of M. When M > 0, a sound signal is played at the right ear, whereas for M < 0 a sound signal is played at the left ear. The parameter M is then used to cal-
culate the frequency of the sound generated by the OpenAL
library. The sample rate is 44100 Hz and the pulse duration is
set to 0.03 sec. Based on this strategy, the pilot will perceive a
sound in the direction of the error or, equivalently, in the direction of the control action to be taken to reduce the error. The sound signal amplitude is given by:

A = sin(2π ω t)    (6)

where ω is the sound signal frequency. The sound frequency
is chosen via a binning strategy consisting of five frequency
bins. These bins are chosen so as to make the jump between
each bin easily perceivable by the pilot. The relation between
the displayed frequency and Mis exponential for the three
middle levels and constant for the lower and upper levels as
described in Table 1. Moreover, sound signals are cued in pulses for low and mid values of M with a fixed pulse width
and a linearly varying inter-pulse interval. For higher values
of M, the inter-pulse interval is zero (i.e., no time gap be-
tween pulses). As such, the refresh rate of the audio signal is
the summation of the pulse width and the inter-pulse interval,
resulting in an adaptive proportional relation with respect to
the value of the error and error rate.
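The mapping just described can be sketched in Python. This is an illustrative reconstruction of the PD law (Eq. 4), the Table 1 frequency binning, and the pulse sampling (Eq. 6), not the authors' C/C++ implementation; the inter-pulse interval is taken to decrease linearly with |M|, consistent with the 0.11 sec interval quoted in the worked example for |M| = 45:

```python
import math

KP, KD = 10.0, 4.0  # %/deg and %-s/deg (KP from Eq. 5 with the 5/10 deg bounds)

def displayed_error(e_phi, e_phi_dot, kp=KP, kd=KD):
    """PD compensation on the roll attitude tracking error, M (Eq. 4)."""
    return kp * e_phi + kd * e_phi_dot

def audio_cue(M):
    """Map M to (ear, frequency [Hz], inter-pulse interval [s]) per Table 1."""
    ear = "right" if M > 0 else "left"  # sign of M selects the azimuth
    m = abs(M)
    if m < 5:
        freq = 110.0
    elif m < 30:
        freq = 220.0 * 2 ** (m / 50.0)
    elif m < 100:
        freq = 1000.0 * 2 ** ((m - 30.0) / 50.0)
    elif m < 150:
        freq = 2880.0 * 2 ** ((m - 100.0) / 50.0)
    else:
        freq = 6880.0
    # Linearly shrinking gap between pulses; continuous tone for large |M|.
    ipi = 0.0 if m >= 65 else -0.002 * m + 0.2
    return ear, freq, ipi

SAMPLE_RATE = 44100   # Hz, as stated in the text
PULSE_WIDTH = 0.03    # sec

def pulse_samples(freq):
    """One 0.03 sec pulse of A = sin(2*pi*freq*t) at 44.1 kHz (Eq. 6)."""
    n = int(SAMPLE_RATE * PULSE_WIDTH)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]
```

With K_P = 9, e_φ = -5 deg, and ė_φ = 0, `audio_cue(displayed_error(-5.0, 0.0, kp=9.0))` returns a left-ear cue at 1231.14 Hz with a 0.11 sec inter-pulse interval, matching the worked example in the text.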
A bow tie display for the compensatory tracking task is used
this research. Three visual indicators are used as symbology
and are shown in Fig. 6. The green line corresponds to the
displayed bank angle. The orange line represents the desired
bank angle. The magenta lines indicate the boundaries for
Desired and Adequate performance. More specifically, the
inner portion boundary of the magenta indicator corresponds
to the Desired performance boundary, whereas the outer por-
tion corresponds to the Adequate performance threshold. The
displayed bank attitude is the difference between the actual
roll attitude and the disturbance from the sum-of-sines (SOS)
forcing function described in the next section:
φ_dis = φ - φ_SOS    (7)
Note that the command indicator value corresponds to the
fixed horizon reference.
Fig. 6: Bow tie display
Consider the case shown in Fig. 6, where the displayed bank
angle is φ_dis = 5 deg. Also, assume zero roll rate error, i.e., ė_φ = 0. Then, a proportional gain K_P = 9 yields M = -45%. According to the binning strategy, this results in a Level 3 frequency range, with ω = 1231.14 Hz and an inter-pulse interval equal to 0.11 sec. Since M < 0, the sound signal is cued to the pilot in their left ear (i.e., with an azimuth ψ = -90 deg). This prompts the pilot to make a left stick correction to compensate for the error and bring the aircraft back to level. This cueing strategy is summarized in Table 1. This audio cueing strategy builds on the work in Ref. 11.
EXPERIMENT DESIGN
The set of experiments in this study involves a precision, non-aggressive, closed-loop compensatory tracking task in the roll axis only, where the forcing function (disturbance/command input) is chosen to be a sum of sines (SOS), as per Ref. 7. The SOS forcing function is used to drive the
compensatory tracking task through which the pilot attempts
to minimize the displayed error within desired performance
constraints. A Fibonacci series-based SOS input is designed
to emphasize the frequency range that encompasses key vehi-
cle dynamics and typical human pilot control action, ranging
from 0.3 to approximately 6 rad/s. The scoring time is 60 seconds, preceded by a 10 sec ramp-up and followed by a 5 sec ramp-down period. Five trials are conducted for each cue-
ing modality. Each of these five trials uses a different SOS sig-
nal, where the sum of sines are based on randomly-generated
phasing angles and stored offline. The five SOS time histories
are the same across each cueing modality and are presented to
the test subject in the same order. Experiments make use of
four different subjects, one of which is a test pilot. Each sub-
ject performed one practice run before recording the five dif-
ferent patterns for each modality tested. Desired performance
is defined such that the roll attitude tracking error is less than 5 deg for at least 50% of the scoring time. Ade-
quate performance requires the roll attitude tracking error to
be less than 10 deg for at least 75% of the scoring time. These
requirements are summarized in Table 2.
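A minimal sketch of the SOS disturbance and the Desired/Adequate scoring just defined is given below. The specific component frequencies, amplitudes, and seed are illustrative assumptions, since the paper only states the 0.3–6 rad/s band, random phasing, and the Table 2 bounds:

```python
import math
import random

def sos_disturbance(t, freqs, amps, phases):
    """phi_SOS(t) as a sum of sines with random phasing.

    The frequency/amplitude sets passed in are illustrative, not the
    paper's Fibonacci series-based design."""
    return sum(a * math.sin(w * t + p) for w, a, p in zip(freqs, amps, phases))

def meets_desired(errors_deg, bound_deg=5.0, min_frac=0.50):
    """Desired performance: |e_phi| < 5 deg for at least 50% of scoring time.

    Pass bound_deg=10.0, min_frac=0.75 for the Adequate requirement."""
    within = sum(1 for e in errors_deg if abs(e) < bound_deg)
    return within / len(errors_deg) >= min_frac

random.seed(1)
freqs = [0.3, 0.8, 1.3, 2.1, 3.4, 5.5]   # rad/s, spread over the stated band
amps = [2.0, 1.5, 1.2, 1.0, 0.8, 0.6]    # deg, illustrative
phases = [random.uniform(0.0, 2 * math.pi) for _ in freqs]
samples = [sos_disturbance(0.01 * k, freqs, amps, phases) for k in range(6000)]
print(meets_desired(samples))
```

Storing the randomly phased SOS time histories offline, as the authors do, guarantees that each cueing modality sees the same five disturbance realizations in the same order.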
The joystick is positioned between the legs of the test subject, who is seated with their eyes at approximately 1 m from
the computer screen. The central positioning of the joystick is
justified by the intent to replicate the use of a cyclic stick in a conventional helicopter. By the same rationale, the vertical
position of the joystick is adjusted to have the test subjects rest
their elbow on their thigh when operating the joystick. This
setup is shown in Fig. 7.
Fig. 7: Experimental setup.
The following five cueing modalities are tested:
1. Visual: The pilot performs the compensatory tracking
task by looking at the visual display only, where the vi-
sual display is the bow tie display shown in Fig. 6.
2. Visual DVE: Here, visual cues are disturbed by the use
of foggles to simulate DVE.
3. Visual + Audio: Information about the tracking error is
provided to the pilot via the visual (bow tie) display and
via spatial audio cueing. The cueing adopts proportional-
derivative compensation.
4. Visual DVE + Audio: This is similar to the cueing
modality above, but visual cues are disturbed by the use
of foggles.
5. Audio: Information about the tracking error is provided
to the pilot via spatial audio cueing only while the pi-
lot is blindfolded (Fig. 7). Cueing adopts proportional-
derivative compensation.
This test matrix is summarized in Table 3. It is worth not-
ing that derivative-only compensation was discarded a priori as test subjects incurred pilot-induced oscillations (PIOs).
Table 1: Audio cueing strategy.

Threshold | Frequency [Hz] | Condition on Frequency, |M| [%] | Pulse-Width [sec] | Inter-Pulse Interval [sec] | Condition on Inter-Pulse Interval, |M| [%]
Level 1 | 110 | 0 < |M| < 5 | 0.03 | -0.002|M| + 0.2 | 0 < |M| < 65
Level 2 | 220 · 2^(|M|/50) | 5 ≤ |M| < 30 | 0.03 | -0.002|M| + 0.2 | 0 < |M| < 65
Level 3 | 1000 · 2^((|M|-30)/50) | 30 ≤ |M| < 100 | 0.03 | 0 | |M| ≥ 65
Level 4 | 2880 · 2^((|M|-100)/50) | 100 ≤ |M| < 150 | 0.03 | 0 | |M| ≥ 65
Level 5 | 6880 | |M| ≥ 150 | 0.03 | 0 | |M| ≥ 65
Table 2: Desired and Adequate performance metrics.

Performance | Tracking Error Bound [deg] | Scoring Time [%]
Desired | |e_φ| < 5 | 50
Adequate | |e_φ| < 10 | 75
Additionally, proportional-only compensation was also dis-
carded a priori in light of the results of Ref. 5, in which PD compensation was shown to yield higher perfor-
mance than P-only compensation. The aircraft dynamics is
chosen to be representative of the roll dynamics of a con-
ventional utility helicopter similar to a UH-60 with Level 1
handling qualities. The specific transfer function used in this
study is:
Y_c(s) = (φ/δ_lat)(s) = L_δlat / (s(s - L_p))    (8)

where L_p = -3.5 1/s is the roll acceleration due to the roll rate and L_δlat = 0.147 1/(s²-%) is the roll acceleration due to
a lateral stick displacement. The SOS code and visual cueing
interface of Ref. 7 are developed in C/C++, along with the interfaces between these two, the spatial audio OpenAL library, and
the aircraft dynamics. The aircraft dynamics is implemented
in MATLAB®/Simulink and subsequently compiled to C/C++
code.
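The roll dynamics of Eq. (8) can be sketched with a simple forward-Euler integration. The step size, the stick input, and the sign convention L_p = -3.5 1/s (a stable roll damping) are assumptions of this illustration, not the authors' Simulink implementation:

```python
# Forward-Euler sketch of the roll dynamics of Eq. (8):
# p_dot = Lp*p + L_dlat*delta_lat,  phi_dot = p.
LP = -3.5        # roll damping [1/s], taken as stable (negative)
L_DLAT = 0.147   # roll control sensitivity [1/(s^2-%)]

def simulate_roll(delta_lat, dt=0.001, t_end=5.0):
    """Integrate roll rate p and roll attitude phi under a stick input
    delta_lat(t) [%]. Returns (phi, p) at t_end."""
    phi, p, t = 0.0, 0.0, 0.0
    while t < t_end:
        p += (LP * p + L_DLAT * delta_lat(t)) * dt
        phi += p * dt
        t += dt
    return phi, p

# A 10% lateral stick step drives the roll rate toward the steady value
# -L_DLAT * 10 / LP = 0.42 (in the angular units of the transfer function).
phi, p = simulate_roll(lambda t: 10.0)
print(phi, p)
```

The first-order roll-rate response with time constant -1/L_p ≈ 0.29 s is what makes the pilot lead T_L ≈ T_I of Eq. (3) effective for this controlled element.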
PARAMETRIC IDENTIFICATION
Parametric identification studies make use of the CIFER®
(Ref. 12) software tool. The identification procedure is based
on a two-step process. First, frequency responses of the air-
craft output to the tracking error are extracted from piloted
simulation data. Next, state-space models are identified from
the frequency response data.
Pilot Vehicle System Dynamics
Consider the PVS dynamics of Eq. (1):
Y_OL(s) = (φ/e_φ)(s) = (ω_c/s) e^(-s τ_e)    (9)

To be suitable for state-space parametric identification, these dynamics are transformed into state-space form:

φ̇ = 0 · φ + ω_c e_φ(t - τ_e)    (10)

As such, the parameters to be identified from input-output data are ω_c and τ_e.
Pilot Dynamics
Consider now expressing the PVS dynamics of Eq. (1) in
terms of the pilot and aircraft dynamics parameters:
Y_OL(s) = (φ/e_φ)(s) = Y_p Y_c = -L_δlat K_p (T_L s + 1) / (L_p s (T_I s + 1)) e^(-s τ_e)    (11)
≈ -L_δlat K_p / (L_p s) e^(-s τ_e)    (12)

since T_L is approximately equal to T_I (Ref. 6). It is worth noting that ω_c = -L_δlat K_p / L_p, such that the pilot gain K_p can be identified based on the crossover frequency ω_c identified with the model of Eq. (10).
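Conceptually, identifying ω_c and τ_e amounts to fitting Eq. (9) to frequency-response data. The toy grid search below illustrates the idea on synthetic noise-free data; it is a conceptual stand-in for, not a reproduction of, the CIFER state-space identification:

```python
import cmath

def y_ol(w, wc, tau):
    """Crossover model Y_OL(jw) = (wc/s) e^(-tau s), s = jw (Eq. 9)."""
    s = 1j * w
    return (wc / s) * cmath.exp(-tau * s)

def identify(ws, data):
    """Least-squares grid search for (wc, tau) over plausible ranges."""
    best, best_cost = None, float("inf")
    for i in range(71):                      # wc grid: 0.5 to 4.0 rad/s
        wc = 0.5 + 0.05 * i
        for j in range(61):                  # tau grid: 0.1 to 0.7 s
            tau = 0.1 + 0.01 * j
            cost = sum(abs(y_ol(w, wc, tau) - d) ** 2
                       for w, d in zip(ws, data))
            if cost < best_cost:
                best, best_cost = (wc, tau), cost
    return best

ws = [0.5 + 0.1 * k for k in range(36)]      # identification band [rad/s]
measured = [y_ol(w, 2.0, 0.35) for w in ws]  # synthetic "measured" response
wc_id, tau_id = identify(ws, measured)
print(wc_id, tau_id)
```

On this noise-free data the grid search recovers the generating parameters; real identification additionally weighs coherence and fits over the modality-dependent frequency ranges discussed in the Results.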
RESULTS
Pilot Vehicle System Performance
After tuning, the PD gains used for the experiments in this analysis are K_P = 9 %/deg and K_D = 4 %·s/deg. The proportional gain stems from Eq. (5), whereas the
derivative gain was found via trial and error. Figure 8 shows
an example time history for a non-aggressive compensatory
Table 3: Test matrix.

Experiment # | Cueing Modality | K_P [%/deg] | K_D [%·s/deg]
1 | Visual | - | -
2 | Visual DVE | - | -
3 | Visual + Audio PD | 9 | 4
4 | Visual DVE + Audio PD | 9 | 4
5 | Audio PD | 9 | 4
tracking task with an audio-only cueing modality and a PD
compensation. Figure 8a shows the time history of the roll
attitude and roll rate for the same task. Figure 8b shows the
time history of the roll attitude error and its time derivative.
[Figure: (a) roll attitude and roll rate time histories (legend: Input, Actual); (b) roll error and error rate time histories (legend: Desired, Actual); time axis 0 to 60 sec.]
Fig. 8: Example time history for a non-aggressive compensatory tracking task with audio-only cueing and PD compensation.
The Desired and Adequate performance success rate of each
test subject and for each cueing modality are shown in Fig.
9. Desired performance success rates, shown in Fig. 9a, indi-
cate that audio-only cueing adopting PD compensation (light
orange boxes) provides satisfactory performance for three out
of four test subjects, where the mean performance of test sub-
ject not meeting the requirements is still close to the satis-
factory threshold. This constitutes a significant result in that
it suggests that the task in consideration can be flown with-
out the primary sense of vision, but rather with a secondary
perceptual channel like spatial audio. Foggles appear to be
an effective strategy to simulate DVE in that desired perfor-
mance in DVE when using visual-only cueing is significantly
degraded (light green and green boxes). Combined visual and
audio cueing making use of PD compensation (pink and red
boxes) shows a slight increase in performance across all pi-
lots. However, the most pronounced increase in performance occurs when audio cueing is paired with visual cueing in DVE (red boxes). It is apparent that the performance lost from degraded
vision is partially recovered through the use of audio cueing,
particularly when using PD compensation. Similar observa-
tions can be made for Adequate performance success rates in
Fig. 9b.
Figure 10 shows the root-mean-square (RMS) of the roll at-
titude tracking error. These results further substantiate what
is observed in the Desired and Adequate success rate plots.
These results suggest that the adoption of spatial audio cueing
might be especially useful when visual perception is degraded
or denied. On the other hand, pairing audio cues with vision
when the latter is not degraded appears to yield only modest
improvements.
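For reference, the RMS tracking-error metric plotted in Fig. 10 is simply the root-mean-square of the sampled error history, e.g.:

```python
import math

def rms(xs):
    """Root-mean-square of a sampled roll-attitude tracking-error history."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

print(rms([3.0, -4.0, 3.0, -4.0]))  # sqrt(12.5), about 3.54
```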
Identified Pilot Vehicle System Dynamics
While time-domain Desired and Adequate metrics for task
performance might be representative of the performance of
the spatial audio cueing strategies developed herein, it is im-
portant to also perform a frequency-domain analysis to ob-
tain a better understanding of these spatial audio cueing laws.
As such, frequency-domain identification is performed both
on the open-loop pilot vehicle and on the pilot dynamics.
The frequency range used for the identification process is 0.5 ≤ ω ≤ 4 rad/s for non-DVE visual-only cueing tasks, 0.5 ≤ ω ≤ 3 rad/s for DVE visual and combined visual and audio cueing tasks, and 0.5 ≤ ω ≤ 2 rad/s for audio-only cueing tasks. The average cost functions associated with the identification of each cueing modality for each subject were always less than J = 100 and typically less than J = 50. An average cost function across all cost functions for each frequency response of J ≤ 100 generally reflects an acceptable
(a) Desired performance.
(b) Adequate performance.
Fig. 9: Desired and Adequate performance success rates.
Fig. 10: Root mean square (RMS) of tracking error.
level of accuracy for flight dynamics modeling, whereas a cost function of J ≤ 50 can be expected to produce a model that is nearly indistinguishable from the original both in the frequency and time domains. However, some of the individual cost functions can reach J ≈ 200 without resulting in a noticeable loss of overall predictive accuracy (Ref. 12).
Figure 11 shows the identified crossover frequency and ef-
fective time delay of the open-loop PVS dynamics. This
figure shows that audio-only cueing adopting a proportional-
derivative compensation strategy (light orange squares) is as-
sociated with high time delays and low crossover frequencies.
Visual-only feedback (light green circles) shows low time de-
lays and high crossover frequencies whereas visual-only feed-
back in DVE (green pluses) shows comparatively higher time
delays and lower crossover frequencies. Combined audio
PD and visual cueing in non-DVE conditions (pink asterisks)
shows slightly higher crossover frequencies than visual-only
cueing. The most significant gains are observed for combined
visual and audio cueing in DVE conditions (red crosses) when
compared to visual-only cueing in DVE, where time delays
are lower whereas crossover frequencies are higher.
[Figure: effective time delay τ_e [s] vs. crossover frequency ω_c [rad/s] for Visual, Visual DVE, Visual + Audio PD, Visual DVE + Audio PD, and Audio PD cueing.]
Fig. 11: Crossover frequency and effective time delay of the
PVS.
Figure 12 shows the RMS of the identified crossover fre-
quency and effective time delay across all subjects. Figure 12a shows the increase in crossover frequency afforded by audio cueing, especially in the DVE case. Similar observations apply to the improvements in effective time delay, as shown in Fig. 12b.
Figure 13 shows the identified pilot gain and transport delay
of the pilot dynamics. These results are in line with those in
Fig. 11, as the pilot gain for the pilot and aircraft models in consideration is K_p = -ω_c L_p / L_δlat. This result is obtained by equating Eq. (1) with the product of Eqs. (2) and (3).
These results are in agreement with the Desired and Ade-
quate success rates, in that audio cueing is particularly useful
in restoring lost perception when paired with degraded visual
cueing.
Frequency-Domain Analysis
This section focuses on the frequency responses identified
from input-output data. The frequency responses shown are
[Figure: bar charts of (a) RMS(ω_c) [rad/s] and (b) RMS(τ_e) [s] for Visual, Visual DVE, Visual + Audio PD, Visual DVE + Audio PD, and Audio PD cueing.]
Fig. 12: RMS of Crossover frequency and effective time
delay of the PVS.
[Figure: effective time delay τ_e [s] vs. pilot gain K_p [%/deg] for Visual, Visual DVE, Visual + Audio PD, Visual DVE + Audio PD, and Audio PD cueing.]
Fig. 13: Pilot gain and transport delay of the PVS.
those for the open-loop pilot vehicle dynamics of Eq. (1), i.e.,
for (φ/e_φ)(s). These frequency responses are averaged across all
pilots to be representative of the mean behavior of all test sub-
jects.
Figure 14 shows the open-loop PVS response for visual-only
cueing. Visual cueing in DVE is associated with a lower gain
and crossover frequency across the frequency range of interest
compared to non-DVE visual cueing. This is indicative of the
fact that foggles are an effective way to decrease PVS perfor-
mance due to the degraded visual perception of the pilot.
[Figure: magnitude [dB], phase [deg], and coherence vs. frequency [rad/s] for Visual and Visual DVE cueing.]
Fig. 14: Averaged frequency responses of the open-loop PVS
dynamics for visual-only cueing.
Figure 15 shows the open-loop PVS response for audio-only
cueing (PD compensation), non-DVE visual-only cueing, and
for combined visual and audio cueing. Audio-only cueing is
associated with a low gain across all frequencies of interest
when compared to the other strategies, along with a higher
phase lag at low frequency. Additionally, spatial audio cueing
shows a higher phase lag at frequencies above approximately 1 rad/s compared to the other cueing modalities. Com-
bined visual and audio cueing does not seem to significantly
affect the gain or phase lag at any frequency. These results
suggest that augmenting visual cueing with audio yields no
significant improvements when compared to visual-only feed-
back.
Figure 16 shows the open-loop PVS response for audio-only cueing (PD compensation), non-DVE and DVE visual-only cueing, and combined visual and audio cueing in DVE. Non-DVE visual-only cueing is associated with the highest gain among the cueing modalities shown, whereas audio-only cueing with PD compensation exhibits the lowest gain. Notably, the gain loss stemming from DVE is partially recovered through the use of audio cueing with PD compensation. This suggests that PVS performance lost because of DVE can be partially restored by leveraging secondary sensory cues. In summary, combined visual and spatial audio cueing in DVE yields a higher gain than visual feedback alone. This result further substantiates the findings obtained throughout the paper, which indicate that audio feedback may be particularly helpful when visual cues are degraded or denied.
Fig. 15: Averaged frequency responses of the open-loop PVS dynamics for audio-only cueing (PD compensation), non-DVE visual-only cueing, and combined visual and audio cueing.

Fig. 16: Averaged frequency responses of the open-loop PVS dynamics for audio-only cueing (PD compensation), non-DVE and DVE visual-only cueing, and combined visual and audio cueing in DVE.

Figure 17 shows the same responses as Fig. 16, but for test subject 3 only. According to the results in Figs. 9a and 9b, test subject 3 is the best interpreter of audio-only feedback among all test subjects. Their frequency response is shown to demonstrate that audio-only cueing can provide a response comparable to visual-only cueing, although with a higher phase lag. Moreover, for this particular test subject, the performance lost for visual-only cueing in DVE is almost entirely recovered via PD audio cueing.
Fig. 17: Test subject 3 frequency responses of the open-loop PVS dynamics for audio-only cueing (PD compensation), non-DVE and DVE visual-only cueing, and combined visual and audio cueing in DVE.
Identified Crossover Model at Discrete Frequencies
The parameters of the open-loop PVS crossover model of Eq. (1) were identified from the open-loop PVS frequency responses (e.g., Figs. 14 through 17) using only the sum-of-sines frequencies, where the forcing-function energy is concentrated. Figures 18 and 19 show the crossover frequency and equivalent delay, respectively. As expected from the performance of the different cueing methods, audio-only cueing with PD compensation has the lowest overall crossover frequency and the highest equivalent delay. The highest crossover frequency and lowest delay are seen for combined non-DVE visual and audio PD cueing, which also showed the best overall task performance.
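The identification step above amounts to a least-squares fit of the crossover model at the discrete sum-of-sines frequencies. The sketch below shows one way to do this, assuming the crossover form Yp·Yc(s) = ωc·e^(−τe·s)/s from Eq. (1); it is an illustration of the procedure, not the identification tool used in the paper (Ref. 12).

```python
import numpy as np

def fit_crossover(w, H):
    """Least-squares fit of the crossover model
        Yp*Yc(s) = wc * exp(-tau*s) / s
    to a measured frequency response H evaluated at the
    sum-of-sines frequencies w (rad/s). Returns (wc, tau)."""
    # Magnitude: |H| = wc / w, so wc is the geometric mean of w*|H|
    wc = np.exp(np.mean(np.log(w * np.abs(H))))
    # Phase: angle(H) = -pi/2 - tau*w (unwrapped, in rad);
    # least-squares slope through the origin gives tau
    ph = np.unwrap(np.angle(H))
    tau = -np.sum(w * (ph + np.pi / 2)) / np.sum(w ** 2)
    return wc, tau
```

Restricting the fit to the forcing frequencies avoids weighting the result by frequency points where the coherence, and hence the response estimate, is poor.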
Fig. 18: Identified open-loop PVS crossover frequency, ωc [rad/s], for each cueing modality (Pilots 1–4).
Fig. 19: Identified open-loop PVS equivalent time delay, τe [s], for each cueing modality (Pilots 1–4).

The PVS stability margins were calculated from the identified open-loop PVS crossover model. Figures 20 and 21 show the gain and phase margins of the open-loop PVS for the different cueing methods, respectively. The mean stability margins are similar for all cases (GM ≈ 5 dB and PM ≈ 35 deg), which are typical values for a PVS (Ref. 13) and similar to the values assumed in the ADS-33 (Ref. 14) piloted bandwidth requirements (6 dB and 45 deg). It is clear that the subjects adjust their gain (PVS crossover frequency) based on the equivalent loop delay such that they operate with similar stability margins in all cases. Increasing their gain further to try to achieve better performance would result in reduced stability margins, a more oscillatory closed-loop response, and ultimately worse performance.
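For the crossover model of Eq. (1), the margins follow in closed form: the gain crossover sits at ω = ωc, so PM = 90° − ωcτe (converted to degrees), while the phase reaches −180° at ω180 = π/(2τe), giving GM = 20·log10(ω180/ωc). A minimal sketch, with illustrative parameter values (not the identified ones) chosen to land near the reported mean margins:

```python
import numpy as np

def crossover_margins(wc, tau):
    """Gain and phase margins of the open-loop crossover model
    L(s) = wc*exp(-tau*s)/s. Returns (GM in dB, PM in deg)."""
    # Phase margin: at |L| = 1 (i.e., w = wc), phase = -90 deg - wc*tau
    pm = 90.0 - np.degrees(wc * tau)
    # Gain margin: phase reaches -180 deg at w180 = (pi/2)/tau
    w180 = (np.pi / 2.0) / tau
    gm = 20.0 * np.log10(w180 / wc)
    return gm, pm

# Illustrative values: wc = 2 rad/s, tau = 0.44 s gives margins
# close to the typical GM ~ 5 dB, PM ~ 35-40 deg range
gm, pm = crossover_margins(2.0, 0.44)
```

The closed form also makes the gain/delay trade-off explicit: for a fixed equivalent delay, raising the crossover frequency (pilot gain) erodes both margins, which is consistent with the subjects settling on similar margins across modalities.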
Fig. 20: Identified open-loop PVS gain margin, GM [dB], for each cueing modality (Pilots 1–4, with mean).
Test Pilot Qualitative Feedback
Fig. 21: Identified open-loop PVS phase margin, PM [deg], for each cueing modality (Pilots 1–4, with mean).

This section reports the feedback from the test subject who is a test pilot. The same test pilot provided comments on the use of full-body haptic feedback in Ref. 5. The test pilot offered the following comments about the use of 3-D audio cueing:
1. The sounds were slightly confusing at the beginning, but after a while they became understandable.
2. The frequency levels were distinguishable against the attitude error and error rate. Also, the decrease in the inter-pulse interval was noticeable as the error increased rapidly. The high pitch was very helpful when exceeding the adequate error tolerance, acting as an alert after a warning.
3. The 3-D audio was very helpful in degraded vision with foggles for knowing the initial direction of the error and its rate. With spatial audio, one can immediately move the stick based on the direction and frequency heard when the visuals are not clear enough to trust.
4. Sometimes the direction switched very quickly between left and right, accompanied by a high frequency, giving the impression of poor tracking, but that is not necessarily correct. That is because the rate feedback can change very quickly according to the sum-of-sines signal.
5. The sound that occurs at zero or low errors (when the indicator is at or close to the reference) changes slowly, and the bass (low frequency) is not motivating enough to move the stick; the pilot may wait and focus on larger error signals. This harms the desired success rate. However, if the sound were quicker and higher pitched, the pilot might overshoot the needed stick action.
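The sonification the pilot describes (pitch rising with the PD-weighted error, inter-pulse interval shrinking as the error grows, and left/right direction encoding the sign of the error) can be sketched as below. All ranges, scale factors, and the function itself are illustrative assumptions; the paper's actual audio parameters are not reproduced here.

```python
import numpy as np

def audio_cue(error, error_rate,
              f_min=200.0, f_max=1000.0,    # assumed pitch range [Hz]
              ipi_max=0.5, ipi_min=0.05,    # assumed inter-pulse interval [s]
              e_max=10.0, edot_max=20.0):   # assumed normalization bounds
    """Illustrative PD sonification of roll tracking error (deg) and
    error rate (deg/s). Pitch rises with the PD-weighted error,
    pulses come faster as the cue magnitude grows, and left/right
    panning encodes the direction of the error."""
    # PD-weighted cue magnitude, normalized and saturated to [0, 1]
    u = np.clip(abs(error) / e_max + abs(error_rate) / edot_max, 0.0, 1.0)
    freq = f_min + (f_max - f_min) * u       # tone pitch [Hz]
    ipi = ipi_max - (ipi_max - ipi_min) * u  # inter-pulse interval [s]
    pan = -1.0 if error < 0 else 1.0         # -1 = left, +1 = right
    return freq, ipi, pan
```

Such a mapping also exposes the behavior noted in comments 4 and 5: a fast-changing error rate flips the panning and pitch quickly, while near-zero errors sit at the slow, low-pitched end of the scale.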
CONCLUSIONS
Spatial audio cueing algorithms were developed, implemented, and tested for augmented pilot perception. These audio cueing algorithms are based on 3-D audio obtained via a commercial, off-the-shelf closed-ear headset. Cueing algorithms were developed for roll-axis compensatory tracking tasks where the pilot acts on the displayed error between a desired input and the comparable vehicle output motion to produce a control action. The tracking error was displayed to the pilot using three different cueing modalities: visual, audio, and combined visual and audio. For the visual and combined visual and audio modalities, visual cues were also considered in degraded visual environments (DVE). Experiments involving four test subjects, one of whom was a test pilot, were conducted to gather quantitative data and qualitative feedback for analyzing the performance of the audio cueing algorithms. Time- and frequency-domain analyses of the test data were performed.
Based on this work, spatial audio cueing algorithms that apply a proportional-derivative (PD) compensation strategy to the tracking error were found to provide satisfactory PVS performance for the task in consideration when using audio feedback only (no visual cues), and to improve PVS performance, especially in DVE, when using combined visual and audio feedback. These results indicate that the use of secondary sensory cues such as 3-D sounds to augment vehicle perception can lead to improved or partially restored PVS performance when primary sensory cues like vision are impaired or denied. These results are largely in agreement with previous studies involving full-body haptic cueing (Ref. 5).
Future work will focus on extending the haptic and audio cueing algorithms to aggressive and non-aggressive tasks with varying dynamic characteristics of the aircraft (e.g., responses representative of Level 2 and Level 3 handling qualities), to other axes singularly (e.g., pitch), and to multi-axis tracking tasks (e.g., roll and pitch) with and without cross-coupling between the axes. Additionally, stabilization tasks for the roll axis only, the pitch axis only, and combined roll and pitch axes will also be considered. Visual, full-body haptic, and spatial audio cueing will be combined to assess the interactional dynamics of these three sensory paths. The pilot workload will be estimated for each cueing modality using the approach in Ref. 10. All of these measures will be compared to address potential differences in pilot equalization and workload as a function of cueing strategy.
ACKNOWLEDGMENTS
This research was partially funded by the U.S. Government under agreement no. N000142312067. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Combat Capabilities Development Command Aviation & Missile Center or the U.S. Government.
REFERENCES
1McGrath, B. J., “Tactile Instrument for Aviation,” Technical report, NAMRL Monograph 49, Naval Aerospace Medical Research Laboratory, Pensacola, FL, 2000.
2McGrath, B. J., Estrada, A., Braithwaite, M. G., Raj, A. K.,
and Rupert, A. H., “Tactile Situation Awareness System Flight
Demonstration,” Technical report, Army Aeromedical Re-
search Laboratory, Fort Rucker, AL, 2004.
3Wolf, F. and Kuber, R., “Developing a head-mounted tactile prototype to support situational awareness,” International Journal of Human-Computer Studies, Vol. 109, 2018, pp. 54–67.
doi: 10.1016/j.ijhcs.2017.08.002
4Jennings, S., Cheung, B., Rupert, A., Schultz, K., and
Craig, G., “Flight-Test of a Tactile Situational Awareness Sys-
tem in a Land-based Deck Landing Task,” Proceedings of the
Human Factors and Ergonomics Society 48th Annual Meet-
ing, September 20-24, 2004.
doi: 10.1177/154193120404800131
5Morcos, M. T., Fishman, S. M., Cocco, A., Saetti, U., Berger, T., Godfroy-Cooper, M., and Bachelder, E. N., “Full-Body Haptic Cueing Algorithms for Augmented Pilot Perception in Degraded/Denied Visual Environments,” Proceedings of the 79th Annual Forum of the Vertical Flight Society, West Palm Beach, FL, May 15-18, 2023.
doi: 10.4050/F-0079-2023-18072
6McRuer, D. T., Clement, W. F., Thompson, P. M., and Magdaleno, R. E., “Minimum Flying Qualities. Volume 2: Pilot Modeling for Flying Qualities Applications,” Technical report, Systems Technology, Inc., Hawthorne, CA, 1990.
7Klyde, D. H., Ruckel, P., Pitoniak, S. P., Schulze, P. C.,
Rigsby, J., Xin, H., Brewer, R., Horn, J., Fegely, C. E., Con-
way, F., Ott, C. R., Fell, W. C., Mulato, R., and Blanken, C. L.,
“Piloted Simulation Evaluation of Tracking MTEs for the As-
sessment of High-Speed Handling Qualities,” Proceedings of
the 74th Annual Forum of the American Helicopter Society,
May 14-17, 2018.
8Hess, R. A., “Simplified approach for modelling pilot pursuit control behaviour in multi-loop flight control tasks,” Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, Vol. 220, (2), 2006, pp. 85–102.
doi: 10.1243/09544100JAERO33
9McRuer, D. T. and Jex, H. R., “A Review of Quasi-Linear Pilot Models,” IEEE Transactions on Human Factors in Electronics, Vol. 8, (3), 1967, pp. 231–249.
doi: 10.1109/THFE.1967.234304
10Bachelder, E. N. and Godfroy-Cooper, M., “Pilot Workload Estimation: Synthesis of Spectral Requirements Analysis and Weber’s Law,” AIAA 2019-1228, AIAA SciTech Forum, January 7-11, 2019.
doi: 10.2514/6.2019-1228
11Godfroy-Cooper, M., Miller, J. D., Bachelder, E. N., and
Wenzel, E. M., “Isomorphic Spatial Visual-Auditory Displays
for Operations in DVE for Obstacle Avoidance,” Proceedings
of the 44th Annual Forum of the European Rotorcraft Forum,
Delft, Netherlands, September 19-20, 2018.
doi: 10.4050/F-0075-2019-14563
12Tischler, M. B. and Remple, R. K., Aircraft and Rotorcraft System Identification: Engineering Methods with Flight-Test Examples, Second Edition, AIAA Education Series, AIAA, Reston, VA, 2012.
13McRuer, D., Graham, D., and Reisener, W., “Human Pilot Dynamics in Compensatory Systems,” Technical report, AFFDL-TR-65-15, Air Force Flight Dynamics Laboratory, Air Force Systems Command, Wright-Patterson Air Force Base, OH, 1965.
14Anon., “Aeronautical Design Standard Performance Specification, Handling Qualities Requirements for Military Rotorcraft,” Technical report, ADS-33E-PRF, U.S. Army Aviation and Missile Command, Aviation Engineering Directorate, Redstone Arsenal, AL, 2000.
13
... This approach provides discrete tactile signals to the operator via a finite set of actionable categories. In contrast, the continuous cueing strategy proposed by Morcos et al. employs a smooth, continuous mapping between tracking error and haptic feedback [34,35]. This approach provides operators with a seamless and proportional-derivative tactile response, allowing for finegrained control and continuous situational awareness. ...
... This approach provides operators with a seamless and proportional-derivative tactile response, allowing for finegrained control and continuous situational awareness. In a subsequent study, spatial audio cueing algorithms were also investigated for augmented pilot perception [35]. The findings of these two studies were consistent in that the use of both full-body haptics and spatial audio cueing led to improved/partially restored PVS performance when primary sensory cues like vision are impaired or denied. ...
... The current paper builds upon [34,35] to answer the following research questions: 1) whether pilot models traditionally used for primary sensory cues like vision and equilibrium can be extended to secondary sensory cues like haptics and audio and 2) if the use of traditionally secondary sensory cues can increase PVS performance in and out of DVE. The specific objective of this paper involves 1) extending the crossover model [38] to full-body haptic and spatial audio cueing, 2) identifying the human pilot and PVS dynamics for these secondary sensory channels, and 3) comparing the combined haptic and audio cueing modality to individual haptic and audio cueing modalities and assessing any potential multimodal performance enhancement. ...
Article
This paper illustrates the development, implementation, and testing of full-body haptic and spatial audio cueing algorithms for augmented pilot perception, where haptic cueing is based on localized electrical muscle stimulation. Cueing algorithms are developed for roll-axis compensatory tracking tasks where the pilot acts on the displayed error between a desired input and the comparable vehicle output motion to produce a control action. The error is displayed to the pilot using multiple cueing modalities: visual, haptic, audio, and combinations of these. For the visual and combined visual haptic/audio modalities, visual cues are also considered in degraded visual environments (DVEs). Full-body haptic and spatial audio algorithms that are based on a proportional-derivative compensation strategy on the tracking error are found to provide satisfactory pilot–vehicle system (PVS) performance for the task in consideration in absence of visual cueing and to improve PVS performance in DVE when used in combination with visual feedback. These results are consistent with previous studies on the use of secondary perceptual cues for augmentation of human perception. The combination of these results indicate that the use of secondary sensory cues such as full-body haptics and spatial audio to augment the pilot perception can lead to improved and partially restored PVS performance when primary sensory cues like vision are impaired or denied.
... This approach provides discrete tactile signals to the operator via a finite set of actionable categories. In contrast, the continuous cueing strategy proposed by Morcos et al. employs a smooth, continuous mapping between tracking error and haptic feedback [34,35]. This approach provides operators with a seamless and proportional-derivative tactile response, allowing for finegrained control and continuous situational awareness. ...
... This approach provides operators with a seamless and proportional-derivative tactile response, allowing for finegrained control and continuous situational awareness. In a subsequent study, spatial audio cueing algorithms were also investigated for augmented pilot perception [35]. The findings of these two studies were consistent in that the use of both full-body haptics and spatial audio cueing led to improved/partially restored PVS performance when primary sensory cues like vision are impaired or denied. ...
... The current paper builds upon [34,35] to answer the following research questions: 1) whether pilot models traditionally used for primary sensory cues like vision and equilibrium can be extended to secondary sensory cues like haptics and audio and 2) if the use of traditionally secondary sensory cues can increase PVS performance in and out of DVE. The specific objective of this paper involves 1) extending the crossover model [38] to full-body haptic and spatial audio cueing, 2) identifying the human pilot and PVS dynamics for these secondary sensory channels, and 3) comparing the combined haptic and audio cueing modality to individual haptic and audio cueing modalities and assessing any potential multimodal performance enhancement. ...
Article
This article illustrates the development, implementation, and testing of full-body haptic and spatial audio cueing algorithms for augmented pilot perception, where haptic cueing is based on localized electrical muscle stimulation (EMS). Cueing algorithms are developed for roll-axis compensatory tracking tasks where the pilot acts on the displayed error between a desired input and the comparable vehicle output motion to produce a control action. The error is displayed to the pilot using multiple cueing modalities: visual, haptic, audio, and combinations of these. For the visual and combined visual haptic/audio modalities, visual cues are also considered in degraded visual environments (DVE). Full-body haptic and spatial audio algorithms that are based on a proportional-derivative (PD) compensation strategy on the tracking error are found to provide satisfactory pilot vehicle system (PVS) performance for the task in consideration in absence of visual cueing, and to improve PVS performance in DVE when used in combination with visual feedback. These results are consistent with previous studies on the use of secondary perceptual cues for augmentation of human perception. The combination of these results indicate that the use of secondary sensory cues such as full-body haptics and spatial audio to augment the pilot perception can lead to improved/partially-restored PVS performance when primary sensory cues like vision are impaired or denied.
... In a subsequent study, spatial audio cueing algorithms were also investigated for augmented pilot perception (Ref. 30). The findings of these two studies were consistent in that the use of both full-body haptics and spatial audio cueing led to improved/partially-restored PVS performance when primary sensory cues like vision are impaired or denied. ...
... The current paper summarizes and builds upon (Refs. 29,30), which investigated haptic and spatial audio cueing separately, to investigate the use of combined haptic and spatial audio cueing for augmented pilot perception. Specific objective of this paper involve: (i) extending the crossover model (Ref. ...
... Haptic and audio cueing strategies are based on those of (Refs. 29,30). The general form of these strategies is a proportional-derivative (PD) compensation on the roll attitude tracking error: ...
Conference Paper
Full-text available
This paper illustrates the development, implementation, and testing of full-body haptic and spatial audio cueing algorithms for augmented pilot perception. Cueing algorithms are developed for roll-axis compensatory tracking tasks where the pilot acts on the displayed error between a desired input and the comparable vehicle output motion to produce a control action. The error is displayed to the pilot using multiple cueing modalities: visual, haptic, audio, and combinations of these. For the visual and combined visual haptic/audio modalities, visual cues are also considered in degraded visual environments (DVE). Full-body haptic and spatial audio algorithms that are based on a proportional-derivative (PD) compensation strategy on the tracking error are found to provide satisfactory pilot vehicle system (PVS) performance for the task in consideration in absence of visual cueing, and to improve PVS performance in DVE when used in combination with visual feedback. These results are consistent with previous studies on the use of secondary perceptual cues for augmentation of human perception. The combination of these results indicate that the use of secondary sensory cues such as full-body haptics and spatial audio to augment the pilot perception can lead to improved/partially-restored PVS performance when primary sensory cues like vision are impaired or denied.
... In parallel, spatial audio cueing algorithms have been explored, leading to consistent findings that suggest improved or partially restored Pilot-Vehicle System (PVS) performance when primary sensory cues, such as vision, are impaired or unavailable (Ref. 41). These advancements underscore the potential of multimodal cueing to enhance pilot perception and performance, particularly in challenging operational environments where conventional sensory modalities may be compromised. ...
Conference Paper
Full-text available
This paper investigates the use of multi-modal cueing through full-body haptic feedback to enhance pilot-vehicle system (PVS) performance, reduce mental workload (MWL), and increase situational awareness (SA) in both good and degraded visual environments (GVE/DVE). Piloted simulations were conducted using an H-60-like flight dynamics model in a virtual reality (VR) motion-based simulator, evaluating two ADS-33-like mission task elements (MTEs)-precision hover and slalom-under visual-only and combined visual and haptic feedback conditions in both GVE and DVE. The H-60 flight dynamics were augmented with a dynamic inversion (DI)-based stability augmentation system (SAS), implementing rate-command/attitude hold (RCAH) response type on the roll, pitch, and yaw axes and altitude hold response type on the vertical axis. The SAS was designed to achieve Level 1 handling qualities per ADS-33 standards. The full-body haptic cueing strategy leveraged an outer-loop DI control law, which provided vibrotactile feedback to cue desired roll, pitch, and yaw attitudes to the pilot. Roll cues were delivered via tactors mounted on the upper arms, pitch cues via tactors on the chest and back, and yaw cues via tactors on the calves. Eight test subjects participated in the piloted simulations, including three U.S. Navy test pilots and five subjects with different flying experiences. Results indicated that haptic feedback significantly improved hover performance, reducing MWL and enhancing SA, particularly in DVE. However, in the slalom task, predefined haptic guidance misaligned with pilots' individual control strategies, leading to performance degradation. This finding highlights the need for pilot-specific adaptive haptic feedback to mitigate inconsistencies in dynamic maneuvering tasks.
... [29][30][31][32][33], and pilot-vehicle system performance performance (Refs. [34][35][36][37][38][39][40]. ...
Conference Paper
Full-text available
This paper describes a combined visual and haptic localization experiment that addresses the area of multi-modal cueing. The aim of the present investigation is to characterize accuracy and precision of tactile cue-ing in the peri-personal space (PPS), the space around the body in which sensory information is perceived as meaningful (Ref. 1). Outcomes of the unimodal (visual and haptic) and multi-modal (combined visual-haptic) localizations are used to make predictions about the multimodal integrative phenomenon. In the localization experiment, participants are presented with visual, haptic, or multimodal target cues using the body-centered reference frame and are instructed to indicate the corresponding hypothetical target location in space using a mouse pointer in an open-loop feedback condition. Results of the study revealed that the visual and haptic spaces are characterized differently in terms of localization performance, providing important considerations for the transformation of each when combining them into a unified percept. The results reaffirmed many well known radial characteristics of the visual receptive field with respect to localization, and identified a nonlin-ear pattern of performance across the haptic receptive field that was largely influenced by the midline of the center of the torso and each side of the cutaneous region. Importantly, when combined, bimodal localization was at least as precise as the most reliable unimodal source over the entire tested region, and significantly more accurate for signals located in regions outside of the direct field of view. These results will help inform the development of future human-machine interfaces implementing haptic feedback mechanisms, specifically with respect to multisensory integration. 
In the context of pilot performance, haptic localization can have several benefits including enhanced situational awareness, improved spatial orientation, reduced cognitive load, enhanced training and skill development, and increased safety and error prevention. These benefits can be applied to future systems for safer aircraft handling by helping overcome visual illusions and discrepancies between visual and vestibular sensory channels, especially in degraded visual environments.
Article
Full-text available
This article describes a combined visual and haptic localization experiment that addresses the area of multimodal cueing. The aim of the present investigation was to characterize two-dimensional (2D) localization precision and accuracy of visual, haptic, and combined visual-tactile targets in the peri-personal space, the space around the body in which sensory information is perceived as ecologically relevant. Participants were presented with visual, haptic, or bimodal cues using the body-centered reference frame and were instructed to indicate the corresponding perceived target location in space using a mouse pointer in an open-loop feedback condition. Outcomes of the unimodal (visual and haptic) and bimodal (combined visual-haptic) localization performance were used to assess the nature of the multisensory combination, using a Bayesian integration model. Results of the study revealed that the visual and haptic perceptive fields are characterized differently in terms of localization performance, providing important considerations for the transformation of each sensory modality when combining cues into a unified percept. The results reaffirmed many well known radial characteristics of vision with respect to localization, and identified a nonlinear pattern of haptic localization performance that was largely influenced by the midline of the center of the torso and each side of the cutaneous region. Overall, the lack of improvement in precision for bimodal cueing relative to the best unimodal cueing modality, vision, is in favor of sensory combination rather than optimal integration predicted by the Maximum Likelihood Estimation (MLE) model. Conversely, the hypothesis that accuracy in localizing the bimodal visual-haptic targets would represent a compromise between visual and haptic performance in favor of the most precise modality was rejected. Instead, the bimodal accuracy was found to be equivalent to or to exceed that of the best unimodal condition, vision. 
The results provide some insight into the structure of the underlying sensorimotor processes employed by the brain and confirm the usefulness of capitalizing on naturally occurring differences between vision and haptic to better understand their interaction and their contribution to multimodal perception These results will help inform the development of future human-machine interfaces implementing haptic feedback mechanisms In the context of pilot performance, haptic localization can have several benefits including enhanced situational awareness, improved spatial orientation, reduced workload, thereby contributing to safer operations. These benefits can be applied to future systems for aircraft handling by helping overcome visual illusions and discrepancies between visual and vestibular sensory channels, especially in degraded visual environments.
Conference Paper
Full-text available
This paper demonstrates the development, implementation, and testing of full-body haptic cueing algorithms for augmented pilot perception. Full-body haptics is in the form of localized electrical muscle stimulation (EMS) is achieved via a commercial, off-the-shelf product called TESLASUIT. Cueing algorithms are developed for roll-axis compensatory tracking tasks where the pilot acts on the displayed error between a desired input and the comparable vehicle output motion to produce a control action. The error is displayed to the pilot using three different cueing modalities: visual, haptic, and combined visual and haptic. For the visual and combined visual and haptic modalities, visual cues are also considered in degraded visual environments (DVE). Full-body haptic cueing algorithms that are based on a proportional-derivative (PD) compensation strategy on the tracking error are found to provide satisfactory pilot vehicle system (PVS) performance for the task in consideration when using haptic feedback only (no visual cues) and to improve PVS performance in DVE when using combined visual and haptic feedback. These results indicate that the use of secondary sensory cues such as full-body haptics to augment the pilot perception can lead to improved/partially-restored PVS performance when primary sensory cues like vision are impaired or denied.
Conference Paper
Full-text available
Various subjective scales (Bedford, NASA TLX) have been employed for rating pilot workload. Numerous measures (control activity, heart rate) have also been proposed as relative indicators of pilot workload. Borrowing concepts from classical and modern psychophysical research, this paper attempts to treat workload as a “sensation response” arising from an effective stimulus. Bedford workload ratings from two different compensatory tracking tasks appear to conform to Weber’s Law (the ratio of a perceived change in stimulus to the magnitude of the original stimulus remains constant ). Two candidate, and effectively equivalent, sources are implicated as the effective stimulus: 1) the product of the standard deviations (SD) of display error rate and pilot control rate; 2) the power spectral density of the error rate, shaped by pilot compensation. Factors that should be reflected in a workload estimator (i.e., the effect of vehicle dynamics on pilot compensation) are examined from manual control and spectral analysis perspectives, corroborating the stimulus sources. By applying a psychophysical treatment, it is shown how workload response for a given task can be characterized in concise terms such as sensitivity to stimulus (Weber fraction), , just-noticeable difference (JND), and dynamic range. A general relationship between stimulus and the JND is proposed, consolidating the opposing JND assumptions of Fechner’s law (constant JND), and Stevens law (JND is proportional to the stimulus magnitude). Excellent matching is obtained between actual and estimated Bedford ratings for the two tasks using the stimulus in a general power law function. This function follows from the proposed general JND relationship.
Conference Paper
Full-text available
Helicopter military missions such as combat search and rescue, medical evacuation and landing on unprepared sites can involve operating in hostile, low-altitude, and degraded visual environments (DVE). These conditions may significantly reduce the pilot's capability to use the natural out-of-the-window (OTW) perceptual cues, increase workload and increase the risk of collision with terrain and natural or man-made obstacles. In modern helicopter cockpits, synthetic vision systems (SVSs) can employ conventional non-conformal two-dimensional (2D), egocentric three-dimensional (3D) conformal symbology (CS) and laser detection and ranging (LADAR)/radio detection and ranging (RADAR)/forward-looking infrared (FLIR) imagery to support guidance and control, especially during operations in DVE. Although 3D CS can decrease pilot workload, it can also produce attentional tunneling (cognitive capture) and may not provide maximally effective depiction of the environment around the helicopter. In this context, it is crucial to develop integrated multimodal interfaces that extend the current operational envelope while enhancing flight safety. Several flight simulator studies have investigated the use of spatial auditory displays (SADs) in combination with spatially and temporally congruent visual displays in tasks as diverse as collision avoidance, intruding aircraft detection, or system malfunction warning. In this paper we propose a novel approach to spatial sonification design based on the premises that perception-based synthetic cueing can increase situation awareness (SA), improve overall performance, and allow mental workload to be kept at operationally effective levels. This paper discusses the development, implementation, and evaluation of a sensor-based augmented-reality spatial auditory display (ARSAD) and its visual analog, an integrated collision avoidance display (ICAD), for all phases of flight.
Five UH-60M Army pilots participated in a low-level flight simulation evaluating the visual and auditory displays, alone or in combination, in low-visibility and zero-visibility environments. The results are discussed in the context of pilot cueing synergies for DVE.
Conference Paper
Updates to the military rotorcraft handling qualities specification are currently being considered that address the high-speed flight regime envisioned for the Future Vertical Lift platform of the US Army. A team that features industry and academia has developed and evaluated a set of Mission Task Elements (MTEs) that are defined to address VTOL high-speed handling qualities. Following the mission-oriented approach upon which ADS-33E-PRF is based, the MTEs were designed to meet different levels of precision and aggressiveness. Tracking MTEs based on a sum-of-sinewaves (SOS) command signal were defined for precision, aggressive and precision, non-aggressive applications. The command signals are derived from fixed-wing analogs that have long been used to evaluate aircraft handling qualities. While the precision, aggressive SOS tracking tasks, the primary subject of this paper, are surrogates for air-to-air tracking and nap-of-the-earth tracking, the known forcing function allows for complete open- and closed-loop pilot-vehicle system identification. The MTE objectives, descriptions, and performance criteria were assessed and refined via several checkout piloted simulation sessions. Formal evaluations were then conducted by Army test pilots at four simulator facilities, each featuring a unique high-speed platform including a generic winged compound helicopter, two tiltrotor configurations, and a compound helicopter with coaxial rotors. To aid in the MTE evaluation process, baseline VTOL configurations were varied to achieve different handling qualities levels. Quantitative measures based on task performance and qualitative measures based on pilot ratings, comments and debrief questionnaires were used to assess MTE effectiveness. The piloted simulation results demonstrated that the sum-of-sines tracking MTEs provided an effective means to discern precision, aggressive handling qualities in high-speed flight.
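A sum-of-sines forcing function of the kind used in these tracking MTEs can be sketched as follows; the frequencies, amplitudes, and phases here are placeholders, since each MTE definition specifies its own component set.

```python
import math

# Sketch of a sum-of-sines (SOS) command signal. The three components
# below are hypothetical; actual MTE definitions prescribe specific
# non-harmonically-related frequencies, amplitudes, and phase offsets.
def sos_command(t, freqs_hz=(0.1, 0.35, 0.8), amps=(1.0, 0.5, 0.25),
                phases=(0.0, 1.0, 2.0)):
    """Evaluate a sum-of-sinewaves forcing function at time t (seconds)."""
    return sum(a * math.sin(2.0 * math.pi * f * t + p)
               for f, a, p in zip(freqs_hz, amps, phases))
```

Because the forcing function is deterministic and known, the pilot-vehicle describing functions can be identified at exactly the input frequencies, which is what enables the open- and closed-loop system identification mentioned above.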
Article
In this paper, we describe the design and evaluation of a head-mounted tactile prototype and multi-parameter coding scheme to support situational awareness among users. The head has been selected as the location for the interaction because it has been relatively under-researched compared to the torso or hands, and offers potential for hands-free attention direction and integration with new head and eyewear technology. Two studies have been conducted. The first examined the user's ability to discern three-parameter tactile signals presented at sites on the head. Findings highlighted that while multi-parameter cues could be interpreted with low error, challenges were faced when interpreting specific combinations of waveform and interval type, and when performing identification of interval pattern and stimulation location while visually distracted. A second study investigated how use of the three-parameter tactile coding scheme impacted participants’ situational awareness under several exertion conditions. Significant interaction was found between the exertion conditions and subjective cognitive workload. The relationships among the situational awareness phases of participants’ SAGAT assessment scores were consistent across conditions, with the perception and prediction phases outpacing comprehension. This suggests, pending further study of the suitability of situational awareness evaluation methods for tactile perception, that quickly trained participants may struggle to understand multi-parameter coding intended to convey changing events. Interpretations of coding schemes were found to vary, highlighting the importance of carefully selecting and mapping signals for presentation. Insights from our study can support interface designers aiming to heighten levels of spatial and situational awareness among their users through use of the tactile channel.
Article
The National Research Council of Canada and Defence Research and Development Canada flight-tested the U.S. Naval Aerospace Medical Research Laboratory's Tactile Situational Awareness System (TSAS) in a dynamic task. The TSAS vest uses small pneumatic actuators or "tactors" to transmit information to the pilot. Eleven pilots used the TSAS to cue horizontal-axis performance in a land-based deck landing task flown in the NRC Bell 205 helicopter. Pilots tracked a vertically moving target with and without the TSAS in good and degraded visual conditions. The TSAS effectively cued longitudinal fore/aft drifts and reduced RMS error. It had less effect on lateral positioning error, possibly due to the presence of strong visual cues. Pilot situational awareness during degraded visual environment conditions in high sea states was significantly improved by the TSAS, as measured by the China Lake situational awareness rating scale. No change in workload, as measured by the Modified Cooper-Harper Workload Scale, was attributable to TSAS use. The improvements in situational awareness and the reduction in longitudinal error suggest that the TSAS would be beneficial for helicopter ship deck landing.
Article
Aviation in itself is not inherently dangerous. But to an even greater degree than the sea, it is terribly unforgiving of any carelessness, incapacity or neglect. Spatial disorientation and the subsequent loss of situation awareness account for a significant percentage of fatal mishaps in aviation. In our normal earth-bound environment, spatial orientation is continuously maintained by correct information from three independent, redundant, and concordant sensory systems: vision, the inner ear or vestibular system, and the skin, joint, and muscle receptors or somatosensory system. However, in the aviation environment, the vestibular and somatosensory sensations currently do not provide accurate orientation information. The only reliable source of orientation information is vision. For this reason, spatial disorientation mishaps occur when information from the visual system is compromised (e.g., temporary distraction, increased workload, transitions between visual meteorological conditions and instrument meteorological conditions, reduced visibility, or boredom), and the pilot subsequently becomes disorientated. The Tactile Situation Awareness System (TSAS) is an advanced flight instrument that uses the sensory channel of touch to provide situation awareness information to pilots. The TSAS system accepts data from various aircraft sensors and presents this information via tactile stimulators or "tactors" integrated into flight garments. TSAS has the capability of presenting a variety of flight parameter information, including attitude, altitude, velocity, navigation, acceleration, threat location, and/or target location.
Article
During the past several years, an analytical theory of manual control of vehicles has been in development and has emerged as a useful engineering tool for the explanation of past test results and prediction of new phenomena. An essential feature of this theory is the use of quasi-linear analytical models for the human pilot wherein the models' form and parameters are adapted to the task variables involved in the particular pilot-vehicle situation. This paper summarizes the current state of these models, and includes background on the nature of the models; experimental data and equations of describing function models for compensatory, pursuit, periodic, and multiloop control situations; the effects of task variables on some of the model parameters; some data on “remnant”; and the relationship of handling qualities ratings to the model parameters. Copyright © 1967 by The Institute of Electrical and Electronics Engineers, Inc.
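The quasi-linear pilot theory summarized above is commonly associated with the crossover model, in which the combined pilot-vehicle open-loop transfer function near the crossover frequency behaves approximately as (w_c / jw) * exp(-j*w*tau). A minimal sketch, assuming illustrative values for the crossover frequency and effective time delay:

```python
import cmath

# Sketch of the crossover-model open-loop transfer function:
#   Y_p(jw) * Y_c(jw) ~ (w_c / jw) * exp(-j * w * tau)
# The crossover frequency w_c (rad/s) and effective time delay tau (s)
# below are illustrative values, not data from the cited work.
def crossover_open_loop(w, w_c=3.0, tau=0.3):
    """Evaluate the approximate open-loop frequency response at w (rad/s)."""
    jw = 1j * w
    return (w_c / jw) * cmath.exp(-jw * tau)
```

By construction, the magnitude is unity at w = w_c, since |exp(-j*w*tau)| = 1; the time delay contributes only to phase lag, which is what drives the relationship between delay and achievable crossover frequency.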
Article
A control-theoretic procedure for modelling human pilot pursuit control behaviour is presented. The procedure allows for the development of human pilot behavioural models in multi-loop flight control tasks in a simplified framework emphasizing frequency-domain techniques. Beginning with the primary control loops, each control loop is closed using a combination of output-rate feedback and output-error feedback. It is demonstrated that this approach can accommodate any vehicle dynamics that can be stabilized by a human pilot. In addition, the modelling approach identifies vehicle dynamics that approach the limits of human pilot controllability. The well-documented increase in pilot effective time delays that has been shown to accompany vehicle dynamics requiring lead compensation is also replicated by this modelling approach. A method for predicting handling-qualities levels that would be assigned to a particular vehicle and task is presented. A visual cue model is included, which can approximate the effects of degraded visual cues. It is shown that this model can be used to reproduce the three most important measurable effects of visual cue quality upon human operator dynamics, namely, an increase in ‘effective’ pilot time delay, a decrease in crossover frequency, and an increase in error-injected remnant. The ability of the modelling procedure to accommodate different levels of pilot aggressiveness in completing manoeuvres is demonstrated. Finally, an application to a multi-axis rotorcraft flight control problem is presented.
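The loop-closure strategy described above, combining output-rate and output-error feedback, can be sketched as follows; the gains and signal names are hypothetical, for illustration only.

```python
# Sketch of a single inner-loop closure of the type described above:
# the pilot control input is formed from output-error feedback plus
# rate damping on the controlled output. Gains k_e and k_r are
# hypothetical placeholders, not identified model parameters.
def pilot_command(target, output, output_rate, k_e=2.0, k_r=0.8):
    """Combine output-error and output-rate feedback into a control input."""
    error = target - output
    return k_e * error - k_r * output_rate
```

In a multi-loop application, the output of an outer loop (e.g., position) would supply the target for an inner loop (e.g., attitude), with each loop closed in turn starting from the primary control loops.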
Article
The description of human pilot dynamic characteristics in mathematical terms compatible with flight control engineering practice is an essential prerequisite to the analytical treatment of manual vehicular control systems. The enormously adaptive nature of the human pilot makes such a description exceedingly difficult to obtain, although a quasi-linear model with parameters which vary with the system task variables has been successfully applied to many flight situations. The primary purposes of the experimental series reported are the validation of the existing quasi-linear pilot model, and the extension of this model in accuracy and detail.