49th International Conference on Environmental Systems ICES-2019-108
7-11 July 2019, Boston, Massachusetts
Copyright © 2019 University of Colorado Boulder
Development of an Augmented Reality System for Human Space Operations
Carlos Pinedo1, Jordan Dixon1, Christine Chang2, Donna Auguste3, Mckenna Brewer4, Cassidy Jensen4, Chris Hill5, Devin Desilva5, Amanda N. Jones5, Allison P. Anderson6, James S. Voss7
University of Colorado Boulder, Boulder, CO, 80303
In this work we develop an augmented reality heads-up display for astronaut use during human space operations, taking advantage of recent advances in commercial heads-up display technology to simulate information delivery to the astronaut. The primary design
objectives were to increase situation awareness (SA), provide timely information to the user
and supporting personnel, and facilitate communication among all system elements (user,
ground control, and intravehicular astronauts). The design includes a visual interface that
provides on-demand information in both egocentric (fixed to the user) and exocentric (fixed
to the environment) perspectives. The information includes spacesuit informatics, checklist
procedures, communication information, and basic navigation. The design also includes an
audio interface that receives verbal commands from the user and provides auditory feedback
and information. A novel method of interacting with the augmented reality system was
explored: electromyography. Electromyography receives electrical signal output from muscle
groups on the user’s body and is able to map those as specific inputs to the augmented reality
system. In this way, the user’s hands and voice are free to complete other tasks as necessary,
while still maintaining a mode of communication with and control of the device. To aid in
communication among all elements, remote display control via telestration (the ability of a
remote user, such as ground control or another astronaut, to draw over a still or video image)
was included. This provided a means of visual communication to facilitate task completion,
aid in emergency situations, and highlight any anomalies, thereby increasing user situation awareness and decreasing workload. Additional capability was provided for object-tool
recognition and basic navigation assistance. Preliminary testing highlighted the potential
benefits of the following critical design elements: minimalistic visual display, redundancy of
interaction through modalities, and continuity between internal and external display elements.
I. Introduction
A spacesuit presents many challenges to an individual working within it: reduced mobility due to the mechanical design and the effort of moving within a pressurized environment, respiratory stress from extreme pressure changes and environmental compositions, overexertion due to high humidity, high actuation forces, and fluctuating thermal loads, and exposure to harmful radiation.1 However, these challenges relate only to maintaining proper physiological functioning. The jobs
performed while wearing these suits are some of the most resource-intensive activities with respect to physiology and
psychology. Thus, the ultimate design of these suits cannot purely be based on the maintenance of physiological
function, but rather that of another critical design driver: the maintenance – and enhancement – of performance.
Performance is often thought of from merely a physical standpoint. However, in the high-risk and high-stress
environments that astronauts are exposed to during extravehicular activity (EVA), cognitive performance and situation
awareness (SA) become increasingly important factors in maintaining a safe working environment. Currently, the suit used for International Space Station (ISS) operations – the Extravehicular
1 PhD Student, Smead Aerospace Engineering Sciences, ECOT-634, 429 UCB, Boulder, CO 80309
2 PhD Student, Computer Science, ECOT 717, 430 UCB, Boulder, CO 80309
3 PhD Student, Technology, Arts and Media Studies, ATLAS Building, 320 UCB, Boulder, CO 80309
4 Undergraduate Student, Technology, Arts and Media Studies, ATLAS Building, 320 UCB, Boulder, CO 80309
5 Undergraduate Student, Computer Science, ECOT 717, 430 UCB, Boulder, CO 80309
6 Assistant Professor, Smead Aerospace Engineering Sciences, ECOT-634, 429 UCB, Boulder, CO 80309
7 Scholar-in-Residence, Smead Aerospace Engineering Sciences, ECOT-634, 429 UCB, Boulder, CO 80309
Mobility Unit (EMU) – like the suits used in the Apollo missions, provides the astronaut access to all of their necessary informatics through the Display and Control Module (DCM) mounted on the hard upper torso; this requires astronauts to cycle through parameters on the
display and use a wrist-mounted mirror to read the control positions (e.g. temperature control, comm volume, suit
oxygen actuator control, etc.).2 In a nominal scenario, these suit parameters and control settings will be checked only
rarely to ensure expected values. However, when mission procedures go off-nominal, astronauts’ awareness of the
state of the suit is critical.
U.S. ISS EVA 23, performed in July of 2013, is an instructive case study of both the importance and the shortcomings of the current methods used to monitor EMU system performance. On this mission, European Space Agency (ESA)
astronaut Luca Parmitano experienced a High Visibility Close Call event where water from his cooling system leaked
into the interior of the suit and collected around his head.3 Because of the relatively high effort required to monitor
the suit informatics closely, the team did not catch a coolant temperature drop which could have helped isolate the
problem before the risks were severe. This example demonstrates the importance of having SA of suit informatics, as
well as redundant forms of receiving that information. With the informatics provided through the heads-up display
(HUD) proposed below, both EVA and intravehicular activity (IVA) crewmembers may have had enhanced SA and
detected the problem before it became a larger safety issue. Furthermore, with the introduction of an automated
Caution and Warning System (CWS) provided through a HUD, it is unlikely the situation would reach such a critical
state even if the coolant temperature drops initially went unnoticed.
During Space Shuttle mission STS-49, two unsuccessful EVAs were conducted before a third, successful attempt to capture the Intelsat satellite stranded in an unusable orbit. The tools and procedures used on that third EVA varied
drastically from the original plan, and this was also the first time that three crew members were simultaneously on an
EVA. From the crew report following the mission, it was noted that “the successful completion of the Intelsat re-boost
mission would not have been possible without the direct support of the ground team who worked together with the
crew to overcome the difficulties that were encountered.”4 For future missions to more distant destinations, this kind of reliance on the ground team will not be possible, and more autonomy must be provided to allow the EVA and IVA astronauts
to collaborate and communicate. In the design described below, autonomy is increased through customized digital
task procedures, object recognition, multiple navigation aids, and IVA telestrator assistance. In this way the entire
team can work independently or together effectively regardless of relative location or previous training.
The objective of this research is to investigate heads-up display (HUD) information delivery grounded in best principles from the literature (Slutsky5, Marquez20, Sim17, Pearl8, Hegarty9). Further, we aim to use traditional HUD user interfaces (UIs) and compare them to novel UIs using emerging technologies, in order to investigate which HUD/augmented reality (AR) elements are most critical for use in EVA through human factors evaluation. What
follows are detailed descriptions of the elements of our designs for an AR system for use in space operations, and a
proposed methodology for testing the design in a simulated space environment across three scenarios: IVA, EVA, and
emergency procedures. We hypothesize that providing astronauts with detailed, real-time, user-friendly informatics
about their suit, environment and assigned tasks will improve performance and SA during complex high-stress human
space operations.
II. Design Description
The designs are implemented using a Microsoft HoloLens
and include a visual interface that provides on-demand
information in multiple reference frames: egocentric (fixed to
the user), exocentric (fixed to the environment), and an augmented reference frame (fixed to a physical peripheral device). The Microsoft HoloLens is a wearable mixed reality headset with transparent, goggle-like lenses onto which information is projected (Fig. 1). The
information provided to the user includes spacesuit informatics,
checklist procedures, communication information, and caution
and warning alerts. The design also includes an audio interface
that receives verbal commands from the user and provides
auditory feedback and information. Novel additional features
include the use of EMG to interface with the display, telestration
for intra-astronaut visual communication, and basic navigation.
A detailed description of each of these design elements is included below.
Figure 1. Microsoft HoloLens
A. Visual Interface
One of the most important design aspects to consider is the visual interface projected by the Microsoft HoloLens
(or other AR technology) that the astronaut will see and interact with during space operations. The visual interface
can be subdivided into egocentric (virtual objects that are fixed to the user’s perspective) and exocentric (virtual
objects that are fixed or anchored in global space) perspectives. The egocentric objects move with the user, and consist
of data that should be accessible to the astronaut at all times, such as navigation or location information,
communication status, and spacesuit telemetry. The data structure is hierarchical and prioritizes and displays the most
critical information (as identified through a functional task analysis) to the astronaut; other information that is not
constantly visible is available with a voice command, gesture, or an electromyography (EMG) prompt. First introduced in the team’s baseline design (Fig. 2, referred to here as AR1) was an optional egocentric, condensed task list. The condensed task list uses acronyms and abbreviated phrasing, consistent with the abbreviated procedure lists currently in use, to achieve a low-clutter, HUD-fixed procedure. A pilot study indicated that the condensed task list was preferable during simple portions of a procedure, or for a trained crewmember, since it required minimal additional effort and no additional detail to complete the task at hand.
Figure 2. Egocentric HUD. Time (top left); suit informatic menu (left center); shortened task list (bottom center);
user defined scratchpad (right center); a caution and warning alert (red text); keyword reference list (upper right)
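To make the prioritization scheme concrete, the following sketch (Python) shows one way a hierarchical informatics structure could decide which items stay pinned to the egocentric HUD and which remain available only on request. The channel names, priority levels, and display budget are illustrative assumptions, not flight values.

from dataclasses import dataclass, field

@dataclass
class Informatic:
    name: str          # e.g. "suit_pressure" (hypothetical channel name)
    value: float
    units: str
    priority: int      # 1 = most critical (always shown), larger = on request

@dataclass
class EgocentricHud:
    max_visible: int = 4                      # limited by the HoloLens field of view
    items: list = field(default_factory=list)

    def visible_items(self):
        """Return the most critical items, up to the display budget."""
        ranked = sorted(self.items, key=lambda i: i.priority)
        return ranked[: self.max_visible]

    def on_request(self, name):
        """Items outside the budget are fetched by voice, gesture, or EMG prompt."""
        return next((i for i in self.items if i.name == name), None)

hud = EgocentricHud(items=[
    Informatic("suit_pressure", 4.3, "psi", 1),
    Informatic("o2_remaining", 78.0, "%", 1),
    Informatic("coolant_temp", 7.5, "degC", 2),
    Informatic("battery", 91.0, "%", 3),
    Informatic("comm_volume", 6.0, "level", 5),
])
print([i.name for i in hud.visible_items()])
print(hud.on_request("comm_volume"))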
The exocentric aspects of the design are anchored in space instead of fixed to the user’s frame of reference. EVA
crew members currently use a wrist-worn cuff checklist for abbreviated procedures for tasks during their spacewalks.
The cuff checklist can be difficult to refer to while simultaneously completing a complex task. Based on input from
one of the team’s advisors, former astronaut Jim Voss, the current cuff checklist is only suitable for short procedures;
the best way to display more detailed or lengthy EVA task procedures would be to pin them to an area next to the task
site. The baseline design leveraged a peripheral physical clipboard to achieve this pinning feature which is discussed
later in this section. In addition to this capability, the astronaut can manipulate the pinned virtual object (e.g. location,
information presented) through HoloLens gaze, voice, or EMG command. This method avoids the EVA crewmember
needing to carry a physical clipboard (or cuff checklist), which is beneficial for tool- or mass-constrained missions
(e.g. planetary surface EVA).
Figure 3. Physical peripheral HUD. Procedure "clipboard" showing rich information not presented on the egocentric task lists.
In addition to the virtual checklist display described above, an external “clipboard” is included as a back-up that
displays space operation task instructions in a more natural way. The clipboard consists of a physical object with an
image (e.g., a QR code) that the HoloLens recognizes as a cue to display the current procedures virtually. The physical clipboard can be tethered to the astronaut’s Mini Workstation (MWS), making it easily viewable in the work zone and accessible as desired (Fig. 3). The use of a physical peripheral device was implemented in the
baseline design. Users during the pilot study indicated a preference for this implementation since it bridged the gap
between reality and the virtual environment.
Another exocentric feature that has already been proven in pilot testing is the use of QR codes to identify specific
tools or locations (Fig. 4). These cues assist an astronaut who needs help identifying a specific tool or work location.
The exact name of the tool, for example, would appear upon scanning with the HoloLens, rather than the astronaut having to search through a toolbox, examining tools individually. This feature serves as a precursor for more advanced object
recognition methods being pursued, described in a later subsection.
Figure 4. Object Recognition. Visual identification of tools, parts and internal hardware components using object
recognition of fiducial markers.
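The design itself runs Vuforia on the HoloLens; purely as an illustration of the lookup logic, the sketch below uses OpenCV's QR detector in Python against a hypothetical tool database. The tool identifiers, names, and the tool_db mapping are assumptions, not flight data.

import cv2

# Hypothetical mapping from QR payload to tool metadata.
tool_db = {
    "TOOL-017": {"name": "Pistol Grip Tool", "stowage": "MWS hook 2"},
    "TOOL-031": {"name": "Torque Multiplier", "stowage": "Toolbox A"},
}

detector = cv2.QRCodeDetector()

def identify_tool(frame):
    """Decode a QR code in the camera frame and look up the tool label."""
    payload, points, _ = detector.detectAndDecode(frame)
    if not payload:
        return None                       # nothing recognized in this frame
    return tool_db.get(payload, {"name": "Unknown tool", "stowage": "n/a"})

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)             # stand-in for the HoloLens camera feed
    ok, frame = cap.read()
    if ok:
        print(identify_tool(frame))
    cap.release()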
The baseline design explicitly presented informatics textually instead of graphically. Graphs can present data in
a more concise and less complex format.5 This current design uses various methods of displaying the astronaut’s
biometrics or spacesuit telemetry to make the information easier to read and interpret, and allow for more data to be
displayed in the restricted field of view of the HoloLens, effectively increasing SA. The graphical format also makes
certain types of data easier to monitor temporally; for example, if an astronaut’s coolant temperature is dropping off-nominally, as in U.S. ISS EVA 23, they are able to see that trend on the graph as opposed to a changing textual
number. Furthermore, cautions and warnings are displayed egocentrically as text, depending on the particular event or issue.
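As a hedged illustration of how such a temporal view could feed an automated caution, the sketch below fits a slope to a rolling window of coolant temperature samples and raises a caution when the downward trend exceeds a rate limit. The window length, sample period, and rate limit are arbitrary assumptions, not EMU values.

from collections import deque
import numpy as np

WINDOW_S = 120            # rolling window length, seconds (assumed)
SAMPLE_DT = 5             # telemetry period, seconds (assumed)
RATE_LIMIT = -0.02        # degC per second considered off-nominal (assumed)

samples = deque(maxlen=WINDOW_S // SAMPLE_DT)

def update(coolant_temp_c):
    """Append a sample and return a caution string if the trend is off-nominal."""
    samples.append(coolant_temp_c)
    if len(samples) < samples.maxlen:
        return None                                   # not enough history yet
    t = np.arange(len(samples)) * SAMPLE_DT
    slope, _ = np.polyfit(t, np.array(samples), 1)    # degC per second
    if slope < RATE_LIMIT:
        return f"CAUTION: coolant temp trending down at {slope * 60:.2f} degC/min"
    return None

# Simulated off-nominal drop of 0.2 degC per sample.
for k in range(40):
    alert = update(8.0 - 0.2 * k)
    if alert:
        print(alert)
        break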
One unique challenge in space operations (specifically in-orbit operations) is the shifting light environment that
astronauts experience in the many sunsets and sunrises that occur during a Low Earth Orbit EVA. In bright conditions,
the glare from the sun can impact an augmented reality display’s clarity and legibility and induce digital eye strain.
According to Dr. Rosenfield of the SUNY College of Optometry,6 glare causes significant discomfort for users.
Alternatively, the brightness of the AR display in dark situations can also cause digital eye strain. The current design
(referred to here as AR2) uses the ambient light sensor to detect variations in lighting and adjusts the screen
automatically for the user. Voice commands and gestures for these adjustments can also be used to adapt to the unique
preferences of each individual, which is discussed further in later subsections.
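A minimal sketch of the automatic adjustment logic, assuming an ambient light reading in lux is available from the headset; the lux breakpoints and smoothing factor below are illustrative assumptions rather than measured values.

def target_brightness(lux):
    """Map ambient illuminance (lux) to a display brightness fraction (0-1)."""
    if lux < 10:        # darkness: dim the display to limit eye strain
        return 0.25
    if lux < 1000:      # indoor or shadowed worksite
        return 0.25 + 0.55 * (lux - 10) / 990
    return 1.0          # direct sunlight: maximum brightness to fight glare

class BrightnessController:
    def __init__(self, alpha=0.2):
        self.alpha = alpha          # exponential smoothing to avoid flicker
        self.level = 0.5            # current commanded brightness

    def step(self, lux):
        self.level += self.alpha * (target_brightness(lux) - self.level)
        return self.level

ctrl = BrightnessController()
for lux in [5, 5, 800, 20000, 20000]:   # simulated orbital night/day transition
    print(round(ctrl.step(lux), 2))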
B. Auditory Interface
Audio interfaces can be an intuitive and efficient mode of information exchange,7 while reducing visual clutter
and streamlining cognitive processing. During a complex task, the astronaut can focus on the task at hand while using
verbal commands to manipulate the visual interfaces, and receive audio feedback or cues from the system.
Crewmembers and ground control users may request audio readouts of data such as bioinformatics or spacesuit
telemetry. If a user prefers to forgo or augment visual procedure instruction, the user may utilize the integrated audio
cues for that procedure, navigating through the instruction and requesting additional information with verbal
commands.
The auditory interface’s keyword-based command system (Fig. 5) is based on the principles of voice command
design.8 The system is capable of simultaneously supporting the EVA crewmember, the IVA crewmember, and ground
control, and can enhance the IVA-to-EVA (I2EVA) interactions in two primary ways. First, the AR system can
supplant the book-keeping / guidance role of the IVA crewmember during an EVA procedure. It can track the EVA
crewmember’s status in completing that task, and provide incremental verbal instruction for the task-step at hand.
Leveraging the auditory interface as a mediator, the informatics displays may have low opacity and intensity defaults
because the required information can be highlighted only when needed. This would effectively increase SA by
decreasing visual clutter and augmenting cognition.9 Second, the AR system can also respond to IVA crewmember or
ground control requests of EVA crewmember biometrics, suit status, etc. For communications requiring the EVA
crewmember to respond verbally to the IVA – such as responding to a query about suit tear or affective responses –
the system can facilitate the EVA crewmember’s response by rapidly accessing relevant system information and
presenting it to the EVA crewmember. It is anticipated that automatically displaying informatics based on IVA requests will lead to quicker response times and lower response error.
Figure 5. Reference for auditory commands. Callable master list detailing all available auditory commands for
manipulating various aspects of the system (e.g., procedures, layouts, submenus, etc.).
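The keyword grammar summarized in Fig. 5 could be dispatched with a simple table-driven handler. The sketch below (Python) illustrates the idea; the keywords and handler behaviors are hypothetical placeholders, not the grammar actually flown in the design.

# Hypothetical keyword set; the real grammar would follow Fig. 5.
def show_suit_stats():    return "displaying suit informatics"
def next_step():          return "advancing procedure to next step"
def read_step_aloud():    return "reading current step aloud"
def hide_display():       return "hiding egocentric HUD"

COMMANDS = {
    "show stats":   show_suit_stats,
    "next step":    next_step,
    "read step":    read_step_aloud,
    "display off":  hide_display,
}

def dispatch(utterance):
    """Match a recognized utterance against the keyword table."""
    phrase = utterance.lower().strip()
    for keyword, handler in COMMANDS.items():
        if keyword in phrase:
            return handler()
    return "command not recognized; say 'keywords' for the reference list"

print(dispatch("HUD, next step please"))
print(dispatch("telemetry"))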
C. EMG Interface
One of the main non-verbal user interaction modalities for the HoloLens is hand tracking and hand gestures.
However, dexterity constraints and limitations imposed by the EVA – such as hands being preoccupied with current
tasks, lapses of gesture recollection, and hand fatigue in the EMU – all pose significant difficulties for gesture-based interaction with the HoloLens from inside the EVA suit.10 While traditional methods of tracking hand gestures may not be possible, our design investigates EMG as an alternative to traditional hand gestures. EMG is
a biomedical technique that has been used to measure the electrical potential generated by muscles during contraction
or movement. The signal amplitude depends on the activity of the muscle group being measured, meaning the amount of force exerted by the muscle can be mapped to different detection thresholds.
There are two main techniques used to gather EMG signal data: invasive and non-invasive sensing. The peripheral
device proposed will use non-invasive sensing, gathering signal data from electrodes placed on the skin or sewn into
articles of clothing, located on muscle groups not heavily used by the astronaut during an EVA. This technique also allows the electrode placement to be customized for the wearer’s comfort.
While EMG is not yet a well-established interaction method, EMG interfaces have been shown in research studies
to yield a significantly lower tracking error when compared to the more established force and joystick interfaces.11
Due to the issue of motion artifacts when the electrode comes into contact with the EVA suit, and other obstacles not yet discovered, the development of this EMG device will serve as a trial testbed in which the team will attempt to overcome the challenges to actual operational use by implementing both well-researched and novel approaches.
The EMG signal processing will be an essential part of creating a reliable peripheral device. Accurate and precise
detection of true-positive gesture triggers is an important issue in the design and implementation of the EMG device,
especially due to motion artifacts affecting the reliability of the device. For the signal analysis and processing, the
improved double threshold method proposed by Xu, Lanyi and Adler12 will be implemented, this method was chosen
based on the more sensitive, stable, efficient signal processing and the decreased computational cost when compared
to two other main EMG detection methods. In order to implement the Lanyi and Adler EMG signal methodology, our
device will first intake a raw analog signal which will then be converted to a digital signal where gesture events will
be triggered by a series of discrete numbers we will program commands for.
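As an illustration of the processing chain, the sketch below implements a generic double-threshold onset detector on a rectified, sampled EMG stream: an amplitude threshold, plus a requirement that at least m of n successive samples exceed it. This is a simplified stand-in for the improved Xu and Adler formulation,12 with the thresholds and window sizes chosen arbitrarily for demonstration.

import numpy as np

def detect_onsets(emg, amp_thresh, m=8, n=10):
    """Return sample indices where a muscle activation (gesture event) begins.

    emg:         1-D array of band-passed EMG samples
    amp_thresh:  first (amplitude) threshold on the rectified signal
    m, n:        second threshold: at least m of n successive samples must
                 exceed amp_thresh to declare an activation
    """
    rectified = np.abs(emg)
    above = rectified > amp_thresh
    onsets, active = [], False
    for i in range(len(above) - n):
        window_hits = int(np.sum(above[i:i + n]))
        if not active and window_hits >= m:
            onsets.append(i)
            active = True
        elif active and window_hits == 0:
            active = False                 # require full relaxation before re-arming
    return onsets

# Synthetic example: quiet baseline with one burst of activity.
rng = np.random.default_rng(0)
signal = rng.normal(0, 0.05, 1000)
signal[400:520] += rng.normal(0, 0.6, 120)   # simulated contraction
print(detect_onsets(signal, amp_thresh=0.3))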
D. Remote Control Display Interface
The current design (AR2) includes an investigation into the use of telestration to assist astronauts in identifying resources or tools. The telestrator has been used to clarify actions and points of interest in sports media since 1982.13 Telestration allows a remote user (not the person wearing the device) to augment the display with drawings (illustrations) and text, generally to elucidate an instruction or procedure. The implementation of this technology would allow astronauts to ask for the location of specific items and receive visually aided guidance from mission control or the IVA astronaut. Such guidance may include circling an object, highlighting locations for the next step, and providing navigational assistance.
A similar application of this concept is the System for Telementoring with Augmented Reality (STAR). The STAR project is designed to increase the mentor’s and trainee’s sense of co-presence through an augmented visual channel, which led to measurable improvements in the trainee’s surgical performance. During a trial in which surgery residents were asked to perform a lower-leg fasciotomy on cadaver models, researchers found that “participants who benefited from telementoring using our system received a higher Individual Performance Score; and they reported higher usability and self confidence levels.”14
By implementing this kind of “telementoring,” astronauts would be able to receive assistance whenever they require it. Automated systems can improve many aspects of a task, but the technology is not yet perfect; telestration allows human input to assist where automated programs fall short.
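As a sketch of what the remote-to-HUD channel might carry, the following hypothetical message format serializes a telestration stroke (normalized image coordinates, color, and an optional label) for transmission from a ground or IVA console to the headset. The field names and transport are assumptions for illustration, not the implemented protocol.

import json
from dataclasses import dataclass, asdict
from typing import List, Tuple, Optional

@dataclass
class TelestrationStroke:
    author: str                          # "IVA", "MCC", etc.
    points: List[Tuple[float, float]]    # normalized (0-1) image coordinates
    color: str = "#00FF00"               # default matches the HUD's green text
    label: Optional[str] = None          # e.g. "torque here"
    frame_id: int = 0                    # still/video frame the stroke refers to

def encode(stroke: TelestrationStroke) -> str:
    """Serialize a stroke for transmission to the headset."""
    return json.dumps(asdict(stroke))

def decode(payload: str) -> TelestrationStroke:
    """Rebuild the stroke on the headset side for rendering as an overlay."""
    return TelestrationStroke(**json.loads(payload))

circle_bolt = TelestrationStroke(
    author="IVA",
    points=[(0.42, 0.31), (0.45, 0.28), (0.48, 0.31), (0.45, 0.34), (0.42, 0.31)],
    label="loosen this bolt next",
)
print(decode(encode(circle_bolt)))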
E. Additional Features Under Investigation
1. Object Recognition
Using Unity and Vuforia (commercial applications for game development and object recognition, respectively)
we can create an object model database for storing recognition models. With 3D object scanning we are able to trigger
virtual events upon recognition. These include displaying interactive task procedures, highlighting and outlining
unique objects, and triggering visual navigational aids. The term ‘event trigger’ is being used because the EVA
crewmember may have multiple ways of triggering virtual events, such as voice or EMG gestures. Using object
recognition, the AR system may highlight the unique objects used for a task or task step. This feature can be toggled
on-off by the user. When the EVA crewmember enters object recognition mode, each unique element is minimally
highlighted to signify that it can be interacted with. When the EVA crew member looks at a unique object registered
in the database, an event is triggered that highlights the object in question and displays information associated with that object. The displayed information can then be moved around the EVA crewmember’s visual field via movement of the tool, and pinned for more control over the visual space.
The team is also exploring the use of computer vision by training an object recognition model (a convolutional neural network). The model could assist the EVA crewmember with identifying undefined objects. If the model is too uncertain about an undefined object, the program could present a list of candidate options from which the crewmember could select, and the information associated with the chosen object would then be loaded.
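The fallback behavior described above could look like the following sketch, where a hypothetical classifier's scores are thresholded and, when confidence is low, the top few candidate labels are offered to the crewmember for selection. The labels, threshold, and scores are illustrative assumptions.

import numpy as np

LABELS = ["pistol grip tool", "tether hook", "wire tie", "socket extension"]
CONFIDENCE_THRESHOLD = 0.80   # below this, ask the crewmember to choose

def softmax(logits):
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

def classify_or_ask(logits, k=3):
    """Return a confident label, or a short candidate list for manual selection."""
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(np.argmax(probs))
    if probs[best] >= CONFIDENCE_THRESHOLD:
        return {"label": LABELS[best], "confidence": float(probs[best])}
    top_k = np.argsort(probs)[::-1][:k]
    return {"ask_user": [LABELS[i] for i in top_k]}

print(classify_or_ask([4.0, 0.2, 0.1, 0.0]))   # confident -> single label
print(classify_or_ask([1.2, 1.0, 0.9, 0.1]))   # uncertain -> candidate list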
The Vuforia AR software development kit (SDK) shows promise, and the team is analyzing its possible integration with a machine learning model. The Vuforia tracker contains the computer vision algorithms that detect real-world objects in the camera video frames. Based on the camera image, different algorithms detect new targets or markers and evaluate virtual buttons. The results are stored in a state object that is used by the video background renderer and can be accessed from application code.15 The team will look into using the information stored in these state objects to help identify undefined objects.
2. Navigation
In order to successfully navigate the outside of the International Space Station, an astronaut can be aided by a
route shown to them step-by-step. If the astronaut is required to use certain tools for a task, the location of the tools
can be highlighted. Once the task is completed, another path can be displayed, aiding navigation from their position
to return the equipment to storage locations and end the task. In addition to location indicators and wayfinding arrows,
a minimap (a small map showing the user’s location and orientation in the immediate vicinity) will contain the main objective(s), tool locations, and other task-relevant points on the ISS.
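As an illustration of the step-by-step routing idea (separate from the Unity implementation described below), the sketch models a handful of hypothetical ISS worksites as a graph and runs a breadth-first search to produce an ordered list of waypoints for the HUD to display. The node names and connectivity are invented for the example.

from collections import deque

# Hypothetical translation-path graph between worksites.
ADJACENT = {
    "Airlock":               ["Truss S0"],
    "Truss S0":              ["Airlock", "Truss S1", "Lab exterior"],
    "Truss S1":              ["Truss S0", "Worksite: pump module"],
    "Lab exterior":          ["Truss S0", "Toolbox stowage"],
    "Toolbox stowage":       ["Lab exterior"],
    "Worksite: pump module": ["Truss S1"],
}

def route(start, goal):
    """Breadth-first search returning the waypoint sequence from start to goal."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in ADJACENT[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

# Route to pick up tools, then proceed to the worksite.
print(route("Airlock", "Toolbox stowage"))
print(route("Toolbox stowage", "Worksite: pump module"))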
To implement navigation markers and paths within our user interface (UI), we will use various application
program interfaces (APIs), scripts, and toolkits to establish objective markers and a minimap of the surroundings to
the astronauts. The objective marker UI will be implemented using the HoloToolkit for Unity and the Unity scripting API.
Objective markers will be called in response to the astronaut’s request for a graphical representation of objective
directions, and will use the WorldAnchor class built into the UnityEngine. One option for further minimap
improvements is usage of the HoloLens textured 3D minimap project application, which gives greater detail of the
user’s surroundings by taking pictures with the HoloLens, creating a spatial map of the area, and using those pictures
to create a fairly accurate 3D representation of the surrounding area.16
To further establish a path that the astronaut can utilize, the spatial mapping renderer feature in Unity can be used
to create a path to get to the objectives. Utilizing Stimulant, a HoloLens app, we know that it is possible to create a
path for the user. Using the Wayfinding and spatialized audio capabilities of the HoloLens, important tools needed for
a task would be marked with the aforementioned objective marker UI. A further enhanced implementation would
require greater compatibility with Object Recognition (see ‘Object Recognition’ section).
The minimap can show the entire external area of the ISS, including cargo and crew capsules. The minimap can
be placed at any location in the user’s view, and markers will be placed in the map to signify where the user is with
respect to the larger map, as well as to indicate the desired direction of navigation. If the minimap becomes a
distraction, the user can remove it from the display. The spatial mapping renderer feature in Unity is necessary to create the features above and to give a detailed map of the station. Once the minimap software is complete, it will be integrated into the HoloLens application and adapted for control via audio commands and EMG gestures while in a spacesuit.
F. Adaptive and Adaptable Interface Elements
Human-centered design takes an approach of looking first at the human, their limitations, and their preferences
during the initial design of a system. This becomes very important in the context of high-workload tasks requiring
most, if not all, of the operators’ attentional resources. The presentation of too much or too little information can cause
information overload or reduced situation awareness, respectively. It has been shown that operators prefer computer
intervention only if absolutely necessary to maintain the integrity of the system.17 Moreover, even important
information conveyed at the wrong time (e.g. during a task that requires the operator’s full focus) can force task-
switching that limits attentional resources18-20. These preferences, however, are variable between persons.
Multisensory displays and adaptable and adaptive automation have the potential to overcome this limitation of
the classical design approaches. A system that can be modified by the user, adapt to new environments, and carry redundant modes of operation and information presentation has the potential to be helpful in any domain and for any individual. Additionally, it can help overcome another fundamental flaw in automation design: the automation can
only function as well as the designer can predict the situations it will encounter. When sending humans to relatively
new planetary bodies, where the crew will encounter situations that cannot be planned for, automation adaptability
will be crucial to ensure flexibility. Therefore, the automation framework must be made as robust as possible, giving
the human-operator the ability to alter the automation in real-time as they see fit for their situation.
The baseline design (AR1) included some flexible elements that proved desirable as key elements of the display
based on expert feedback. These elements included a “Scratch Pad” that acted as a custom-built submenu of the specific statistics the operator wished to track, and a semantic dictionary that allows the system to learn user-preferred, synonymous verbal keywords (see the sketch below). The focus of the current design (AR2) is twofold: 1) expand the previously developed capabilities to incorporate personalization of more elements of the display; and 2) take advantage of the
operator training session prior to testing as an opportunity for the system to learn the preferences of the operator
(analogous to EVA training periods for astronauts). As some examples of the former, the display will have the ability
to provide varying formats of displayed information (e.g. textual vs. graphical) as well as multiple levels of fidelity
(e.g. gauge showing current state vs. time-history plot) in alternative frames of reference (e.g. global- vs. local- vs.
hybrid-space). We also aim to provide personalization of global parameters of the visual displays such as nominal
brightness and contrast, depth of field, color scheme, etc.
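A minimal sketch of the semantic dictionary idea, in which user-preferred phrases observed during training are mapped onto canonical voice keywords; the canonical keywords and seed phrases below are assumptions for illustration.

class SemanticDictionary:
    """Maps user-preferred phrases onto the system's canonical voice keywords."""

    def __init__(self):
        # Canonical keyword -> set of accepted phrasings (seeded with defaults).
        self.synonyms = {
            "show stats": {"show stats"},
            "next step": {"next step"},
        }

    def learn(self, canonical, user_phrase):
        """Record a synonym observed during the operator's training session."""
        self.synonyms.setdefault(canonical, set()).add(user_phrase.lower())

    def resolve(self, utterance):
        """Return the canonical keyword for an utterance, or None if unknown."""
        phrase = utterance.lower()
        for canonical, phrases in self.synonyms.items():
            if any(p in phrase for p in phrases):
                return canonical
        return None

d = SemanticDictionary()
d.learn("show stats", "give me telemetry")
d.learn("next step", "what's next")
print(d.resolve("okay, give me telemetry"))   # -> "show stats"
print(d.resolve("what's next on the list"))   # -> "next step"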
III. Methodology
The designs described above will be evaluated in a simulated space environment at the University of Colorado
Boulder Bioastronautics laboratory. The human subject experiment has been approved by the University of Colorado
Boulder Institutional Review Board. During the experiment, the subject will be timed, videoed, and motion-tracked
while they perform human-in-the-loop simulation tasks throughout the mockup environment. These data will allow
for quantification of the task completion times, and qualitative assessment of how the subject completes each task
(noting collisions, navigation paths taken, and errors made on procedures) as measures of performance. They will also
provide information on the specific commands and informatics used by the subject, and geospatial information during
task completion to inform future iterations of the design. Subject comments throughout the task will also be annotated
and used for qualitative analysis of system functionality.
Upon completing a simulation, the subject will complete a questionnaire on the usability of the system using a modified System Usability Scale (mSUS), featuring 10 questions about the subject’s perception of the usability of the environment in which they just completed the test. Subjects answer these questions on a 5-point Likert
scale.21 Subjects will also be asked about their subjective assessment of workload after completing the task, using a
shortened version of the standardized NASA Task Load Index (TLX) survey.22 For efficiency of data collection, the subjects will use an electronic format with slide-rule bars that have qualitative bounds (i.e., low, high) and are quantitatively encoded into workload scores.
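For reference, the sketch below scores a standard 10-item SUS response set (odd items positively worded, even items negatively worded, total scaled to 0-100).21 The modified mSUS used here may word or weight items differently, so this is only an approximation of the scoring step, and the example responses are invented.

def sus_score(responses):
    """Standard SUS scoring: responses are ten 1-5 Likert ratings, item 1 first.

    Odd-numbered items contribute (rating - 1); even-numbered items contribute
    (5 - rating); the sum is scaled by 2.5 to give a 0-100 usability score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten Likert ratings between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example response set from one (hypothetical) subject.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))   # -> 85.0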
A. Simulation Environment
The simulations will include following a set of procedures that consist of a combination of various actions, such
as pressing buttons, unlocking containers with keys, using basic tools (e.g. screwdriver, wrench, etc.), using a display
interface, reconfiguring volume simulators (e.g. empty cardboard boxes), or navigating across the simulation
environment. The “inside” spacecraft mockup environment is circular and approximately 3 meters in diameter (Fig.
6). It includes sleep and science stations (Fig. 6b.), and communications and galley stations (Fig. 6c.).
The “outside” EVA environment is an empty rectangular space approximately 3 x 4 meters in which the subject
will interact with virtual objects and volume simulators, located directly “outside” the mockup door. For safety, the
participants will be instructed to walk at all times when navigating through the environments, even during simulated
“emergency response” scenarios.
Subjects will complete three different simulations:
1) “Inside” intra-vehicular activity work station task (IVA)
2) “Outside” extra-vehicular activity work task (EVA)
3) Emergency response task (EMR)
The simulation order will be randomized for each subject such that each of the six possible permutations of the three simulations is assigned to at least one subject in each environment. To minimize the time commitment to perform
the experiment, subjects will perform the three simulations in one of three possible (randomly assigned) environments:
1) No Augmented Reality HUD (NAR)
2) AR HUD design #1 (baseline) (AR1)
3) AR HUD design #2 (AR2)
Figure 6. Spacecraft mockup environment. Habitat is roughly 3 meters in diameter and 6 meters tall.
B. Data Analysis
We anticipate enrolling 36 subjects (12 per environment) who meet the age and pre-screening criteria. Estimating the detectable effect size for a linear mixed model test, with α = 0.05 and a power of 0.80, this experiment would be able to detect η² larger than 0.27.
To evaluate measures of performance between environments (between subjects) a linear mixed model (possibly
repeated measures ANOVA) will be used. Performance will be assessed using the timing and video data. These
measures are likely to be normally distributed because they are continuous variables. The statistical factor will be
experimental condition/environment (AR condition). Since it is possible for any given task to be slower or faster than
the others purely by task design, task number will also be included as a blocking factor. Tasks will be specifically
designed to aim for equivalent difficulty, and any effects due to task differences would be avoided by the
counterbalanced ordering. However, this statistical model would allow us to quantify these sources of variability.
The measures of effectiveness, such as deviations from protocol and task errors, are likely to be rare events and not normally distributed; therefore, nonparametric statistics will be used, specifically the Wilcoxon signed-rank test for dependent samples. Each of these measures will be pooled across all tasks within each AR condition, since the primary interest is in differences between AR environments rather than within a task group. This will also increase the power of the nonparametric test. Measures of subjective preference graded with Likert scales and the mSUS will also be assessed using the Wilcoxon signed-rank test.
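A minimal sketch of the nonparametric comparison, assuming paired (dependent) samples of pooled task-error counts and mSUS scores for two AR conditions as described above; all data values below are invented solely for illustration.

from scipy import stats

# Hypothetical pooled error counts per task, paired across two AR conditions.
errors_nar = [3, 2, 4, 1, 5, 3, 2, 4, 3, 2, 6, 1]
errors_ar2 = [1, 1, 2, 0, 3, 1, 1, 2, 2, 1, 3, 0]

# Wilcoxon signed-rank test on the paired differences.
statistic, p_value = stats.wilcoxon(errors_nar, errors_ar2)
print(f"errors: W = {statistic}, p = {p_value:.4f}")

# Likert / mSUS scores can be compared the same way.
msus_nar = [62.5, 70.0, 55.0, 65.0, 72.5, 60.0]
msus_ar2 = [80.0, 77.5, 72.5, 85.0, 75.0, 82.5]
statistic, p_value = stats.wilcoxon(msus_nar, msus_ar2)
print(f"mSUS:   W = {statistic}, p = {p_value:.4f}")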
IV. Conclusion
The most prominent aspect that makes these designs unique is the focus on minimalism in the visual modality. Because vision is the richest information processing channel, inhibiting it in any way can be catastrophic to performance. With a combination of minimal backgrounds (i.e., virtual objects), minimal head-fixed elements, the ability to easily remove individual components or the entire display, and monochromatic green text for high visibility against all background environments, the HUD proved not to be a distraction during preliminary testing of the baseline design.
Similarly, the ability to open and close glossaries of either informatics or auditory commands for manipulating the display with a single command proved desirable and helpful, especially for those quickly trained on the system. This could be analogous to just-in-time training performed with astronauts for mission tasks occurring long after ground training.
Furthermore, the team’s expert advisers particularly thought the verbal readout capability was one of the most
beneficial aspects of the design, as it liberates the crewmember from the need to communicate directly with an IV
crewmember or continue to reference a visual checklist (be it physical or virtual). Additionally, this functionality
reduced the crewmember’s reliance on memorizing the HUD organization and architecture. When analyzing specific statistics, they did not have to remember where in the menu hierarchy a statistic was located, nor request that it be visually pushed to the scratchpad, which helped maintain focus on the current task.
Another notable aspect of the design was the implementation of the Scratch Pad that enabled the crewmember to
build a custom submenu of statistics they personally wanted to monitor. All feedback received on this aspect during
preliminary testing was unanimously positive. Though there was little need to leverage the scratch pad during the
tasks completed in preliminary testing, reports and expert advice suggested this feature would be “liberating”.
Aspects of the design that had promising potential, but needed further improvement included the layout structure
of the head-fixed (internal) HUD, reliance on speech commands, and consistency between ego- and exocentric visual
HUD components. The layout structure of the egocentric HUD elements was something the team analyzed thoroughly
in the early stages of design. We believe, however, that the comments received relate not to the spatial locations of the HUD elements, but rather to the visual alignment and justification of text within the display, which creates an “invisible” structure across HUD elements.
The reliance on speech commands was a deliberate design choice due to the inherent limitations of hand gestures
in an EVA environment (i.e. limited dexterity and increased effort associated with making hand movements). The
team holds its stance on the importance of leveraging speech commands over hand gestures, though perhaps with
future AR technology advancements, gaze tracking inside of the head- or helmet-mounted display will improve. If so,
this provides an exciting new realm of interaction with the HUD. It certainly has the potential to be the most desirable
mode of interaction as it would require practically no effort on the part of the user, and overcomes the severe reliance
on speech recognition to manipulate the system.
One of the common pieces of feedback the team found particularly useful was that there was a disconnect between
the egocentric and exocentric visual displays with respect to style and structure. It was difficult for some to switch
between them in order to acquire necessary information due to the visual disparity. The team used the external
clipboard as a rich source of information, freed from limitations of task-distraction and visual clutter. However, we
saw during testing that richer information – namely, with respect to color and organization – does not inherently
improve information allocation. Rather, we have found that we should strive for unity among the color scheme, organization, and functionality of both minimalistic and information-rich displays to better enable concurrent usage.
The aspect of the design that proved unhelpful in the current technological state was the use of fiducial markers
and object recognition for tool and object identification and tracking. During development, the team found that physical object recognition through image mapping with the Vuforia framework required too much detail to consistently recognize and track objects. This method was abandoned early in the design, and fiducial markers were employed instead. Fiducial markers are comparatively simple for object recognition, as they are planar, 2D objects with high-contrast macro-details. Regardless of the simplicity of the marker, however, the current hardware built into the HoloLens proved insufficient to pick up small items or track large objects within a usable depth of field. This is
promising technology for future implementations once hardware advancements have improved. With current
technology however, we have found that this is not a useful aspect for the HUD design. Lessons learned from
preliminary testing will inform AR2 design and facilitate proceeding to human subject testing.
Human-computer interaction technologies used to control increasingly complex systems are evolving, and should
be leveraged for the control of spacesuit systems. A key challenge in using such technologies, however, is the lack of
user interface standards for novel interaction methods provided by system elements like AR. Because spacewalks and
spacesuit simulators do not have the exposure and feedback necessary for traditional UI design (typically involving thousands or millions of iterations), this research focuses on broadly exploring some of the most promising ideas for
controlling spacesuit systems, and developing efficient, repeatable methodologies to evaluate user interfaces. Through
our initial human factors evaluations we aim to provide translatable insight as to what methods and modalities are
most appropriate for delivering specific types of information in varying spaceflight operational scenarios.
References
1Chappell, S.P. et al. “Risk of Injury and Compromised Performance Due to EVA Operations.” NASA Evidence Report. HRP
Human Health Countermeasures Element. (2017).
2Thomas, K.S., McMann, H.J. “U.S. Spacesuits”. Springer, New York, (2012). Print.
3NASA. “International Space Station (ISS) EVA Suit Water Intrusion, High Visibility Close Call, IRIS Case Number: S-2013-
199-00005”. NASA Technical Report. (2013)
4Thuot, P.J. “Preliminary Draft Intelsat Section STS-49 Crew Report” NASA Technical Report. (1993).
5Slutsky, David. “The Effective Use of Graphs.” Journal of Wrist Surgery, v.3, n.2, (2014): 67-68.
6Rosenfield, Mark. “Computer vision syndrome (a.k.a. digital eye strain).” Optometry in Practice, v.17. i.1. (2016).
7Majewski, Maciej, and Wojciech Kacalak. "Conceptual design of innovative speech interfaces with augmented reality and
interactive systems for controlling loader cranes." Artificial Intelligence Perspectives in Intelligent Systems. Springer, Cham: 237-
247 (2016).
8Pearl, C. “Designing Voice User Interfaces”, O’Reilly Media Inc., Sebastopol, CA, (2016). Print.
9Hegarty, Mary. "The cognitive science of visual-spatial displays: Implications for design." Topics in cognitive science 3.3
(2011): 446-474.
10Bishu, Klute. “The effects of extravehicular activities (EVA) gloves on human performance”. International Journal of
Industrial Ergonomics v.16 (1995): 165-174.
11Lobo-Prat, Joan, et al. “Evaluation of EMG, Force and Joystick as Control Interfaces for Active Arm Supports.” Journal of
NeuroEngineering and Rehabilitation, vol. 11, no. 1, (2014): 68.
12Xu, L., Adler, A. “An improved method for muscle activation detection during gait.” Canadian Conference on Electrical and Computer Engineering, v.1, (2004): 357-360.
13Beacham, Frank. “The Telestrator Celebrates 35 Years at the Super Bowl.” Welcome to The Broadcast Bridge - Connecting
IT to Broadcast, The Broadcast Bridge, (2017).
14Anderson, Daniel, et al. “A First-Person Mentee Second-Person Mentor AR Interface for Surgical Telementoring.” In Adjunct
Proceedings of the IEEE and ACM International Symposium for Mixed and Augmented Reality (2018 - To Appear).
15Ibanez, Alexandro Simonetti, and Joseph Padres Figueras. “Vuforia v1.5 SDK: Analysis and Evaluation of Capabilities.”
Universitat Politecnica De Catalunya, (2013).
16Odom, J. “Proof of Concept: HoloLens App Shrinks a Room Down to a Miniature 3D Map.” Next Reality. (2017).
17 Sim, Cummings, Smith. “Past, present and future applications of human supervisory control in space missions”. Acta
Astronautica, v.62 (2008): 648-655.
18Oulasvirta, Antti. “Interaction in 4-second Bursts: The Fragmented Nature of Attentional Resources in Mobile HCI”, CHI
(2005).
19Butler, Miller, et al. “A formal method approach to the analysis of mode confusion”. 17th AIAA/IEEE Digital Avionics Systems
Conference (1998).
20Marquez. “Human-automation collaboration: decision support for lunar and planetary exploration”, Aeronautics and
Astronautics, MIT (2007).
21Brooke, J. “SUS – A quick and dirty usability scale.” Usability Evaluation in Industry, (1996): 189-194.
22Hart, S.G., Stavenland, L.E., “Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research”.
In Human Mental Workload- edited by P. A. Hancock, N. Meshkati (1988), pp. 139.