69th International Astronautical Congress (IAC), Bremen, Germany, 1-5 October 2018.
Copyright ©2018 by the International Astronautical Federation (IAF). All rights reserved.
IAC-18-B3.7.6
A Comparative Ground Study of Prototype Augmented Reality Task Guidance for International Space
Station Stowage Operations
Hiroshi Furuyaa, Lui Wangb, Carmine Elvezioa, Steven Feinera
a Department of Computer Science, Columbia University New York, NY, 10027 USA,
{hf2326, ce2236, skf1}@columbia.edu
b National Aeronautics and Space Administration Johnson Space Center ER6 Houston, TX 77058,
lui.wang-1@nasa.gov
Abstract
Astronauts currently require extensive, near-instantaneous guidance and instruction by ground-based crew to
efficiently and successfully conduct flight operations. As missions take astronauts farther away from Earth and real-
time communication between spacecraft and earthbound crew becomes impossible, astronauts will need technology
that can help them execute flight operations with limited support. While research has shown that Augmented Reality
(AR) can feasibly perform as an aid for completing certain flight tasks, there is little evidence that AR can assist in
completing entire flight operations or improve flight performance metrics such as completion time. This work
addresses stowage operations to investigate how AR can impact flight performance. During stowage operations,
flight crew members transfer cargo items to and from different spacecraft modules. A recent stowage operation
aboard the International Space Station (ISS) took 60 hours to complete with real-time ground crew support. The
prolonged duration of stowage operations and the necessity for crewmembers to travel significant distances make it
an appropriate domain for this investigation. StowageApp is a prototype AR application deployed on Microsoft
HoloLens and developed to assist astronauts in completing stowage operations. This paper describes the design of
StowageApp and presents the results of a user study comparing its performance to that of the current method of
delivering stowage instructions on a handheld tablet device. This within-subject user study was performed in the ISS
Node 2 Harmony, Japanese Experiment Module “Kibo,” Multi-Purpose Logistics Module “Leonardo,” and
Columbus mockups at National Aeronautics and Space Administration (NASA) Johnson Space Center in Houston,
TX, USA. Each participant completed as many of a set of predetermined stowage tasks as they could in two hours in
their assigned condition. Task completion time was measured, along with the number of errors attributed to the
participant. Participants also completed an unweighted NASA TLX survey and provided their opinions in a free-form
exit interview. Results did not reveal significant differences in task completion time, errors committed, or TLX
responses between cargo message content conveyed via StowageApp and via an electronic document on a handheld
tablet device. However, user interviews showed that all but one participant would prefer to use StowageApp over
the handheld device.
Keywords: Augmented reality task guidance; augmented reality space operations; cargo logistics operations;
International Space Station task guidance; human spaceflight electronic procedures; augmented reality inventory and
stowage.
1. Introduction
The per-hour cost of operations for astronaut crew
aboard the International Space Station (ISS) is orders
of magnitude higher than that for ground crew. To
illustrate, a recent letter from former NASA
Administrator Charles Bolden to Congress [2]
disclosed that transportation of astronauts to and from
the ISS would cost $490 million. Since a tour
typically consists of six astronauts with a length of
stay of about six months, this amounts to nearly
$20,000 per hour aboard the ISS per astronaut just for
transportation to and from the ISS. It is clear that
technologies and methods to increase astronaut
efficiency are important. Currently, many tasks, such
as scheduling, are handled by ground teams to
increase the time astronauts can spend conducting
operations, such as research. Despite this effort, crew
is still required to load and unload cargo from visiting
cargo vehicles, a process called stowage operations.
During stowage operations, crew members may be
required to travel back and forth across the ISS and
handle hundreds of items. NASA Inventory and
Stowage Officers (ISOs) have developed a set of documents,
together called a cargo message, to provide detailed
instructions on how to execute these stowage
operations.
The cargo message first and foremost dictates the
actions necessary to maintain the integrity of the
Inventory Management System (i.e., to make sure all
items are present in their expected quantities at their
expected locations). Stowage operations share many
characteristics with order picking, where warehouse
workers must correctly pick and place items from one
location to another. On Earth, order picking has been
identified as the most labor-intensive warehouse
operation [5]. ISOs with whom we are working report
that recent stowage operations aboard the ISS have
taken crew about 60 person-hours to complete,
mirroring the costliness of order-picking operations.
In addition, the mistakes that crew make during
stowage operations, such as picking the wrong
quantity of an item, are similar to the mistakes
frequently seen in order picking. Where extensive
research has been done to introduce new technologies
and techniques to reduce the burden on order pickers
[5], we aim to do so for crew conducting stowage
operations aboard the ISS.
In addition to order-picking challenges, crew are
required to stow cargo items in predetermined order,
location, and sometimes orientation to meet the center
of gravity, mass, and volume restrictions of both the
ISS and the visiting vehicle. Existing literature
recognizes that describing these spatial locations and
actions tends to be fraught with ambiguity and other
difficulties [24]. To address this challenge, ISOs
include step-by-step instructions in the cargo message
to help crew meet these physical stowage
requirements. Furthermore, stowage operations often
involve other forms of procedural instruction. For
example, crew may be required to transfer biohazard
waste cargo, necessitating specific procedures to
minimize the risk of contamination. Therefore, we
find that a cargo message must effectively deliver
procedural task instructions, in addition to order
picking instructions [11].
In space, crew conducting stowage operations
often have more simultaneous demands on their
hands than they can manage. This directly affects
their ability to move, as they often must use their
hands for locomotion. As a result, ISOs estimate that
each trip a crew member makes to a new location to
pick an item costs ten minutes. To
minimize unnecessary locomotion, ISOs write cargo
messages to encourage crew members to follow a
travel-minimizing path.
Therefore, cargo messages must effectively
deliver three different types of instructions: order
picking, procedural, and pathfinding. ISOs observe
that the current form of cargo messages is adequate,
but they also find that requiring crew to hold the
cargo message, whether a stack of papers or an
electronic document displayed on a handheld device,
exacerbates challenges crew face in microgravity
where there are often many competing uses for their
hands. Sometimes, a crew member recruits another to
hold the cargo message and read aloud the
instructions instead of viewing the cargo message
him/herself.
In this work, we apply AR to replace the cargo
message. In particular, we focus on helping crew to
identify cargo items and quantities, comprehend
procedural instructions, and direct their attention
towards specified locations aboard their spacecraft.
To borrow the nomenclature of Neumann and
Majoros [21], we use AR to assist crew in the
informational or cognitive phase of a cargo task, in
contrast to the psychomotor phase, where crew
perform physical manipulations of cargo items.
We present an AR research prototype,
StowageApp, that utilizes 2D visual cues situated in
3D space, 3D visual cues placed according to the
physical scene (in-situ visualizations), and head-gaze-
based interactions to enable a comprehensive, hands-
free delivery of a cargo message to a crew member.
In this paper we also report the results of a pilot user
study comparing StowageApp to the current form of
cargo messages displayed on a handheld tablet device
for a mock stowage operation conducted in the full-
scale ISS mockup in NASA Johnson Space Center
(JSC) in Houston, TX. We also discuss some of the
challenges we encountered in using StowageApp for
a simulated spaceflight operation.
Our contributions are as follows: We develop
StowageApp, an AR research prototype designed to
complement or replace the current form of cargo
messages used in ISS stowage operations, a type of
operation where crew must comprehend and act on
many different types of instructions. We compare
StowageApp to the current form of cargo message for
a simulated stowage operation in the full-scale ISS
mockup on speed of performance, accuracy of item
transfer and procedure execution, ease of use, and
user preference. Our user study did not show any
significant difference in the completion time, error
rates, or TLX survey [13] results between the two
methods of delivering cargo messages. On the other
hand, when asked, study participants generally
preferred StowageApp over the handheld display.
2. Related work
Due to the nature of cargo messages, StowageApp
overlaps with previous work in AR order picking,
electronic procedures, AR task guidance, placement
of 2D windows in 3D space, pathfinding, and AR
interactions for spaceflight-specific activities.
De Koster’s discussion of low-level, picker-to-
parts order picking formally identifies many problems
and techniques that ISOs have encountered in
planning and executing stowage operations with crew
aboard the ISS, including the reduction of travel time
(an increasing function of travel distance),
implementation of zone-picking techniques,
maximizing the use of space, and the utilization of
heuristically planned paths instead of analytically
optimal ones [5]. Previous attempts at developing
head-worn display interfaces have not resulted in
implementations that are superior to traditional paper-
based order picking aids in warehouse scenarios [10,
23], although laboratory heads-up display prototypes
have shown promise in outperforming current
techniques for order picking [12].
At NASA, Kortenkamp et al. [18] presented a
procedure representation language demonstrating
advances in electronic procedures for human
spaceflight operations. Tversky et al. [26] showed
that interactive animated graphics can be more
effective than comparable static graphics in
conveying complex system information. Other work
has shown that although pictorial cues are important
in achieving fast informational comprehension, text
instructions are still important for accurately
conveying procedural information [3].
There is also a large body of work exploring AR
as a vehicle for procedural guidance and instruction.
Neumann and Majoros [21] presented and classified
distinct cognitive areas that AR media can
complement in manufacturing and maintenance tasks,
which was corroborated by others [22]. Henderson
and Feiner demonstrated AR prototypes that address
these cognitive processes in armored vehicle
maintenance [14] and in aircraft engine combustion
chamber assembly [15], showcasing the potential
benefits for AR applications in procedural task
guidance. Tang [25] demonstrated the utility of AR as
an instructional medium relative to printed media in
an object assembly task.
The concept of 2D windows in the 3D world has
been well studied. Feiner and colleagues [8]
demonstrated positioning of 2D windows relative to
the user’s head, their body, and objects in the
surrounding world, while Billinghurst and colleagues
[1] showed that spatializing information on a
wearable display improves search performance.
Many other works have expanded on these ideas,
applying them to many different use cases and forms
[7].
Likewise, there is much existing work on
providing AR pathfinding cues. Höllerer
and colleagues demonstrated indoor navigation using
world-in-miniature (WIM) views and arrows placed in
the indoor environment [16].
Schmalstieg [28] demonstrated 3D signpost
techniques in addition to 2D overview maps in AR.
Butz [4] investigated many other types of navigation
aid techniques, including animated walkthroughs and
landmark presentation. Other work demonstrates path
planning for virtual pathfinding based on Dijkstra’s
algorithm [17]. Foyle [9] demonstrated techniques
placing critical but non-pathfinding visualizations
away from pathfinding visualizations to reduce
cognitive tunneling.
In recent years, there have been increasing efforts
to develop AR applications for human spaceflight.
Markov-Vetter and colleagues [20] performed a study
suggesting that a virtual keyboard displayed fixed to a
real surface is easier to accurately interact with in 0-g
environments. While Markov-Vetter and colleagues
[19] also presented an AR prototype procedure
guidance system for operating ISS payload racks,
they were unable to show improvements in
performance relative to a PDF document displaying
the procedural instructions. Vovk and colleagues [27]
showed that AR guidance displayed on a Microsoft
HoloLens did not cause simulator sickness in users
performing a spaceflight task.
3. StowageApp
We created a prototype, StowageApp, to
investigate the application of AR to a realistic human
spaceflight operation. StowageApp delivers content
based on cargo messages that ISOs wrote for this
work. We tested StowageApp with users performing
mock stowage operations, as dictated by these cargo
messages in the full-scale ISS mockup at NASA JSC.
We discuss StowageApp as an assembly of four parts:
a 2D window that lives in the 3D world, or the “main
panel”; the order picking and procedural information
representation; a heads-up display that complements
the main panel; and in-situ pathfinding and
localization visualizations. Figs. 1 and 2 show
examples of these elements.
Figure 1. Frontal view of StowageApp. (Screenshot from
StowageApp.)
Figure 2. View of StowageApp in Kibo module in full-scale
ISS mockup at JSC. (Screenshot from StowageApp.)
3.1 Main panel
The main panel is a 2D window living in 3D
space whose position and orientation are mostly
defined egocentrically. The panel is billboarded to the
user; that is, the panel always rotates in-place to face
the user. The panel also maintains its position within
a range of distance to the user. If the user approaches
the panel closer than the lower end of the range, the
panel moves away from the user. Likewise, if the user
moves farther from the main panel than the higher
end of the range, the panel will approach the user
until its distance to the user is once again within the
set range, as illustrated in Fig. 3. Pilot testing in the
full-scale ISS mockup showed that the far distance
should be adjusted to meet the user’s visual acuity
such that the text is never too small to read, while the
near distance should be adjusted such that the panel is
never significantly larger than the field of view of the
display.
Figure 3. Distance-keeping behavior of main panel. (Screenshot from StowageApp.)
The panel exhibits similar behavior for rotation in
the user’s horizontal plane within an angle range
instead of a distance range. When this range is set to
be larger than the field of view of the HoloLens, this
angle-keeping behavior will cause the main panel to
appear to inhabit a region just out of view. As a
result, the main panel will appear to “fall behind” out
of view while the user walks, but be readily
accessible by a slight turn of the head. In
StowageApp, the main panel stays within 60° of the
center of the user’s field of view, compared to the
approximately 30° field of view of the HoloLens.
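To make this behavior concrete, the following is a minimal Unity C# sketch of the main panel's follow logic. Only the 60° angle limit comes from the text above; the distance limits, smoothing factor, and names are illustrative assumptions, and the sketch keeps the panel at head height for simplicity.

```csharp
using UnityEngine;

// Sketch of the main panel follow behavior (Section 3.1). Only the 60 deg
// angle limit is stated in the paper; other values are assumptions.
public class MainPanelFollow : MonoBehaviour
{
    public Transform head;             // the user's head (AR camera)
    public float minDistance = 0.8f;   // assumed near limit (m)
    public float maxDistance = 2.0f;   // assumed far limit (m), tuned to visual acuity
    public float maxAngle = 60f;       // panel stays within 60 deg of gaze center
    public float smoothing = 3f;       // assumed follow speed

    void LateUpdate()
    {
        Vector3 toPanel = transform.position - head.position;

        // Keep the panel within [minDistance, maxDistance] of the user.
        float distance = Mathf.Clamp(toPanel.magnitude, minDistance, maxDistance);

        // Clamp the horizontal angle between the gaze and the panel direction,
        // so the panel "falls behind" but stays a slight head turn away.
        Vector3 gaze = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
        Vector3 dir = Vector3.ProjectOnPlane(toPanel, Vector3.up).normalized;
        float angle = Mathf.Clamp(Vector3.SignedAngle(gaze, dir, Vector3.up),
                                  -maxAngle, maxAngle);
        Vector3 target = head.position +
                         Quaternion.AngleAxis(angle, Vector3.up) * gaze * distance;
        transform.position = Vector3.Lerp(transform.position, target,
                                          Time.deltaTime * smoothing);

        // Billboarding: rotate in place so the panel always faces the user.
        transform.rotation =
            Quaternion.LookRotation(transform.position - head.position);
    }
}
```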
Previous iterations of the main panel design
included different behaviors such as the main panel
adhering to physical surfaces and keeping only
distance and not rotational constraints relative to the
user. Feedback from ISO testers in the design phase
indicated that the current behavior of the panel is
preferable to these previous behaviors, especially
after a user moves between locations.
While the primary mode of interaction with 2D UI
elements such as buttons is the HoloLens “air-tap,”
we also included a head-gaze interaction for
registering input when the user has their hands full.
When a user focuses on a selectable UI element, a
torus fills up around the cursor, with the input click
registering when the torus is completely full (see Fig.
4).
Figure 4. Hands-free head-gaze–based input registration
timer just after activation. (Screenshot from StowageApp.)
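A minimal sketch of this dwell-based selection follows, assuming a dwell duration (here 1.5 s, which the paper does not state) and a Collider on each selectable element; the ratio of gaze time to dwell time would drive the torus fill around the cursor.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Sketch of head-gaze dwell selection (Section 3.1, Fig. 4). The 1.5 s dwell
// duration is an assumption.
public class DwellSelectable : MonoBehaviour
{
    public Transform head;             // AR camera transform
    public float dwellSeconds = 1.5f;  // assumed dwell duration
    public UnityEvent onClick;         // fired when the torus fills completely
    float gazeTime;

    void Update()
    {
        // The element is "focused" while the head-gaze ray hits its Collider.
        bool focused = Physics.Raycast(head.position, head.forward,
                                       out RaycastHit hit) &&
                       hit.transform == transform;
        gazeTime = focused ? gazeTime + Time.deltaTime : 0f;

        // gazeTime / dwellSeconds would drive the torus fill visualization.
        if (gazeTime >= dwellSeconds)
        {
            onClick.Invoke();  // register the input click
            gazeTime = 0f;
        }
    }
}
```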
3.2 Procedure Representation
Cargo messages are typically represented using a
set of three documents: a step-by-step procedure list
called the choreography, a spreadsheet of detailed
instructions and order-picking information, and a
photos document. Each step in the procedure list
refers to specific cells in the spreadsheet, while the
photos document depicts specific items in the spreadsheet.
An early iteration of StowageApp attempted to
organize the spreadsheet according to the order
specified in the procedures document, and display the
spreadsheet as a paginated table on the main
panel. ISO testers found this translation of the cargo
message to AR to be confusing and had trouble
understanding the instructions. This inspired the
development of a new method for displaying cargo
message content that we used in StowageApp.
3.2.1 Intent of cargo message instructions
Each item that appears in a cargo message has the
following information associated with it: identifying
data (barcode, serial number, part name, matching
image in separate document, etc.), current location
code (identifying a particular location aboard the ISS
or visiting vehicle), destination location code (the
location aboard the ISS or visiting vehicle to where
the item will be moved), and additional procedural
instructions (e.g., special handling instructions). Bold,
all-caps action words are used to standardize the
description of these procedures. Crew members are to
find the specified item by its identifying information
at its current location and perform the indicated
procedural instructions before placing the item at its
designated destination location. While identifying
information and location information is contained
within the spreadsheet document, procedure
information may be contained in both the
choreography and the spreadsheet. As a result, crew
is asked to first read the choreography to understand
higher-level procedural instructions (e.g., “each of the
following items must be photographed before
packing”), consult the spreadsheet to find the relevant
items, read the spreadsheet rows to perform item-
specific procedures (e.g., pack this item in two layers
of bubble wrap), and finally place the items in their
indicated destination locations.
3.2.2 StowageApp representation
The current form of cargo message thus requires
the crew to mentally combine the order picking and
procedural instructions before acting on them.
StowageApp represents both the order picking
instructions and the procedural instructions together
as a serial execution of procedural instructions.
Where the cargo message implicitly instructs crew
members to place items into compartments or cargo
transfer bags (CTBs) in the ISS or visiting vehicle,
StowageApp assigns such instructions an action word
like the other procedural instructions in the cargo
message. We were able to categorize these action
words based on where in the serial execution of
stowage operation actions they could occur. Fig. 5
shows a simple flowchart of the interaction
between different categories of procedural action
words for a stowage operation primarily targeting the
packing of CTBs.
Figure 5. Flowchart of action word categories for simple
pack stowage operation.
For example, if an item were to be packed into a
bubble bag, retrieval of the specific bubble bag would
be a “preparatory action,” the picking of the item and
placement into the bubble bag would both be
“procedure actions,” and the placement of the bubble
bag that now contains the item of interest into its final
destination inside of a particular CTB would be “item
placement.” Multiple items could be included in a
specific action provided they shared the parameters
for that action (i.e., the action is the same and the
items are co-located, require the same tools for
procedural execution, and share a destination
location). StowageApp displays these actions one at a
time, with the action word and identifying
information of the items displayed in text on the main
panel, as shown in Fig. 6.
Figure 6. Views of main panel. (Top image from desktop
simulation of StowageApp for clarity, bottom screenshot
from StowageApp.)
Fig. 6 also shows that StowageApp displays
images of the items specified in the procedure.
StowageApp preserves the overall order of
procedure execution in a cargo message such that the
path-minimization decisions made by the ISO authors
are still expressed.
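To make this representation concrete, the following C# sketch shows one way the serialized procedure might be modeled in data. All type and field names are our illustrative assumptions, not StowageApp's actual schema; the grouping rule in the comment restates the co-location constraint above.

```csharp
using System.Collections.Generic;

// Sketch of a serial cargo-message representation (Section 3.2.2).
// Names are illustrative assumptions, not StowageApp's actual schema.
public enum ActionCategory { PreparatoryAction, ProcedureAction, ItemPlacement }

public class CargoItem
{
    public string Barcode;                // identifying data
    public string SerialNumber;
    public string PartName;
    public string CurrentLocationCode;    // location aboard ISS or visiting vehicle
    public string DestinationLocationCode;
    public string ImagePath;              // item photo shown on the main panel
}

public class ProcedureStep
{
    public ActionCategory Category;
    public string ActionWord;             // bold, all-caps verb, e.g., "PACK"
    public string Instructions;           // item-specific procedural text
    // Multiple items may share one step only if they are co-located, require
    // the same tools, and share a destination location.
    public List<CargoItem> Items = new List<CargoItem>();
}

// The whole cargo message becomes an ordered list of steps, preserving the
// path-minimizing order authored by the ISOs:
// List<ProcedureStep> procedure = LoadCargoMessage("packOps.json"); // hypothetical loader
```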
3.3 Heads-up Display
When crew transition into the psychomotor phase
of a stowage task, they stop actively referring to the
cargo message. In this stage, the main panel is likely
out of view and users are unable to refer to it without
removing their gaze from the task at hand. We have
found during testing, however, that users frequently
do exactly that, as many details, such as item part
numbers, are difficult to remember precisely. We
implemented a simple head-fixed panel, the HUD,
that displays textual information on the fringes of the
viewing area that users can refer to at a glance. The
HUD displays a subset of the information displayed
on the main panel, with the intention of providing a
user only the information that they need at that very
moment. The information on the HUD updates as the
user progresses through the procedure. The specific
data displayed on the HUD were chosen based
on ISO feedback on what is minimally needed to complete
a stowage operation task. The HUD also displays the
current procedure action step as well as the previous
and next action steps. The choice to display these data
is based on feedback from ISOs during testing, who
indicated a desire to provide users with a higher-level
understanding of the procedures they are completing.
Users can toggle the HUD on and off using a voice
command. Figs. 1, 2, and 7 show the HUD.
Figure 7. Views of HUD. (Top: image from desktop
simulation of StowageApp for clarity; bottom: screenshot
from StowageApp.)
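A minimal sketch of the HUD's update and toggle logic follows, reusing the hypothetical ProcedureStep type from the Section 3.2.2 sketch; parenting a panel to the camera is one standard way to keep it head-fixed in Unity.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

// Sketch of the head-fixed HUD (Section 3.3), reusing the hypothetical
// ProcedureStep type from the earlier sketch.
public class Hud : MonoBehaviour
{
    public Text previousStep, currentStep, nextStep; // labels on the view fringe

    void Awake()
    {
        // Head-fixed: parent the HUD to the camera so it moves with the head.
        transform.SetParent(Camera.main.transform, worldPositionStays: false);
    }

    // Called by the procedure controller whenever the user advances a step.
    public void OnStepChanged(List<ProcedureStep> steps, int i)
    {
        previousStep.text = i > 0 ? steps[i - 1].ActionWord : "";
        currentStep.text  = steps[i].ActionWord + ": " + steps[i].Instructions;
        nextStep.text     = i + 1 < steps.Count ? steps[i + 1].ActionWord : "";
    }

    // Bound to a voice command in StowageApp; see Section 3.5.
    public void Toggle() { gameObject.SetActive(!gameObject.activeSelf); }
}
```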
3.4 Pathfinding and localization visualizations
Item locations are described with compartment
location codes, which, while easy to learn, require
some training and experience to follow proficiently.
ISOs report that occasionally an astronaut
will find themselves in the wrong module aboard the
ISS because of a mistake in interpreting a location
code. Location codes are also difficult to follow in
visiting vehicles, where astronauts do not benefit
from having constant exposure to the location coding
of the ISS. Furthermore, some visiting vehicles may
have compartments that are not outwardly visible.
StowageApp provides a navigation aid in the form of
a solid line tracing the path from the user’s current
location to the location where the next cargo
procedure task takes place. A 3D conical arrow
traverses the path to indicate direction. The path from
the user to the compartment location is calculated
using Dijkstra’s algorithm. StowageApp also
highlights the compartment location with an outlined
cube and a text label attached to the compartment.
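The paper states only that Dijkstra's algorithm computes the path; the sketch below assumes a hand-authored waypoint graph spanning the mockup modules, with edge costs given by Euclidean distance. The resulting polyline would drive the traced line and the moving conical arrow.

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Sketch of path computation (Section 3.4): Dijkstra's algorithm over an
// assumed waypoint graph; the paper confirms only the algorithm choice.
public class Waypoint
{
    public Vector3 Position;
    public List<Waypoint> Neighbors = new List<Waypoint>();
}

public static class Pathfinder
{
    public static List<Waypoint> Shortest(Waypoint start, Waypoint goal)
    {
        var dist = new Dictionary<Waypoint, float> { [start] = 0f };
        var prev = new Dictionary<Waypoint, Waypoint>();
        var open = new List<Waypoint> { start };

        while (open.Count > 0)
        {
            // No priority queue in this sketch: pick the cheapest open node.
            var u = open.OrderBy(w => dist[w]).First();
            open.Remove(u);
            if (u == goal) break;

            foreach (var v in u.Neighbors)
            {
                float alt = dist[u] + Vector3.Distance(u.Position, v.Position);
                if (!dist.ContainsKey(v) || alt < dist[v])
                {
                    dist[v] = alt;
                    prev[v] = u;
                    if (!open.Contains(v)) open.Add(v);
                }
            }
        }

        // Walk predecessors back from the goal to recover the path.
        var path = new List<Waypoint>();
        for (var n = goal; n != null;
             n = prev.TryGetValue(n, out var p) ? p : null)
            path.Insert(0, n);
        return path;
    }
}
```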
Registration of the physical locations in the ISS
mockup was performed manually. StowageApp
contains an unrendered model of the ISS mockup
facility. When first running StowageApp, the user can
choose to identify points on the spatial map
constructed by the HoloLens that correspond to fixed
points on the model, aligning the model with the
physical world as mapped by the HoloLens. Path and
compartment visualizations are registered relative to
this model. Fig. 8 shows an example of these
visualizations.
Figure 8. View of pathfinding and compartment localization
visualizations. (Screenshot from StowageApp.)
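One simple way to realize this manual alignment, assuming the mockup model and the HoloLens spatial map are both gravity-aligned, is to solve for a yaw rotation plus a translation from two corresponding point pairs. This sketch is our assumption, not necessarily StowageApp's exact method.

```csharp
using UnityEngine;

// Sketch of manual model-to-world registration (Section 3.4), assuming a
// gravity-aligned model and spatial map with unit scale.
public static class Registration
{
    // localA/localB: fixed points in the model's local space.
    // worldA/worldB: the same points identified on the HoloLens spatial map.
    public static void Align(Transform model,
                             Vector3 localA, Vector3 localB,
                             Vector3 worldA, Vector3 worldB)
    {
        // Yaw that rotates the model's A->B segment onto the world's.
        Vector3 m = Vector3.ProjectOnPlane(
            model.TransformDirection(localB - localA), Vector3.up);
        Vector3 w = Vector3.ProjectOnPlane(worldB - worldA, Vector3.up);
        model.rotation = Quaternion.AngleAxis(
            Vector3.SignedAngle(m, w, Vector3.up), Vector3.up) * model.rotation;

        // Translate so point A coincides with its world counterpart. Path and
        // compartment visualizations parented to the model inherit the alignment.
        model.position += worldA - model.TransformPoint(localA);
    }
}
```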
3.5 Voice Commands
StowageApp contains support for voice
commands for interacting with the procedure and for
toggling any of the visualizations on or off. In our
testing, however, we found that the HoloLens was not
able to consistently recognize the voice commands in
the ISS mockup modules. Therefore, we excluded
voice commands from the user evaluations.
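On HoloLens, such commands are typically implemented with Unity's KeywordRecognizer; the sketch below uses that API, though the specific phrases are illustrative assumptions.

```csharp
using UnityEngine;
using UnityEngine.Events;
using UnityEngine.Windows.Speech;

// Sketch of voice command support (Section 3.5) using Unity's
// KeywordRecognizer on HoloLens. The phrases are assumptions.
public class VoiceCommands : MonoBehaviour
{
    public UnityEvent onNext, onPrevious, onToggleHud; // wired in the Inspector
    KeywordRecognizer recognizer;

    void Start()
    {
        recognizer = new KeywordRecognizer(
            new[] { "next step", "previous step", "toggle HUD" });
        recognizer.OnPhraseRecognized += args =>
        {
            if (args.text == "next step") onNext.Invoke();
            else if (args.text == "previous step") onPrevious.Invoke();
            else if (args.text == "toggle HUD") onToggleHud.Invoke();
        };
        recognizer.Start();
    }

    void OnDestroy() { if (recognizer != null) recognizer.Dispose(); }
}
```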
4. Implementation
We implemented StowageApp in Unity 3D.
StowageApp also makes extensive use of the
MercuryMessaging framework [6] in Unity 3D for
communication between components that are not
related to each other in the scene spatial hierarchy
(e.g., the HUD and the main panel). StowageApp is
designed to be deployed on a Microsoft HoloLens, as
that is currently the only AR head-worn display
aboard the ISS.
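The sketch below illustrates the kind of hierarchy-independent publish/subscribe decoupling that motivates using a messaging framework here; it is a generic event bus of our own, not the MercuryMessaging API (see [6] for the actual framework).

```csharp
using System;
using System.Collections.Generic;

// A generic publish/subscribe bus illustrating the decoupling that a
// messaging framework provides between components (e.g., HUD and main panel)
// that are unrelated in the scene hierarchy. NOT the MercuryMessaging API.
public static class MessageBus
{
    static readonly Dictionary<Type, List<Action<object>>> handlers =
        new Dictionary<Type, List<Action<object>>>();

    public static void Subscribe<T>(Action<T> handler)
    {
        if (!handlers.TryGetValue(typeof(T), out var list))
            handlers[typeof(T)] = list = new List<Action<object>>();
        list.Add(o => handler((T)o));
    }

    public static void Publish<T>(T message)
    {
        if (handlers.TryGetValue(typeof(T), out var list))
            foreach (var h in list) h(message);
    }
}

// Usage sketch: the procedure controller publishes a step change; the HUD and
// main panel each subscribe without referencing one another.
//   MessageBus.Subscribe<StepChanged>(m => hud.OnStepChanged(m.Steps, m.Index));
//   MessageBus.Publish(new StepChanged { Steps = steps, Index = 3 });
// (StepChanged is a hypothetical message type.)
```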
5. Previous pilot study
Prior to conducting the final pilot user study, we
performed a between-subject pilot study with 25
individuals including several ISOs. Each session was
45 minutes long and we asked participants to pack as
many CTBs as they could in that time according to a
cargo message we provided. The cargo message was
written by non-participant ISOs. We observed a few
participants attempt to complete the operation in an
order different than what was dictated by the cargo
message, which is expected based on problems that
have been documented in order-picking optimization
[5]. In the final user study, we instructed users to
follow the order dictated by the cargo message to
eliminate experimental effects due to alternate
stowage operation strategies. Pilot study feedback
also influenced fine tuning of main panel behaviors,
and the inclusion of the pathfinding visualization and
location code text labels. Results for ISO participants
did not appear to differ significantly from those of
non-ISOs, contrary to our expectations. Based on this
pilot study, the ISOs decided that they wanted to have
longer user study sessions of at least two hours,
leading to changes to the final pilot user study.
6. User study
We conducted a pilot user study to compare the
performance of StowageApp and a handheld tablet
device displaying a cargo message. The cargo
messages we used for this user study were written by
ISOs. We used real CTBs, but had to use laminated
photos as stand-ins for spaceflight cargo items. The
ISOs designed the cargo message to take two hours to
complete. We chose to test with ISO-authored cargo
messages in the full-scale ISS mockup to achieve as
authentic a testing environment as possible given our
resources.
6.1 Hypotheses
Prior to the experiment, we formulated the
following hypotheses:
H1. StowageApp will be faster than the current
modality for delivering cargo messages.
H2. StowageApp will be more accurate than the
current modality for delivering cargo messages.
H3. Participants will find StowageApp more
comfortable to use than the current modality.
H4. Participants will prefer using StowageApp.
We based these hypotheses on the belief that our
methods for displaying cargo messages would free
users’ hands and thereby improve task completion
times. Furthermore, we believed that
our procedure representation design would make it
less likely that a user would inadvertently skip
procedure steps or misread the instructions.
6.2 Methods
6.2.1 Participants
Due to limited availability of blocks of at
least two hours in the full-scale ISS mockup at NASA
JSC, we were only able to include nine participants in
our user study, whom we recruited from the NASA
JSC community, including ISOs, NASA engineers
and other employees, and NASA interns. One
participant had seen a demo of an early version of
StowageApp, but was not familiar with the
techniques used in the version of StowageApp with
which we conducted the user study. This participant
had previous experience using a HoloLens, but stated
that they still did not feel familiar with the device.
Other participants had no previous experience with
AR.
6.2.2 Location and Equipment
The full-scale ISS mockup in the Space Vehicle
Mockup Facility at NASA JSC includes mockups of
the modules currently part of the ISS. For our user
study, we had participants complete stowage
operations tasks distributed between four of these
module mockups: Node 2 Harmony, Japanese
Experiment Module “Kibo,” Multi-Purpose Logistics
Module “Leonardo,” and Columbus. We used flight-
like CTBs for all tasks. We were unable to procure a
sufficient quantity of flight-like hardware for cargo
items, instead choosing to laminate photos of these
items with item ID labels on the back. StowageApp
was deployed to a Microsoft HoloLens optical see-
through head-worn display. Cargo messages were
displayed on an Apple iPad. Both sets of display
devices are identical to devices that astronauts use
aboard the ISS.
6.2.3 Design
We performed a within-subject user study
consisting of two conditions (StowageApp, Tablet).
Two-hour sessions were blocked by condition, with a
7–10 day break between sessions. Block order was
counterbalanced across participants.
During each session, participants completed a
stowage operation based on a cargo message written
by ISOs to take two hours to complete. This cargo
message simulated a cargo pack operation, in which a
crew member would be asked to pack items
distributed throughout the four modules into a series
of CTBs and place the CTBs into final stowage
locations in the Multi-Purpose Logistics Module,
“Leonardo.” The cargo message instructs participants
to pack a total of eight CTBs. Participants were
instructed to pack them one at a time, in the order
presented in the cargo message to eliminate effects of
participants choosing different order-picking or
pathfinding strategies, which we observed during our
pilot testing.
6.2.4 Procedure
Study coordinators welcomed the participants and
took them through a familiarization briefing on the
goals of stowage operations, the layout of the ISS
mockup and the modules, and instructions on how to
read and interact with their condition device. The
participants were allowed time to try out their
devices.
At the start of each session, the participants were
asked to follow the instructions displayed on their
device. Participants were instructed to follow the
order of operations presented on their device exactly.
We measured the time to pack and place each of the
eight CTBs referenced in the cargo message.
7. Results
We analyzed average time to pack a CTB by the
method used to present the cargo message. We
identified the times to pack the first CTB in each of the
2 (cargo message display method) × 9 (participants)
sessions as outliers, due to an error in the procedure
that caused each user to walk extra distance that they
were not required to walk in any other part of the
stowage operation. We also observed that, despite
undergoing the familiarization procedure, users
would frequently ask questions during the packing of
the first CTB that were not directly related to the
cargo message, such as how to open compartment
doors.
We evaluated each hypothesis for significance
with an α of 0.05.
7.1 H1: Speed of stowage operation completion
An F-test for equality of variances showed that
the variances for each method are different (Fcrit =
0.291, F > 0.295). While Fig. 9 shows that the mean
CTB packing task completion time for the handheld
devices is lower than for StowageApp, a two sample
two-tail t-test assuming unequal variances was unable
to show that this difference is significant (p > 0.08).
Because the F-test result was close to the threshold
value, we also conducted a two-tail t-test assuming
equal variance, which did not produce a significantly
different result (p > 0.075). Furthermore, a Friedman
test did not show that users significantly tended to be
faster with one method than another (p > 0.31).
Therefore, we cannot accept H1.
Figure 9. Mean time to pack one CTB in seconds by
method. (Error bars are 95% confidence intervals.)
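For reference, the unequal-variance comparison above is Welch's two-sample t-test, whose standard form (not specific to this paper) is

$$ t = \frac{\bar{x}_T - \bar{x}_S}{\sqrt{s_T^2/n_T + s_S^2/n_S}}, \qquad \nu \approx \frac{\left(s_T^2/n_T + s_S^2/n_S\right)^2}{\dfrac{(s_T^2/n_T)^2}{n_T - 1} + \dfrac{(s_S^2/n_S)^2}{n_S - 1}}, $$

where subscripts T and S denote the tablet and StowageApp conditions, and ν is the Welch–Satterthwaite approximation to the degrees of freedom.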
7.2 H2: Accuracy of stowage operation completion
Four out of nine participants using the handheld
tablet device committed an error that resulted in at
least one item being stowed in an incorrect location.
Of those four, two committed multiple such errors.
One participant committed one such error while using
StowageApp. Fig. 10 shows the average errors
committed over all participants. An F-test for equality
of variances showed that the variances for each
method are significantly different (p = 0.007), but a
two-tail t-test did not show a significant difference in
the number of errors committed between the handheld
tablet device and StowageApp (p > 0.09). Therefore,
we cannot accept H2.
Figure 10. Mean errors committed by method. (Error bars
are 95% confidence intervals.)
7.3 H3 and H4: NASA-TLX results and user
preference
Fig. 11 shows the results of the NASA TLX
survey. Wilcoxon signed rank tests were applied to
the unweighted NASA-TLX survey responses. For
each category, test results did not show either method
to produce significantly different perceptions of
workload in any category (p = 0.953 for mental
demand, 0.107 for physical demand, 0.678 for
temporal demand, 0.672 for performance, 0.153 for
effort, 0.553 for frustration). Therefore, we cannot
accept H3.
Eight of nine users expressed preference for
StowageApp. A Wilcoxon signed rank test showed
that this preference is significant (p = 0.02), supporting H4.
Figure 11. Unweighted NASA TLX survey responses.
(Error bars are 95% confidence intervals.)
8. Discussion
8.1 Limited sample size
While most of the results were not significant, a
formal user study with a larger sample size would
have greater statistical power to detect differences. In particular,
regarding H2, we observed during the study that users
who committed errors with the handheld tablet device
would skip details or misread the spreadsheet rows.
These errors did not occur with StowageApp,
which we believe is because StowageApp displays
less information at one time than the handheld tablet
device.
8.2 Accounting for errors in completion time
An incorrectly executed stowage operation may
require correction, adding to the overall completion
time of the operation. In this pilot study, we did not
simulate the amount of time it would take to rectify
errors that we observed. Aboard the ISS, errors such
as these may go unnoticed until the items concerned
are needed once again. Once the error has been
discovered, whether additional time is allocated to
address it depends on the situation; sometimes the
error is noted but the item is not retrieved, while in
other instances crew must correct the error, adding to
task completion time. If a crew member must search
for a misplaced item, the ISOs with whom we are
working estimate that the process can take crew more
than 30 minutes, in addition to the time that ISOs take
on the ground to narrow down suspected item
locations before delivering those instructions to crew.
Other times, an item that was missed is simply moved
to the next stowage operation, where it must often
replace another item, which may then itself be pushed
to a later stowage operation. Although it is clear that
errors may cause a dramatic delay in stowage
operations, it would not be appropriate to assume that
every error we observe in our study should be
categorized this way. For our purposes, we can say
only that StowageApp appears to trend towards
reducing the number of errors a crew member may
commit during a stowage operation.
8.3 Hands-free simulation
This pilot study did not include attempts to
emulate certain realities observed in space that
inspired the creation of StowageApp. Perhaps most
significant among them are the difficulties crew
encounter in holding multiple items at once. During
our user study, we observed in the tablet sessions that
every participant would read the spreadsheet on the
handheld tablet device’s screen and then put down the
device in order to free up their hands to open a CTB.
They would place the device next to the CTB to refer
to it as they sorted through the contents of the CTB.
Such behavior would not be effective in space, as an
open CTB may lead to items floating out of it and a
tablet cannot simply be “put down”—crew often use
hook-and-loop fasteners to attach devices to their
flight suits if they want to free their hands. Aboard the
ISS, CTBs are usually stored inside compartments that
must be opened to access the CTBs. In the full-scale
ISS mockup, not all compartment locations include
compartment hardware, instead substituting a panel
with a full-scale photo of a compartment. In these
situations in the pilot study, the CTB that would be
stored in the compartment is placed on the floor near
the compartment location instead. In the tablet
sessions, seven out of nine participants found specific
CTBs by looking directly at the CTBs on the floor
instead of first searching for the compartment
location, as intended and realistic. This
behavior was not observed in StowageApp sessions.
These shortcuts that participants took appear to
disproportionately benefit completion times for
handheld tablet device sessions.
The formal user study to follow this pilot study
will include rules to curtail these shortcuts in favor of
better simulating the realities of space. For example,
participants will be instructed to turn the tablet face-
down when they put it down, so that they cannot view
the screen in a way that would be impractical in space. We
expect this to be impactful as every participant
commented that in light of the hands-free
functionality of StowageApp, it was annoying or
tiring to have to repeatedly pick up and put down the
handheld tablet device. CTBs that cannot be placed
inside of real compartment hardware will be covered
up such that participants cannot read the identifying
information on the CTBs without first finding the
appropriate compartment location. This will force
participants to interact more with the cargo message
location codes and may potentially make it more
difficult for participants who may otherwise avoid
interpreting them.
9. Conclusions and Future Work
We presented StowageApp, a novel AR prototype
deployed to the Microsoft HoloLens and designed to
assist astronauts in completing stowage operations
aboard the ISS. We developed StowageApp in
collaboration with ISO subject matter experts in order
to produce an interface that captures the intent of
stowage operation instructions and allows crew to
interact with them hands-free. StowageApp uses
techniques developed on the ground in domains
including order-picking, electronic procedures, AR
task guidance, 2D windows in 3D space, and
pathfinding, and applies them to stowage operations,
a human spaceflight activity. We evaluated the
performance of StowageApp relative to stowage
operation instructions delivered on a handheld tablet
device in a ground-based pilot user study run in the
full-scale ISS mockup at NASA Johnson Space
Center in which participants completed mock
stowage operations. While we were unable to show
significant performance benefits of StowageApp over
handheld tablet devices, StowageApp allowed
users to complete mock stowage operations without
significantly slowing them down. This is despite the
fact that all participants but one had never used the
HoloLens, while all participants were very familiar
with handheld tablet devices like the one used in the
comparison condition.
We plan to perform a formal user study that will
incorporate changes that we hope will better reveal
the performance differences between the two
methods. These changes include increasing the
number of participants and mandating that the tablet
devices be used in ways that better match how they
would have to be used on the ISS. We hope that this
formal user study will better show whether AR
technology can help solve some of the complex
challenges of human space exploration.
Acknowledgements
This research was funded in part by NASA Space
Technology Research Fellowship Grant
NNX16AM79H and NSF Grant IIS-1514429.
References
[1] M. Billinghurst, J. Bowskill, N. Dyer, and J.
Morphett, “An evaluation of wearable
information spaces,” in Proceedings. IEEE 1998
Virtual Reality Annual International Symposium
(Cat. No.98CB36180), 1998, pp. 20–27.
[2] C. Bolden, “Letter Notifying Congress about
Space Station Contract Modification with
Russia,” 05-Aug-2015.
[3] H. R. Booher, “Relative Comprehensibility of
Pictorial Information and Printed Words in
Proceduralized Instructions,” Hum Factors, vol.
17, no. 3, pp. 266–277, Jun. 1975.
[4] A. Butz, J. Baus, A. Krüger, and M. Lohse, “A
Hybrid Indoor Navigation System,” in
Proceedings of the 6th International Conference
on Intelligent User Interfaces, New York, NY,
USA, 2001, pp. 25–32.
[5] R. de Koster, T. Le-Duc, and K. J. Roodbergen,
“Design and control of warehouse order picking:
A literature review,” European Journal of
Operational Research, vol. 182, no. 2, pp. 481–
501, Oct. 2007.
[6] C. Elvezio, M. Sukan, and S. Feiner, “Mercury: A
Messaging Framework for Modular UI
Components,” in Proceedings of the 2018 CHI
Conference, Montreal, Canada, 2018.
[7] B. Ens, J. D. Hincapié-Ramos, and P. Irani,
“Ethereal Planes: A Design Framework for 2D
Information Space in 3D Mixed Reality
Environments,” in Proceedings of the 2nd ACM
Symposium on Spatial User Interaction, New
York, NY, USA, 2014, pp. 2–12.
[8] S. Feiner, B. MacIntyre, M. Haupt, and E.
Solomon, “Windows on the World: 2D Windows
for 3D Augmented Reality,” in Proceedings of
the 6th Annual ACM Symposium on User
Interface Software and Technology, New York,
NY, USA, 1993, pp. 145–155.
[9] D. C. Foyle, A. D. Andre, and B. L. Hooey,
“Situation Awareness in an Augmented Reality
Cockpit: Design, Viewpoints and Cognitive
Glue,” in Proceedings of the 11th
International Conference on Human Computer
Interaction. Las Vegas, NV, 2005.
[10] M. Funk, S. Mayer, M. Nistor, and A. Schmidt,
“Mobile In-Situ Pick-by-Vision: Order Picking
Support Using a Projector Helmet,” in
Proceedings of the 9th ACM International
Conference on Pervasive Technologies Related
to Assistive Environments, New York, NY, USA,
2016, pp. 45:1–45:4.
[11] R. M. Gagne and W. D. Rohwer, “Instructional
Psychology,” Annu. Rev. Psychol., vol. 20, no. 1,
pp. 381–418, Jan. 1969.
[12] A. Guo et al., “A Comparison of Order Picking
Assisted by Head-up Display (HUD), Cart-
mounted Display (CMD), Light, and Paper Pick
List,” in Proceedings of the 2014 ACM
International Symposium on Wearable
Computers, New York, NY, USA, 2014, pp. 71–
78.
[13] S. G. Hart and L. E. Staveland, “Development of
NASA-TLX (Task Load Index): Results of
Empirical and Theoretical Research,” in
Advances in Psychology, vol. 52, P. A. Hancock
and N. Meshkati, Eds. North-Holland, 1988, pp.
139–183.
[14] S. Henderson and S. Feiner, “Exploring the
Benefits of Augmented Reality Documentation
for Maintenance and Repair,” IEEE Transactions
on Visualization and Computer Graphics, vol.
17, no. 10, pp. 1355–1368, Oct. 2011.
[15] S. J. Henderson and S. K. Feiner, “Augmented
reality in the psychomotor phase of a procedural
task,” in 2011 10th IEEE International
Symposium on Mixed and Augmented Reality,
2011, pp. 191–200.
[16] T. Höllerer, D. Hallaway, N. Tinna, and S.
Feiner, “Steps Toward Accommodating Variable
Position Tracking Accuracy in a Mobile
Augmented Reality System,” in Proc.
AIMS ’01, 2001, pp. 31–37.
[17] B. Huang and N. Liu, “Mobile Navigation Guide
for the Visually Disabled,” Transportation
Research Record: Journal of the Transportation
Research Board, vol. 1885, pp. 28–34, Jan. 2004.
[18] D. Kortenkamp, R. P. Bonasso, D.
Schreckenghost, K. M. Dalal, V. Verma, and L.
Wang, “A Procedure Representation Language
for Human Space Flight Operations,” presented
at the 9th International Symposium on Artificial
Intelligence, Robotics, and Automation for
Space, Los Angeles, CA, 2008.
[19] D. Markov-Vetter and O. Staadt, “A pilot study
for Augmented Reality supported procedure
guidance to operate payload racks on-board the
International Space Station,” in 2013 IEEE
International Symposium on Mixed and
Augmented Reality (ISMAR), 2013, pp. 1–6.
[20] D. Markov-Vetter, E. Moll, and O. Staadt,
“Evaluation of 3D Selection Tasks in Parabolic
Flight Conditions: Pointing Task in Augmented
Reality User Interfaces,” in Proceedings of the
11th ACM SIGGRAPH International Conference
on Virtual-Reality Continuum and Its
Applications in Industry, New York, NY, USA,
2012, pp. 287–294.
[21] U. Neumann and A. Majoros, “Cognitive,
performance, and systems issues for augmented
reality applications in manufacturing and
maintenance,” in Proceedings. IEEE 1998
Virtual Reality Annual International Symposium
(Cat. No.98CB36180), 1998, pp. 4–11.
[22] M. Richardson, G. Jones, and M. Torrance,
“Identifying the task variables that influence
perceived object assembly complexity,”
Ergonomics, vol. 47, no. 9, pp. 945–964, Jul.
2004.
[23] B. Schwerdtfeger et al., “Pick-by-Vision: A first
stress test,” in 2009 8th IEEE International
Symposium on Mixed and Augmented Reality,
2009, pp. 115–124.
[24] L. Talmy, Toward a Cognitive Semantics.
Cambridge, MA: MIT Press, 2000.
[25] A. Tang, C. Owen, F. Biocca, and W. Mou,
“Comparative Effectiveness of Augmented
Reality in Object Assembly,” in Proceedings of
the SIGCHI Conference on Human Factors in
Computing Systems, New York, NY, USA, 2003,
pp. 73–80.
[26] B. Tversky, J. B. Morrison, and M. Betrancourt,
“Animation: Can it facilitate?,” International
Journal of Human-Computer Studies, vol. 57, no.
4, pp. 247–262, 2002.
[27] A. Vovk, F. Wild, W. Guest, and T. Kuula,
“Simulator Sickness in Augmented Reality
Training Using the Microsoft HoloLens,” in
Proceedings of the 2018 CHI Conference on
Human Factors in Computing Systems, New
York, NY, USA, 2018, pp. 209:1–209:9.
[28] D. Wagner and D. Schmalstieg, “First Steps
Towards Handheld Augmented Reality,” in
Proceedings of the 7th IEEE International
Symposium on Wearable Computers,
Washington, DC, USA, 2003, pp. 127–.
... Next to AR studies focusing on ISS operations, other spacerelated AR applications have been investigated. WEKIT and OnSight looked at AR for rover operations (Ramsey, 2015;Ravagnolo et al., 2019aRavagnolo et al., , 2019b, Furuya et al. (2018) explored the benefits of AR for stowage operations and logistics, and Karasinski et al. (2017) investigated the use of a tool combining AR and the IoT for just-in-time training. Lastly, a study called Holo-SEXTANT entailed the development of an AR tool aimed at helping extravehicular (EV) crewmembers while navigating a planned traverse. ...
... Experts commented that the application is ''easier and fas-ter" to use compared to conventional media (e.g., tablets). This is aligned with the common perception of users that hands-free displays are preferable over handheld devices due to the user's freedom to use both hands for a task while having the required information (Alarcon et al., 2020;Furuya et al., 2018). Further details on different aspects of the user interface are listed below. ...
... For AR interfaces, displaying solely necessary information in the user's field of view is a very important aspect, with both astronauts and support engineers mentioning that one needs to avoid cluttering the user's view, hence impairing situational awareness and affecting safety, similarly to what has been studied in Anandapadmanaban et al. (2018), Furuya et al. (2018), Karasinski et al. (2017). The participants suggested optimizing the amount and type of information displayed by the AR-IoT tool. ...
Article
Full-text available
Humans are embarking on a new era of space exploration with the plan of sending crewed spacecraft to the Moon, Mars, and beyond. Extravehicular activities (EVAs) will be an essential part of the scientific activities to be carried out in these missions, and they will involve extensive geological fieldwork. EVAs entail many challenges as real-time support from ground control cannot be provided to astronauts. Hence, new human-machine interfaces are urgently needed to enhance mission autonomy for astronauts and reduce ground communication dependability for real-time operations. This study introduces an Augmented Reality (AR) Internet of Things tool for astronauts to carry out geological activities. It proposes a theoretically-informed user-centred design method supported by expert feedback and an evaluation method. The tool was assessed via questionnaires and semi-structured interviews with European Space Agency (ESA) astronauts and geological field activities experts. Content analysis of the interviews revealed that user satisfaction was the first most mentioned (32% of 139 quotes) usability aspect. Key design factors identified were: displaying solely important information in the field of view while adjusting it to the user's visual acuity, easy usage, extensibility, and simplicity. User interaction was the second most mentioned (24% of 139 quotes) usability aspect, with voice seen as the most intuitive input. Finally, this research highlights important factors determining the usability and operational feasibility of an AR tool for analogue training missions and provides a foundation for future design iterations and an eventual integration of AR into the spacesuit’s visor.
... AR for rover operations was investigated by projects such as WEKIT [139], in fact this project aimed at different use cases, and OnSight [175]. Whereas Furuya et al. [79] investigated the benefits of AR for stowage operations and logistics, the study by Karasinski et al. [99] investigated the use of a tool combining AR and the Internet-of-Things (IoT) for just-in-time training. Finally, an interesting study involved the development of an AR tool, called Holo-SEXTANT, aimed at helping extravehicular (EV) crewmembers while navigating a planned traverse during their extravehicular activities. ...
... Hands-free Voice Button/clicker Double confirmation Interaction [15], [55], [79], [125], [126], [146], [198], [221] [3], [55], [126], [139], [146], [154], [230] newly created newly created [96], [173] 8. Implementation aspects ...
... speech recognition, as well as with hand gestures e.g., via a mechanical interface such as a button/clicker. These different modes of interaction have been investigated in some space-related AR studies as well [79], [123], [125], [139], [146], [198] [221], [230]. Hands-free interaction has been highly recommended by Apollo astronauts as well [154]. ...
Thesis
Full-text available
Humans are embarking on a new era of space exploration with the plan of sending crewed spacecraft to the Moon, Mars and beyond. NASA is committed to land astronauts on the Moon by 2024. Extravehicular activities (EVAs) will involve extensive lunar geological fieldwork. This plan entails many challenges as real-time support from ground control cannot be provided to astronauts who thus need to become more autonomous. Hence, modern human-machine interfaces have to be designed to support astronauts during their future missions. The Augmented Reality (AR) Internet-of-Things (IoT) tool developed for this research introduces a new approach for astronauts to carry out geological activities. A user-centered design method was adopted to design the tool. Questionnaires were sent to and semi-structured interviews were held with ESA astronauts and geological field activities experts to assess the tool. The AR-IoT tool resulted to be potentially usable for future EVAs.
Conference Paper
Full-text available
Augmented Reality is on the rise with consumer-grade smart glasses becoming available in recent years. Those interested in deploying these head-mounted displays need to understand better the effect technology has on the end user. One key aspect potentially hindering the use is motion sickness, a known problem inherited from virtual reality, which so far remains under-explored. In this paper we address this problem by conducting an experiment with 142 subjects in three different industries: aviation, medical, and space. We evaluate whether the Microsoft HoloLens, an augmented reality head-mounted display, causes simulator sickness and how different symptom groups contribute to it (nausea, oculomotor and disorientation). Our findings suggest that the Microsoft HoloLens causes across all participants only negligible symptoms of simulator sickness. Most consumers who use it will face no symptoms while only few experience minimal discomfort in the training environments we tested it in.
Conference Paper
Full-text available
Order picking is one of the most complex and error-prone tasks found in industry. To support workers, many order-picking instruction systems have been proposed. A large number of systems focus on equipping the user with head-mounted displays or equipping the environment with projectors to support the workers. However, combining the user-worn design dimension with in-situ projection has not yet been investigated in the area of order picking. With this paper, we aim to close this gap by introducing HelmetPickAR: a body-worn helmet using in-situ projection to support order picking. Through a user study with 16 participants, we compare HelmetPickAR against a state-of-the-art Pick-by-Paper approach. The results reveal that HelmetPickAR leads to significantly less cognitive effort for the worker during order-picking tasks. While no difference was found in errors or picking time, placing time increased.
Conference Paper
Full-text available
Wearable and contextually aware technologies have great applicability in task guidance systems. Order picking is the task of collecting items from inventory in a warehouse and sorting them for distribution; this process accounts for about 60% of the total operational costs of these warehouses. Current practice in industry includes paper pick lists and pick-by-light systems. We evaluated order picking assisted by four approaches: head-up display (HUD); cart-mounted display (CMD); pick-by-light; and paper pick list. We report accuracy, error types, task time, subjective task load and user preferences for all four approaches. The findings suggest that pick-by-HUD and pick-by-CMD are superior on all metrics to the current practices of pick-by-paper and pick-by-light.
Conference Paper
Full-text available
Information spaces are virtual workspaces that help us manage information by mapping it to the physical environment. This widely influential concept has been interpreted in a variety of forms, often in conjunction with mixed reality. We present Ethereal Planes, a design framework that ties together many existing variations of 2D information spaces. Ethereal Planes is aimed at assisting the design of user interfaces for next-generation technologies such as head-worn displays. From an extensive literature review, we encapsulated the common attributes of existing novel designs in seven design dimensions. Mapping the reviewed designs to the framework dimensions reveals a set of common usage patterns. We discuss how the Ethereal Planes framework can be methodically applied to help inspire new designs. We provide a concrete example of the framework's utility during the design of the Personal Cockpit, a window management system for head-worn displays.
Article
Full-text available
A location-aware navigation system has been developed and implemented for the visually disabled or visually impaired; the system is designed to improve individuals' independent mobility. This self-contained, portable system integrates several technologies, including mobile personal digital assistants, voice synthesis, a geographic information system (GIS), and a differential Global Positioning System (DGPS). The system is meant to augment the various sensory inputs available to the visually impaired user. It provides the user with navigation assistance, making use of voice cues that convey contextual building and feature information at regular intervals, driven by automatic GPS readings and a GIS database. To improve the efficiency of contextual-information retrieval, an indexing method based on road segmentation was developed to replace the exhaustive search method. Experimental results show that the performance of the system in searching for buildings, landmarks, and other features around a road is significantly improved by this indexing method.
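The road-segmentation index described above can be pictured as bucketing GIS features by the road segment they border, so that a GPS fix triggers a lookup of the current and adjacent segments instead of an exhaustive scan of the whole feature database. The following sketch is ours; the class and identifiers are hypothetical, not taken from the paper:

from collections import defaultdict

class SegmentIndex:
    def __init__(self):
        # road segment id -> list of features along that segment
        self._by_segment = defaultdict(list)

    def add_feature(self, segment_id, feature):
        self._by_segment[segment_id].append(feature)

    def features_near(self, segment_ids):
        # Return features on the current and adjacent segments only.
        found = []
        for sid in segment_ids:
            found.extend(self._by_segment[sid])
        return found

index = SegmentIndex()
index.add_feature("seg-12", "library main entrance")
index.add_feature("seg-13", "bus stop 4")
# A GPS fix maps to segment seg-12; announce only its features and
# those of its neighbours, not the whole database.
print(index.features_near(["seg-12", "seg-13"]))

With such an index, query cost scales with the handful of features on nearby segments rather than with the size of the entire GIS database, consistent with the reported performance improvement.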
Conference Paper
In recent years, the entity-component-system pattern has become a fundamental feature of the software architectures of game-development environments such as Unity and Unreal, which are used extensively in developing 3D user interfaces. In these systems, UI components typically respond to events, requiring programmers to write application-specific callback functions. In some cases, components are organized in a hierarchy that is used to propagate events among vertically connected components. When components need to communicate horizontally, programmers must connect those components manually and register/unregister events as needed. Moreover, events and callback signatures may be incompatible, making modular UIs cumbersome to build and share within or across applications. To address these problems, we introduce a messaging framework, Mercury, to facilitate communication among components. We provide an overview of Mercury, outline its underlying protocol and how it propagates messages to responders using relay nodes, describe a reference implementation in Unity, and present example systems built using Mercury to explain its advantages.
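The relay-node propagation the abstract describes, in which messages reach responders without hand-wired callbacks between components, can be approximated generically. The sketch below illustrates the pattern only; the class and method names are ours, not Mercury's Unity API:

# Generic relay-node message propagation, loosely in the spirit of
# the pattern described above; names are illustrative only.
class Relay:
    def __init__(self):
        self._responders = []  # components that accept messages
        self._children = []    # downstream relay nodes

    def register(self, responder):
        self._responders.append(responder)

    def connect(self, child_relay):
        self._children.append(child_relay)

    def send(self, message):
        # Deliver to local responders, then propagate through child
        # relays, so horizontally placed components need no direct
        # references to one another.
        for responder in self._responders:
            responder(message)
        for child in self._children:
            child.send(message)

root, ui_panel = Relay(), Relay()
root.connect(ui_panel)
ui_panel.register(lambda msg: print("panel got:", msg))
root.send({"type": "SelectionChanged", "target": "crate-7"})

Two components communicate by sharing a relay anywhere along the chain, which is what removes the manual register/unregister wiring the abstract criticizes.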
Article
The MIT Press (Language, Speech, and Communication series), 2000, volume I: viii+495 pp., volume II: viii+565 pp.; hardbound, ISBN 0-262-20120-8 (volume I), 0-262-20121-6 (volume II), 0-262-20122-4 (the set), $60.00 per volume, $110.00 the set. Reviewed by Keith Allan, Monash University. Talmy is a pioneer of cognitive linguistics and something of an icon. His most important papers from 1972 to 1999 are re-presented here, organized in terms of their subject matter. All papers have been revised, and most expanded. However, they are not particularly well integrated in this massive work, which combines patches of brilliance with unnecessary wordiness and repetition; it needed a stern editor.[1] Volume I, consisting of an introduction and eight chapters, is "concerned with the patterns in which and the processes by which conceptual content is organized in language" (I.2). Volume II, consisting of the identical introduction and (a different) eight chapters, presents typologies for the structure of concepts, proceeding to on-line processing of these in culture and narrative. Talmy's viewpoint is that "the relation between a linguistic expression and something in the world cannot be direct but must, in effect, 'pass through' the mind of the language user" (I.309, n. 14). Cognitive systems (language, perception, reasoning, affect, attention, memory, cultural structure, and motor control) share some fundamental properties as well as each having unique structural properties; they are not autonomous like Fodor modules (I.16). "Semantics simply pertains to conceptual content as it is organized in language" (I.4). Conceptual content includes affect and perception, only accessible through introspection, which must be employed with "controlled manipulation of linguistic material" (I.5), that is, correlation with other findings across languages and other sciences. Talmy pays closer attention to meaning-form mappings than do many writers on semantics. He is best known for bringing the notions of Figure and Ground, a schematic system of attention, into linguistics in the early 1970s. "In Figure-Ground organization, the entity that functions as the Figure of a situation attracts focal attention and is the entity whose characteristics and fate are of concern. The Ground entity is in the periphery of attention and functions as a reference entity used to characterize the Figural properties of concern" (I.12-13). It is within this framework that the book is written. [1] There are inconsistencies; for example, there is no explanation, when it is first mentioned, of Atsugewi's affiliation (it is a Hokan language of Northern California). Abbreviations are only occasionally explained. There are many minor typos. Also, I disagree with many of Talmy's grammaticality judgments, which would lead me to different analyses of a small portion of his material.
Conference Paper
We present the current state of our development and testing of Augmented Reality-supported spaceflight procedures for intra-vehicular payload activities. Our vision is to enable the ground team and the flight crew to easily author and operate AR guidelines without programming or AR expertise. For visualization of the procedural instructions on an HMD, 3D-registered visual aids are overlaid onto the payload model, operated by additional voice control. Embedded informational resources (e.g., images and videos) are provided through a mobile tangible user interface. In a pilot study performed at the ESA European Astronaut Centre by application domain experts, we evaluated performance, workload, and acceptance by comparing our AR system with the conventional method of displaying PDF documents of the procedure.
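As a rough illustration of voice-controlled stepping through a procedure of this kind (the commands, step text, and structure below are assumptions for illustration, not the authors' implementation):

# Hypothetical voice-command dispatcher for stepping through an AR
# procedure; commands and step contents are illustrative only.
PROCEDURE = [
    "Open payload rack door",
    "Disconnect power cable P1",
    "Remove sample container",
]

class ProcedurePlayer:
    def __init__(self, steps):
        self.steps = steps
        self.index = 0

    def on_voice_command(self, command):
        if command == "next" and self.index < len(self.steps) - 1:
            self.index += 1
        elif command == "previous" and self.index > 0:
            self.index -= 1
        # Stand-in for re-rendering the current step's overlay.
        print(f"Step {self.index + 1}: {self.steps[self.index]}")

player = ProcedurePlayer(PROCEDURE)
player.on_voice_command("next")
player.on_voice_command("previous")

In the actual system, each step change would re-render the 3D-registered visual aids on the payload model rather than print text.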