University of Michigan
School of Information
MTOP
Close encounters of the HCI kind: An ethnography of human-centered approaches in space technology
Matthew Garvin
Thesis Committee
Kentaro Toyama
Nilton Rennó
Date
04.19.2021
ACKNOWLEDGEMENTS
I would like to thank my thesis advisor Kentaro Toyama. His door was always open and he was
always available to help steer me in the right direction when I needed it.
I would also like to thank Dr. Nilton Rennó for his guidance and support throughout. To Riley
Schnee, SEDS President, for bringing me into the fold and teaching me how to reach for the stars, I
am gratefully indebted to you.
Finally, I have to express my undying gratitude to my sister Marci, and her husband Craig, for their
boundless patience and support. I would not be here today without either of them. To Ruby and
Sully, Stewie, Finn, Fiona, Phillip, and Charley. To Bandit and Blackie. Thank you.
Matthew Garvin
ABSTRACT
As NASA prepares to embark on long-duration crewed space exploration, solving issues related to the design of information systems and digital interfaces becomes increasingly important. Such systems have to enable informed decision-making for crews who, for the first time in the history of human space exploration, will begin conducting missions autonomously from Mission Control. This change requires effectively replacing dozens of flight controllers and other subject matter experts who, up to this point, have been able to provide problem-solving capabilities through real-time communication. This thesis presents an ethnographic exploration of astronaut autonomy by unpacking the human-centered approaches observed through fifteen months of fieldwork on teams inside and outside of NASA. Observational, interview, and archival data were synthesized and analyzed to focus on two questions: How do systems engineers conceptualize user needs when designing useful systems to support astronaut autonomy? What is the emerging role of HCI in supporting deep space exploration? Our findings suggest that engineers tend to prefer a technology-centered rather than a human-centered design approach. These findings also indicate that systems engineers may excel at engaging with human-centered design activities when such activities are invested in upfront and incorporate user needs into requirements. Additional insights describe the prioritization of system requirements over user needs, and how the absence of a human-centered culture causes problems later in the development process.
TABLE OF CONTENTS
ACKNOWLEDGEMENTS
ABSTRACT
TABLE OF CONTENTS
INTRODUCTION
RELATED WORK
HCI and human-centered design
Human-centered space tech
Human Research Program
RESEARCH METHOD
Ethnography
Access and data collection
Data analysis
AN ANALYSIS OF HUMAN-CENTEREDNESS IN SPACE TECHNOLOGY
Culture of technology
Toward Human-centeredness
Challenges
DISCUSSION
On the emerging role of HCI in space technology
CONCLUSION
REFERENCES
APPENDIX A - General Interview Protocol
APPENDIX B - Keywords for KWIC coding
APPENDIX C - Model-Based Systems Engineering Vee
APPENDIX D - Double Diamond Design model
APPENDIX E - Boy’s model of Human Systems Integration
INTRODUCTION
NASA is going back to the Moon, and then to Mars. If current plans proceed as scheduled, the
Artemis mission will put American boots back on the lunar surface fifty-two years after the last
Apollo mission in 1972, only this time we plan to stay [1]. In the intervening decades, the emphasis in computing technology has shifted from hardware to software, from software to user-centered design, and from user-centered design to socio-technical systems [2]. Supporting astronaut autonomy with optimal interface design and “smart” technology is coming to the forefront. Here we define autonomy as the degree to which an individual or team is free to make decisions and solve problems at their discretion [3], [4]. To achieve these goals, technology not only has to be easy to use, but also has to enable crews to operate autonomously from the dozens of flight controllers and subject matter experts they currently have access to for real-time troubleshooting, just-in-time training, system maintenance, and procedure execution [5], [6]. This increased need to design for crew autonomy presents critical crew health and safety risks that require further investment in interface design to minimize such risk and ensure good usability and user experience.
For these technologies to effectively bridge the gap left by the absence of real-time communication with Mission Control, information systems and digital interfaces have to be designed with a focus on supporting the human user. Crew interfaces have to enable crews to adapt to the unforeseen rather than be developed with the brute-force autonomy common in commercial “smart” technologies. What we find in practice is that while current autonomous
computer systems like autopilot are seen as reliable, Harris describes automation as promoting
boredom and disengagement when not properly designed with the human user in mind [7].
Complicating matters, such systems are designed to hand control back to the human at the first
sign of trouble, whether the human is ready or not, increasing the likelihood of what tends to be
classified as human error [8]. Often this human error is attributed to the end-user rather than the
numerous designers, engineers, or other stakeholders who have a hand in the build along the
way, and often the recommended solution is more training for the user.
Furthermore, Holden reports that many issues related to human-computer interaction are only
informally discussed and not followed up on because of a high level of confidence embedded in
the culture of the crew [5]. This may cause crew members to be more accepting of usability
issues if they think they can figure it out. Even concerning the International Space Station,
Holden explains that due to poor usability, many of the information displays are controlled by the
ground rather than the crew itself, a clear indication of an inadequate design process [5].
To develop insights concerning these challenges, we conducted a design ethnography, working as a UX designer on two project teams contributing to NASA competitions, followed by an internship as a UI Architect within an element of the NASA Human Research Program. In this thesis, we report
on findings from an ethnography that involved a series of twelve semi-structured interviews and
an extensive period of participant observation.
The research questions include:
How do space engineers conceptualize user needs when designing information systems
to support astronaut autonomy?
What is the emerging role of human-computer interaction in supporting astronaut
autonomy?
Our findings suggest a technology-centered culture that prioritizes system requirements over
user needs, and how this absence of a human-centered culture can cause problems later in the
development process. Additional insights indicate that systems engineers may excel at
engaging with human-centered design activities when such activities are invested in upfront and
incorporate user needs into requirements.
This work contributes to centering the human in human-centered design projects by synthesizing human-computer interaction and human-systems integration, with the aim of improving the value of the ideas and experiential outcomes generated by NASA University Design Challenges and supporting Human Systems Integration at NASA in advancing crewed space exploration capabilities. The following Related Work section is organized to provide an overview of human-
centered design within the field of human-computer interaction, outline prior work in human-
centered space technology, and finally related work within NASA’s Human Research Program.
The remaining sections detail the data collection and analysis, present findings, and discuss
directions for future research.
RELATED WORK
Our research is framed around three areas of related work: HCI and human-centered design;
Human-centeredness in space technology; and NASA’s Human Research Program.
HCI and human-centered design
Much prior work exists on human-centered design in the field of HCI. ISO 9241-210:2010 emphasizes human-centered design in interactive systems, defining user experience as the user’s perception and response while using an artifact, and stating that “achieving a good UX is the goal of the human-centered design approach” [9]. In a review of CHI publications, Harrison et al. discuss the emergence of the third paradigm in HCI [10]. The first paradigm was that of the human factors engineer seeking to reduce errors; the second, that of the behavioral scientist attempting to optimize the system through a reduction in ambiguity. The third, as characterized by the authors, is that of the ethnographer, who is more interested in what goes on around the system than what happens at the point of interface. Holtzblatt specified
contextual inquiry as a means of capturing qualitative insights to understand and specify the
context of use [11]. Hashizume extended this work to present a human-centered application of
contextual inquiry through ethnographic interviews [12]. Similarly, Baskerville and Myers argue
for a new kind of design ethnography [13], derived from applied anthropology, which
synthesizes participant observation with participatory design so that ethnographic insights can
be used to generate design concepts for the point of study. In terms of teaching HCI to
engineers, Galindo highlights the importance of fine-tuning the HCI curriculum for graduate-level
engineering students when specific domains are considered (e.g., aeronautics and aerospace),
serving as a reminder that not all mainstream HCI practice is compatible with regulatory
authorities in terms of life-critical interactive systems [14].
Regarding human integration and the breaking down of the human-machine interface, Farooq
and Grudin discuss advanced technologies for human-machine communication which lead to a
codependent partnership, constructing meaning around each other’s activities and drawing
meaning from each other’s presence [15]. We find Schirner et al. seeking to move beyond
traditional interfaces toward electrophysiological signals [16]. Similarly, Lloyd et al. see a path to
move away from human-computer interaction toward a self-adaptive system utilizing brain-
computer interaction, suggesting that both humans and computers benefit from the identification
of the human's mental states to optimize autonomous and cooperative interactions [17].
Concerning self-adaptive systems, Huang et al. offer preliminary research on such systems
which can refine their behavior through “feeling” and interpreting user intention via neural input
to effectively put the human in the loop [18]. However, we also find research related to issues of
trust in automation. Both Stanton and Lee urge for the design of human-computer interfaces
with semi-autonomous and autonomous systems that aim to keep the user informed and
maintain situational awareness [19], [20]. Gil et al. present a conceptual framework for
characterizing cooperation between the human and cyber-physical systems to improve human
integration [21]. Our research seeks to better understand how HCI is being adopted by space
engineering.
Human-centered space tech
Prior work on human-centered space technology was first highlighted as an approach to cockpit design to promote decision-making through human-centered automation and supplant a technology-driven process [22]–[25]. Extending this work, Schutte describes a radically different approach to human-centered design that uses the human body as a metaphor in the design of the aircraft [26]. Schutte argues that it would take more than wings for humans to physically fly; they would also need, for example, internal navigation and legs capable of taking off and landing. While we have overcome these limitations through brute-force advances in technology thus far [26], Brute Force Autonomy (BFA) is identified as a limitation in our ability to develop autonomous systems intelligent enough to mitigate risk in supporting crew autonomy [27]. Evidence highlights factors including gaps in users’ mental models of how automated systems work, as well as weak feedback from the system, as strong contributors to automation surprises [28], [29].
Human-centered computing (HCC) is emphasized in NASA’s Intelligent Systems Program as a revolutionary approach to systems design intended to usher in an era of space exploration in which systems of humans and machines are optimized, freeing the human to engage in the kind of cognitive activities we excel at [30]. Ames Research Center and Johnson Space Center found that a human-centered computing strategy proved exceptionally beneficial for collaborating on intelligent systems for both manned and unmanned missions [31]. Zhang, by contrast, approaches human-centered computing by adapting the theory of distributed cognition in the design of an intelligent flight-surgeon console at NASA Johnson Space Center, and presents a framework for the human-centered design of distributed information systems such as electronic medical records (EMRs) [32]. Since then, NASA has demonstrated an increased interest in standardizing a human-centered process and implementing it into the conceptualization and design of next-generation space technology to support crewed deep space exploration. Yet, despite this enthusiasm for human-centered computing, little work has attempted to shed light on how these concepts are implemented in current space engineering projects.
Human Research Program
NASA’s Human Research Program’s Roadmap is a living web document that captures the
broad spectrum of research covered within the HRP. While much of this research is beyond the
scope of this study, some of this work is directly related. For example, according to Holden,
issues stemming from inadequate HCI present a risk to mission success that could lead to a
wide range of consequences. While there is an increase in the amount of information necessary
to display, an astronaut’s physical capabilities to view information, as well as their cognitive
abilities to process such information, remain limited [5]. Holden goes on to note that there is
scant user data available in the context of operational spaceflight and that many issues related
to HCI have likely been masked by constant contact with Mission Control. A recent
contextual inquiry into the role of the flight controller indicates that intelligent information
systems and autonomy-focused design are among the key capabilities required for future
missions in deep space [33]. Salazar offers human-centered design considerations for voice-
controlled spacecraft systems, where he stresses the importance of design process planning
being led by a Human Systems Integration (HSI) practitioner to ensure that HCD activities are
incorporated into all phases, including scheduling sufficient time for evaluations that lead to
further design iterations and optimizations [34].
Some space engineering teams focus specifically on human factors and HCI. For example, the
HCI Team at Ames, part of the Human-System Integration Division, is actively involved in
improving NASA design practices and contributes to improving interaction design to support
astronaut autonomy. Marquez presents a crew self-scheduling application called Playbook,
which was conceptualized and iterated over three years and tested in four analog missions at
NASA Extreme Environment Mission Operations (NEEMO) [35]. Findings indicated Playbook
could support crew self-scheduling. This is an important contribution to this line of inquiry
because of the cost of space exploration. Detailed schedules must be created and maintained
to ensure productivity and return on investment. Complicating matters is that scheduling
requires meticulous coordination and is currently performed by dozens of flight coordinators
working to ensure a myriad of activities involving specific pieces of equipment do not overlap
and/or have a sufficient buffer period between them. The Wearable Electronics and Applications Research (WEAR) Lab at JSC is a collaborative effort between human factors specialists and engineers to increase crew autonomy and bridge the gap between crew and spacecraft by enabling mutual monitoring capabilities [36]. Moses sought to provide wearables to monitor CO2, an important contribution in spaceflight because the absence of natural convection in zero gravity limits air circulation. This presents a risk to crew health that is not being properly monitored through fixed sensors, whose readings may not adequately represent CO2 levels in locations near the crew.
While such work served to inform and guide this study through advances in technology to
support astronaut autonomy, our work explores assumptions and mental models that space
engineers hold about the astronaut as a human user of technology, and how such assumptions,
when left unchallenged, negatively impact the user experience.
The analysis of this related work suggests that many challenges remain concerning the
development of autonomous systems and corresponding interfaces to effectively support
astronaut autonomy on crewed deep space missions. The findings presented in this thesis aim
to provide insights into human-centered approaches and how to accelerate their adoption.
RESEARCH METHOD
In this study, we sought to investigate how space engineers conceptualize the user experience
and use it to justify design decisions to support astronaut autonomy. To do this, we adapted the
design ethnography research method [13]. This involved the application of both participant
observation and participatory design over fifteen months of fieldwork divided among three
different project teams applying a human-centered approach to interface design as well as eight
semi-structured interviews with members of those teams and four semi-structured interviews
with subject matter experts in NASA. These subject matter experts (SMEs) include: an HSI researcher (SME-1); a space human factors researcher (SME-2); a model-based systems engineer (SME-3); and an astronaut (SME-4). As the related work outlined, design ethnography was selected as the ideal approach for this research because the method has been adapted and used for decades in HCI to identify user needs, particularly concerning the context of work [37]. In the following subsections, we describe the fieldwork timeline and the activities that took place. We then move on to an explanation of accessing and
engaging the contexts of immersion for data collection. Finally, we describe the triangulation of
multiple data sources through the application of grounded theory.
Ethnography
Ethnographic fieldwork involves a set of data collection strategies founded upon an approach
known as participant observation. Participant observation is a strategic method that “puts you where the action is and lets you collect data” [38]. Typically, time must be spent upfront engaging the field site and building rapport with the “locals”, essentially putting them at ease in your presence enough to open up about their experiences. While the time needed can vary
depending on the researcher’s level of experience with the culture being studied, it is this level
of engagement that makes fieldwork so potent in uncovering implications and opportunities in
human-computer interactions [37], [39], [40].
This study was conducted across three different contexts: NASA SUITS, NASA X-Hab, and a
NASA internship embedded on a model-based systems engineering team within an element of
the Human Research Program. Working with the project team on the NASA SUITS challenge
served as the first site of engagement and allowed the ethnographer to gain access to the
project team working on NASA X-Hab. Experience with these teams facilitated access to a
NASA internship where these experiences both inside and outside of NASA could be evaluated
against each other and synthesized into theoretical findings grounded in data.
The NASA Spacesuit User Interface Technologies for Students (SUITS) is a design challenge to engage students in the ideation and creation of spacesuit information displays utilizing augmented reality. The SUITS Challenge specifies the primary objective as the development of
a user interface in a head-mounted display to support astronauts with extravehicular activities
(EVA) during a lunar mission.
The eXploration Systems and Habitation (X-Hab) Academic Innovation Challenge is but one of
several platforms NASA uses to build strategic partnerships with universities. This challenge, to
address gaps in the research and develop space technologies to improve human spaceflight
capability, was introduced in 2011 as a means of investing in a future in which NASA sets its
sights on building a sustainable presence on the Moon, and then Mars. Participant observation
of this team served as the primary source of data collection. Interactions through meetings,
discussions, collaborative sessions, and co-design occurred daily through team message
boards.
NASA’s Human Research Program was developed in response to President George W. Bush’s
call for NASA to “gain a new foothold on the moon and to prepare for new journeys to the
worlds beyond our own.” NASA’s Human Research Program (HRP) was re-established at
Johnson Space Center to develop capabilities and technologies alongside the Astronaut
Program and pursue further risk reduction expanding from space medicine and the physiological
and behavioral effects of long-duration spaceflight. The mission of the HRP is to facilitate space
exploration beyond low Earth orbit (LEO) through the reduction of risks and the development of
solutions to mitigate the impact on human health and performance, define habitability standards,
and deliver advanced support technologies.
Access and data collection
Initial access to the SUITS team was negotiated through the program manager. The leadership
team subsequently agreed to support fieldwork involving participant observation. After this initial
investigation, we recruited members for semi-structured interviews. Fieldwork with the SUITS
team was conducted over 12 months and resulted in 62+ contact hours through subteam,
administrative, and mass meetings. Access to the X-Hab team was negotiated through one of
the project’s co-leads. The fieldwork was readily supported by team members. Similar data were
collected over 4 months, resulting in 30+ contact hours. Access to NASA HRP was negotiated
through a lead scientist in one of the elements as a University Space Research Association
(USRA) intern. Fieldwork was supported by the lead scientist to conduct participant observation,
followed by semi-structured interviews. Fieldnotes were primarily captured through jottings while
‘hanging around’ in a virtual meeting or reading conversations in Slack or Teams. Additional
observations were logged in a digital notebook; however, these observations are limited as a
result of information export controls. Semi-structured interviews were related to general insights
and perspectives on HCI in space technology and were conducted outside of working hours.
In addition to fieldwork, data were collected through semi-structured interviews with past and
present leadership on both the SUITS and X-Hab teams, as well as leading human systems
integration engineers in the HRP, a NASA systems engineer, and a current astronaut. It is
important to note that all interview participants from the SUITS team were also prior members of
the X-Hab team. Interviewees spoke about their experiences participating in technical projects
for NASA. They also discussed their motivation for exploring human-centered design in space
technology. Informed consent was given by all interview participants for sessions to be recorded
via Zoom. These recordings were then transcribed and used for analysis (n = 12, mean duration = 50 min, SD = 5.52 min).
Table 1. Interviews and roles of interviewees

Group      Role of interviewees                             No. of interviews
1: (S)     SUITS team members                               4
2: (X)     X-Hab team members                               4
3: (SME)   NASA astronaut autonomy subject matter experts   4
With conferences all virtual for the duration of this study, they became convenient and
advantageous sites for data collection. Throughout this study, we attended the Space Tech
Expo, Michigan Space Grant Consortium (MGSC) Fall Conference, and SEDS SpaceVision in
2020, as well as NASA’s Human Research Program (HRP) Investigator’s Workshop (IWS) in
February of 2021.
With this context, this study was oriented towards several questions related to better
understanding design and problem-solving processes as relevant to how engineers conceive of
the user experience. These questions influenced and guided systematic data collection as well
as drafting interview protocols (see Appendix A for a general interview guide).
What problems are NASA trying to solve with university challenges?
What value do university teams bring to NASA through these challenges?
How does background influence how engineers think of the user?
How do plans develop and change throughout the project?
How do systems engineers come to understand how to support astronauts in space?
How do engineers figure out how to design a good user experience?
Is there a certain process engineers follow when designing for humans?
Data analysis
This study’s ethnographic fieldwork was undertaken to explore and understand people’s experiences with the human-centered design of crew-facing interfaces and informational displays.
During fieldwork within the Human Research Program, the emphasis shifted to observing systems
engineering and taking part in the design of interfaces in support of foundational research being
conducted by scientists and clinicians planning future missions. Throughout fifteen months of
fieldwork, we analyzed both primary and secondary data. Primary data included field notes, interview
transcripts, and chat logs. Secondary data included presentation slides, meeting notes, reports, and
prototypes stored in team drive folders as archives from previous projects. First, we cleaned the
interview transcripts and prepared them for Keyword in Context, or KWIC, indexing. KWIC is a derivative of concordance, a relatively ancient technique commonly applied to sacred texts to produce an alphabetical list of all substantive words. KWIC was first proposed as an indexing system in 1856 by the librarian Andrea Crestadoro [41]. KWIC indexing preserves the keyword in the context of the sentence in which it is used. In grounded theory, in vivo coding is the most common approach and involves developing codes based on research subjects’ actual words. Creating a word list and applying the KWIC are conceptual extensions of in vivo coding [38]. To do this, we filtered transcripts to include only participant responses. These were then loaded into text analysis software as a corpus of texts. A word list was generated to condense inflected word forms into a single lemma type. In morphology and lexicography, the lemma is the base form of a word as entered in a dictionary [42]. For example, engineer/engineering/engineers were counted as a single word. This word list included 2245 unique lemma types. A total of 10 keywords were selected from the top 25 words used across participants. These 10 keywords were identified based on their direct relevance to the research questions (see Appendix B for keywords).
A KWIC list was then generated for each keyword. These lists include every occurrence of the
selected keyword within the context of the sentence in which it is used. The KWIC lists formed the basis of open coding. Some examples of concepts derived from this coding sequence include:
1. “It’s called the [System]s [Engineer]ing Vee.”
2. “We tried it out ourselves, but we didn’t do any formal [test]ing.”
3. “You have to convince everyone that your standard is better and [people] are reluctant to
throw something out completely.”
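As a rough illustration of this pipeline, the following minimal Python sketch condenses inflected word forms into lemma types for a word list and generates a KWIC list whose bracketed keywords mirror the coding examples above. The toy lemma map, sample corpus, and function names are illustrative assumptions; the actual analysis was performed with dedicated text-analysis software rather than this script.

```python
import re
from collections import Counter

# Toy lemma map standing in for full lemmatization: inflected forms are
# condensed into a single lemma type (e.g., engineer/engineering/engineers).
LEMMAS = {"engineering": "engineer", "engineers": "engineer"}

def tokenize(text):
    """Lowercase a text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def lemma(token):
    """Map an inflected token onto its base (dictionary) form."""
    return LEMMAS.get(token, token)

def word_list(corpus):
    """Count lemma types across a corpus of participant responses."""
    counts = Counter()
    for text in corpus:
        counts.update(lemma(tok) for tok in tokenize(text))
    return counts

def kwic(corpus, keyword, window=5):
    """List every occurrence of a keyword, preserved in sentence context."""
    lines = []
    for text in corpus:
        tokens = tokenize(text)
        for i, tok in enumerate(tokens):
            if lemma(tok) == keyword:
                left = " ".join(tokens[max(0, i - window):i])
                right = " ".join(tokens[i + 1:i + 1 + window])
                lines.append(f"{left} [{tok}] {right}".strip())
    return lines

corpus = ["It's called the Systems Engineering Vee.",
          "We tried it out ourselves, but we didn't do any formal testing."]
print(word_list(corpus).most_common(3))  # most frequent lemma types
print(kwic(corpus, "engineer"))  # ["it's called the systems [engineering] vee"]
```

The bracketed keyword in the KWIC output corresponds to the highlighting shown in the open-coding examples above, keeping each hit legible within its sentence context.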
These initial codes then contributed to an axial coding process, where concepts were annotated
within interview transcripts and fieldnotes to capture patterns among data identified in open coding.
Examples of axial codes include “engineers are receptive to human-centered design,” “safety is the minimum viable product (MVP),” and “human behavior is viewed as a black box.” Finally, we conducted selective coding to relate the various codes generated into a single narrative. These
three stages of coding reflect the application of grounded theory, where coding is an iterative
process. In between stages of coding, numerous memos were drafted, connecting pieces of data
and developing hypotheses to test through theoretical sampling, a core component of grounded
theory often missing from its application in the IS literature [43]. Given the generative focus of this
study, and ethnography in general, data analysis is ultimately tied to the overarching ethnographic
narrative. Through thick description, we illuminate participants' experiences in designing human-
centered digital interfaces for deep space exploration, and how they conceive of user needs among
system requirements. We frame our findings around “schools of thought” identified by informants
during an unstructured conversation captured in our fieldnotes.
AN ANALYSIS OF HUMAN-CENTEREDNESS IN SPACE TECHNOLOGY
Prior work has pointed out the different but complementary problem-solving approaches of
systems thinking and design thinking [44], [45]. While at NASA, we were immersed in systems
engineering projects striving to adopt a human-centered approach. We learned that systems
engineers think of human-centered design as part of the solution space rather than the problem
space. This was reinforced throughout our fieldwork in a variety of ways. For example, while
working with the SUITS team, we observed that the engineers brought in designers to do the
UI/UX work in creating a “slick” front-end for the system only after the features and functionality
were determined so that the design and the build took place concurrently. Throughout the
internship, we observed that many of the activities space engineers engage in are similar to
human-centered design activities, such as writing scenarios and use cases to aid in defining
system requirements. However, we found such scenarios lack the human-centered touch of
incorporating insights derived from user research to model user behavior and better inform such
requirements with user needs. Unstructured interviews during fieldwork with the X-Hab team
shed light on variations in attitudes among space engineers regarding human space exploration,
revealing that while sending humans to space is a good thing because it, in the words of X-2,
“justifies NASA’s budget,” humans in space are so risky and expensive that many space engineers may be more inclined to design and send robots instead.
In the following subsections, we present patterns in a space engineering culture that emerged
throughout our fieldwork. We begin by describing the culture that dominated our viewpoint from
within NASA and which seems to prevail within the broader space industry. We accomplish this
in part by deciphering a key symbol that appears to unite space engineers with the engineering
community as a whole, the Vee model. This finding suggests that established processes narrow
engineers’ focus to achieving technical excellence. Next, we consider groups both inside and
outside of NASA and describe their efforts to adopt and implement a human-centered design
approach, namely through human-systems integration. These groups are connected by the
common cause of disrupting the established order and view this as necessary to hasten the
adoption of human-centered design to meet NASA’s mission to sustain an autonomous human
presence beyond low Earth orbit. Finally, we consider challenges that emerge with respect to
integrating HCI and HCD activities into the Vee model. This finding presents a closer look at the
tensions between employing an agile, iterative process within the larger context of an
organization that leverages its history and traditions of a linear, waterfall process to resist change, negatively impacting the innovation that comes with it.
Culture of technology
Because most of the people studied throughout this fieldwork are trained as engineers, we were
able to identify some common practices that many appear to take for granted. One such
example that stood out as being particularly prominent among engineers in general, is the
application of the Vee model. The Vee is quite literally a V-shaped model comprising steps for flowing down the requirements and designing the system on the left side, then integrating the systems and verifying them on the right, culminating in the validation of the complete system at the top right. These requirements are then used to guide the software development in
building the system and realizing the vision of the stakeholders, all while staying on schedule
and within budget. As X-3 notes:
“You start at the beginning of the Vee, then you go down to feasibility. It’s a very linear process.
You just follow the Vee until it ends. Right, then downward, and then an upward kind of process.
And the theory is that it leads to the least amount of cost overruns and mistakes because you’ve
defined the system at such a level that you shouldn’t have very many issues.” ~ X-3
Here we see the significance of the V-shape as illustrating the process of breaking down high-
level human requirements into low-level system requirements and assembling these
components to build the system back up. The overarching concern of systems engineering is
ensuring the system that gets built matches the vision of the stakeholders. This creates a
technology-centered culture in which engineers are predominantly focused on technical
optimization or making the system work and extending the range at which it works. Consider
this quote from SME-2, a leading space human factors investigator at NASA, about the space
engineers they work with:
“They’re very focused on making the thing work. And that’s great because we need somebody
focused on making the thing work, but that’s where we play an important role to remind them
about this other thing.” ~ SME-2.
And compare it to this quote from X-1, an undergraduate aerospace student preparing to enter the industry:
“I have been trained as an aerospace engineer, and our primary goal, at least the way we’re
taught, is to try and maximize the range that something can operate. Essentially, you try to work
on increasing those ranges and you don’t necessarily care about day-to-day operation because
you need to design it and it has to work.” - X-1
Similarly, we find evidence of technology-centered culture constraining the ideation of potentially
even better solutions just outside of NASA, in one of what is known as the Big Four, referring to
Northrop Grumman, Boeing, Lockheed Martin, and Raytheon. Once requirements are defined,
everything that comes out of the project must be traced back to them. This means that if user
needs are not taken into consideration when writing requirements, they risk not being
considered at all. Here X-4 provides an example of how working in the private space industry as
an engineer building a system for NASA, is constrained to previously identified requirements:
“At work recently we were showing crew members through our CAD models and one of the things
that struck me was how restrictive it is in terms of requirements. Like everything has to do with
just meeting requirements, and sometimes there isn’t money to come up with a nicer, more
creative solution, so you’re very limited in what you can do.” ~ X-4
In the Vee, decomposition continues until various elements of the system cannot be broken
down any further. This is what is referred to as a “black box”. In systems engineering, black
boxes are those system elements in which the inner workings cannot be known. This concept
yielded an interesting admission from X-3 who, when asked what the X-Hab team’s earlier
efforts were to understand the user, said somewhat sarcastically:
“Not much. They were like a black box. Our earlier efforts were very top-level.” ~ X-3
Later, we followed up with X-3 to elaborate:
“By black box, I mean we just wouldn’t necessarily consider the user’s emotional state. We were
really just thinking about physical practicality. But a lot of how we think about the user is
dependent on whether or not the user is considered a value to the design. And most
space/aerospace stuff is trying to keep the user out of it, and then they have to adapt.” ~ X-3
Keeping the user out of it and expecting them to adapt to the system is another common pattern
of behavior we observed. CRASH is a non-crew-facing suite of tools that are designed to aid in
trade study simulations which in turn will inform the design of crew-facing systems. Throughout
our internship, we spent twenty-five percent of our time supporting human factors on CRASH.
During a presentation of usability test findings and recommendations from the senior human
factors engineers working on CRASH, the systems and software engineers alike made a habit
of blaming test participants for their inability to use the system. We observed one senior
software architect interrupting each finding by loudly blurting out, “Training!” Recommendations
to improve the interface were swiftly dismissed as being impossible or inappropriate given the
level of development the project had already undergone. After a recommendation to improve
contrast in support of legibility, another software engineer blamed the user’s computer color
settings and suggested we add a bullet point in the user manual indicating that they should
adjust these settings when using CRASH for optimal performance. Although this project was
touted during its recent System Requirements Review (SRR) as being human-centered and
emphasizing the use of two human factors researchers, in practice CRASH predominantly still
utilizes a technology-centered approach through the application of the Vee model and consults
with human factors researchers primarily to test and validate the system. To further demonstrate
the technical dominance of the engineers working on CRASH, during our internship, we
contributed to upfront user research through a card sorting study for the latest addition to the
CRASH suite of tools. This latest addition is billed as a collaboration hub. As it turns out, these
tools are too complicated for the intended users to use them. To solve this problem, the CRASH
team has opted for more, rather than fewer, service tickets. CRASH operators will be trained to
handle service requests, and users can anticipate an average response time of three to six
months.
Where upfront research is concerned in space engineering, such as when defining
requirements, stakeholders make up a broad category that technically includes any human who
has a hand in the system. This includes customers, managers, users, and anybody else who
may be considered a part of that system. Because of the technological centeredness of space
engineering culture, this can lead to ill-defined stakeholder groups in which engineers may
mistake or neglect user needs in favor of meeting customer requirements. In their concept of
operations, for example, when we first started research with the X-Hab team, the user of a voice
interaction management system for the Lunar Gateway was broadly defined as NASA itself.
Here we interpret this to mean that although such a system is intended for astronauts to interact
with the autonomous systems aboard the Lunar Gateway, NASA appears to be defined as the
user because the X-Hab team is delivering the system to NASA to use as it sees fit. However,
we also witnessed this during the internship, where, because human-centered resources were applied ad hoc, user research and usability tests were often used as an opportunity to gain further input and feedback from decision-makers rather than from representative users.
The Vee used by our element of the Human Research Program may provide a rationale for why
this is (see Appendix C for this Vee model). The Vee indicates that design is part of the solution
space rather than the problem space. With this approach, we see that human-centered design
does not begin until stakeholder requirements have been decomposed into system
requirements. If systems engineers think of the human user as a black box, as X-3 describes, in which only the physiological inputs and outputs can be known, then it would follow that user needs beyond basic health and safety needs would not begin to be considered until system requirements reach this level. This would explain why clinicians are consulted when gathering stakeholder requirements instead of conducting needs assessments with representative users.
Understanding this logic is important because, at NASA, systems engineering is described as a
logical way of thinking [46]. As X-2 describes it, many space engineers already understand what
an astronaut needs:
“It’s kind of funny because most of us are avid space fans, so when we talk about researching
what the astronauts need, we kind of already know.” ~ X-2
But X-4 had a different view from within the industry:
“The only way we think about user needs is from a personal perspective because they are usually
not defined unless you’re working specifically on HSI. So they’re not in my requirements, my
requirement is more like: The system needs this box. If the system needs the box, then somehow
the crew needs the box, and I have to figure out the best place to put the box. But what the crew
needs is not well defined.” ~ X-4
Given the technology-centeredness of systems engineering culture, when user needs are not
considered upfront, they will likely not be considered unless they surface as a design flaw during testing or as human error in operation. When it comes to teaming an astronaut with an
autonomous system, this can present additional risk to crew health and safety as well as
mission success. Choosing what tasks to automate and what tasks to burden the user with is an
important design decision. This quote from SME-2 captures the tension between technology-
centered and human-centered culture:
“I was in a meeting just a couple of hours ago asking questions and the answers I was getting
back was that this is just how the system works. And they were showing me diagrams and
architecture behind the scenes. I don’t care about any of that, I care about what the crew is going
to see and what they have to keep track of. Even after I convinced them of a couple of points
where it would be better for automation to keep track of this thing, they said they couldn’t change
it because they don’t have a requirement for it.” ~ SME-2
To better understand how to design such a system, systems engineers write scenarios and
create diagrams using something like the Systems Modeling Language, or SysML, to show how
human users might interact with a system. These scenarios are often impersonal, referring to
the user as “crewmember”, and placing emphasis on fleshing out how the technology functions
per requirements. While such scenarios go through several rounds of review before being
approved to incorporate into the concept of operations, they are not based on user data, but
rather reviewed and approved by clinicians. For example, while working with model-based
systems engineers to conceptualize a medical system for long-duration missions, we were
tasked with writing a “caregiver as patient” scenario. On long-duration spaceflight, it is a
requirement that a medical doctor will serve as a member of the crew. Here, systems engineers
are tasked with investigating and mitigating risk with the system. In this case, as we learned, a
risk that could negatively impact the success of the mission would be if the medical doctor
became injured or otherwise incapacitated. How might such a medical system effectively
support other crew members in providing sufficient medical care when the doctor is the patient?
Having nothing to go on in terms of user research and having just come from working with the
X-Hab team on a voice assistant for the Lunar Gateway, we proposed voice interaction utilizing
this system as well as a scenario in which the crewmember interacted with the system via the
touchscreen on a tablet. Use of the voice assistant was rejected in review, in favor of a more
sensible minimum viable product, i.e., the tablet interface. In this case, the voice interface, though a hands-free option that would benefit crewmembers trying to conduct just-in-time training while applying treatment, was seen as untested, fringe technology.
Here again, we see a focus on catering to the system instead of putting the human at the
center. SME-2 described a major technological challenge of extended missions beyond low
Earth orbit and NASA’s solution:
“We know these laptops and tablets going on Gateway are going to die after prolonged exposure
to radiation, but they’re cheaper, so our [NASA] plan is to send replacements, and then the
astronauts are just going to have to reboot.” ~ SME-2
When characterizing the collaborative efforts between systems engineers and clinicians to
design candidate medical systems, SME-3 described the emphasis being on the little details.
Mass and volume must be carefully calculated with variations in supplies, so trade studies must
be run to determine the optimal amount of each supply. In this regard, the emphasis is not on
the critical incidents that have a low probability of occurring, but rather on making sure crews
are sent into space with just enough medicine to prevent renal stone formation, enough
headache medicine to keep crews productive, and just enough band-aids to ensure the minor
cuts and scrapes crews are known to endure do not become something more serious. In human
space exploration, every additional day a crew can spend in space beyond low-Earth orbit extends the range at which the system as a whole can operate.
Toward Human-centeredness
There are, however, groups of people both inside and outside of NASA seeking to improve
human-centered maturity in space technology. And some of these efforts are having a positive
impact. Human systems integration (HSI) practitioners have adapted a human-centered
approach to the systems engineering process. NASA facilitates several programs as design
challenges, like SUITS and X-Hab, to generate a more diverse array of ideas. We observed
teams within NASA proudly going against the grain of the waterfall model in adopting a sprint-
driven iterative and agile approach typical of human-centered design processes. Younger space
engineers tend to display an eagerness to adopt the human-centered design trends and
processes that proliferate in the broader technology industry. And successes in emphasizing a
more human-centered approach in newer space companies like SpaceX and Blue Origin are
bringing in a different perspective, as noted by X-3 and SME-2:
“Where it’s gotten really interesting is that NASA is now working more closely with the commercial
crew providers like SpaceX, Boeing, and Northrop Grumman that are challenging the status quo
at NASA. Whereas we might be slow to change because we were successful in doing something
with the shuttle program or on the ISS, these companies are actively looking for better solutions.”
~ SME-2
“NASA hasn’t always been the most ergonomically friendly. They would stick an astronaut in a
can if it works. The same thing with the Russians, like the Soyuz, you are basically stuffing them
into this really cramped space. But now we’re seeing SpaceX and some other companies starting
to embrace this more user-centric approach.” ~ X-3
According to SME-2, NASA has long been a proponent of human-centered design, which is written into much of its documentation. They go on to admit that while there are still a lot of teams
that probably do not use human-centered design, there is a growing recognition of the positive
impact it has on the systems engineering process:
“A lot of our programs now take advantage of our human systems integration working group. A lot
of teams are actively involving an HSI role in their projects. Now we have all these forums and
other mechanisms to spread the word about human-centered design and other related topics.” ~
SME-2
At NASA, SME-1 describes their efforts to shift perceptions of human-centered design away
from just the solution space and into the problem space through user research:
“From the perspective of HCI, we also do research. When I started engaging more in this domain,
I was really pushing for more research in the area of HCI so that we can continue tackling the
more cutting-edge tasks and infuse human factors into future operations.” ~ SME-1
SME-1 goes on to discuss their experience involving the importance of a human-centered
approach:
“We find that if we do engage the end-users, and successfully convince our stakeholders that this
process is valuable, you end up with a better tool. I have seen time and time again that it is a
successful process. Designing and implementing software without that context, the resulting
software is brittle and requires continuous maintenance, which doesn’t even exist in continuous
development.” ~ SME-1
Every project we worked on within NASA was defined as operating within a human-centered
approach. Our mentor was one of the contributing authors to the Human Systems Integration
Handbook. The most readily identifiable HCD method we observed at NASA on these projects is the agile approach. During the SRR for CRASH, the use of
the agile process was emphasized as being human-centered and running counter to the more
traditional process at NASA, the waterfall. The space engineer presenting this approach
described NASA as “waterfall to a T.” Here again, we see the invocation of the Vee model
which, according to Boy, is an extension of the waterfall model of software development [47].
Projects like CRASH are paving the way forward towards a more human-centered NASA.
Some teams within NASA want to explore the feasibility of an idea or use of technology without
having to commit resources. To stimulate innovation, NASA’s solution has been to build
relationships with universities through Academic Innovation challenges like X-Hab and SUITS.
When we asked members of the SUITS and X-Hab teams what value they felt students brought to NASA through university challenges, the most common response was free labor.
According to these teams, NASA stakeholders benefit from quickly soliciting divergent ideas
without having to spend time and effort to generate them internally. X-3 describes the common
perception the X-Hab team shared regarding the purpose of the X-Hab challenge:
“What NASA does with the X-Hab challenge is they take university students and have them do a
lot of this preliminary research on basic feasibility studies. Like talking about our project this year,
is voice control on a space station something we would even want to pursue in the future?” ~ X-3
S-1 similarly describes SUITS as a means of soliciting a range of ideas:
“When we went to Houston to present our system at JSC, the researchers were pretty much just
taking notes on the different teams' presentations. It seemed like a way for them to rapidly
prototype without having to provide the resources for it.” ~ S-1
In recent years, particularly following the official announcement of the Artemis program in 2019
[48], the focus of these challenges has become increasingly human-centered. Both the SUITS
and the X-Hab teams displayed an interest in developing human-centered projects in their
written proposals. Both teams leveraged the expertise of a retired astronaut with experience
conducting missions on the International Space Station, as well as other industry contacts they
had made both through faculty advisors and through subsequent years of working with NASA on
technical projects. During the SUITS 2020 challenge, NASA provided teams the opportunity to
gain insights from subject matter experts through monthly web presentations. The SME would
give a short presentation on a related topic followed by a Q&A session which afforded all teams
the opportunity to ask questions and benefit from the responses provided to other team’s
questions. The SUITS team typically made an evening of it. Before social distancing as a public
health and safety measure, core members of the SUITS team would throw watch parties.
Presenters included both current and former astronauts, planetary scientists, and HSI
researchers. Notes were taken in a shared, collaborative document so those insights could be
collectively consumed, promoting alignment on new findings.
On the SUITS team, we used this data to create user personas by employing an approach
referred to as provisional, or ad-hoc personas [49], [50]. This involved synthesizing data
gathered from the web presentations with brief profiles of all the recently named Artemis
generation astronauts. During fieldwork with the X-Hab team for the 2021 challenge, our project
emphasized the use of design personas in a different way. The team had proposed to develop a
voice interaction management system to work with the NASA Platform for Autonomous Systems
(NPAS). This project was situated in the context of enabling crew members to interact with the
Lunar Gateway, a modern space station set to orbit the Moon and eventually serve as an
outpost and launchpad to send crews to Mars. As the team began conducting preliminary
research into what was needed to design such a system, we soon learned about the importance
of designing the personality of a voice interface. When designing interfaces that rely on voice
interactions, there is an added challenge of designing a system personality to support
interaction through establishing a relationship built on trust, so that the human and the system
can function together as a team [51]–[53]. We led the X-Hab team, composed primarily of space
engineers, through the process of creating a system persona to effectively support crew
interactions on Gateway. Through a series of brainstorming and dot voting activities aimed at
engaging the X-Hab team in human-centered design through the Double Diamond design model
(see Appendix D for the Double Diamond model), we exposed and challenged assumptions
team members had about astronaut users. One such assumption the X-Hab team collectively
found interesting was the consideration of psychological well-being. Discussing it with the
team, we learned that after spending the majority of 2020 in relative isolation, team members
expressed greater empathy for the isolation astronauts experience in space, which manifested
in the form of greater effort being applied to supporting that need with the system.
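To make the idea of a system persona concrete, here is a minimal sketch of how such a persona might be captured in code so that every voice prompt is phrased in one consistent character. The sketch is illustrative only: the persona name ("Aria"), its traits, and the example command are hypothetical and are not drawn from the X-Hab team's actual design.

from dataclasses import dataclass

@dataclass(frozen=True)
class SystemPersona:
    """A hypothetical spec for a voice system's personality."""
    name: str
    traits: tuple[str, ...]        # e.g., calm, concise, transparent
    confirmation_style: str        # how the system acknowledges commands
    error_style: str               # how the system admits uncertainty

# Hypothetical values for illustration; not the team's actual persona.
gateway_persona = SystemPersona(
    name="Aria",
    traits=("calm", "concise", "transparent about uncertainty"),
    confirmation_style="read back safety-critical commands before acting",
    error_style="state what is unknown and suggest the nearest safe action",
)

def confirm(persona: SystemPersona, command: str) -> str:
    # Routing every prompt through the persona keeps the system's voice
    # consistent, which supports the calibrated trust discussed above.
    return f"{persona.name}: Confirming command '{command}'. Proceed?"

print(confirm(gateway_persona, "repressurize airlock"))

Centralizing the persona in one place, rather than scattering phrasing decisions across dialogue code, is one way a team of engineers can keep a voice interface's character coherent as the system grows.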
Reflecting on this process, the X-Hab team expressed enthusiasm for the adoption of a human-
centered design approach:
“Honestly, what myself and everyone else, including our stakeholders at NASA, has been saying
is how lucky we have been to have an HCI expert on the team to encourage us to use human-
centered design. We would have come up with something, but I don’t think it would be anywhere
near as in-depth as what we’ve been able to accomplish using HCD.” ~ X-2
During the X-Hab team’s Preliminary Design Review (PDR), one such NASA stakeholder
reviewing our progress noted how different our human-centered approach, and the ideas that
came out of it, were from those of the other trusted-autonomy teams within the agency on which
they sit. They went on to note how refreshing it was to have a different approach and, referring
to the author, commented that they believed this reflected the diversity of the team. We know
this comment was about the author because everyone else on the team was in space/aerospace
engineering, except for one applied physicist. This led to our ultimate finding from fieldwork
with the X-Hab team: that space engineers may excel at incorporating human-centered design.
This finding is shared by the X-Hab team’s NASA stakeholders. While they did not explicitly use
the words human-centered, they did express an enthusiasm for having younger engineers
designing interfaces. This is because, according to them, younger engineers had grown up with
video games and other modern technologies, as confirmed by X-1:
“Last year, they would talk a lot about how they wanted a generation that had grown up with video
games and iPhones to design the interfaces for everyone.” ~ X-1
Challenges
Despite these efforts and successes in applying more human-centered approaches, challenges
remain, and much can be improved. While there is an acknowledgment that more modern
technologies and approaches to designing them are necessary for the future of human
spaceflight, past successes can serve to disincentivize in-house exploration of innovative
solutions:
“I think, at least in my experience, the reason that most people don’t accept it at NASA is that
they don’t believe there’s a benefit. A lot of times NASA has a perspective of like, if it ain’t broke,
don’t change it.” ~ SME-1
“We are constantly battling the mindset of this is how we’ve always done it. This is how Shuttle
did it, this is how ISS did it. Which can be a good thing in terms of safety in order to express a
high level of confidence. But sometimes we have to question whether or not we need to go that
route.” ~ SME-2
From our informants, we frequently heard NASA described as slow, not only in adopting new
processes but even in making use of its own resources. In October of 2020, NASA
stakeholders informed the X-Hab team they were getting software licenses to build their voice
assistant using G2:
“Just from past experience, we knew that NASA can be slow at responding for things. Knowing
that ahead of time, we kind of gave them a deadline of if we didn’t get the licenses by the SDR,
we were going to look for other options.” ~ X-2
Regarding the innovative ideas coming from university teams like X-Hab and SUITS, there is a
challenge in merging them back into NASA:
“I always felt that the work on the X-Hab challenge was more free than it is now that I’m a
professional in the industry. Like, now I can’t go explore something in VR because everything is a
monetary decision. So I think NASA appreciates the open-ended creativity, but I always felt like
after we were done with our projects and delivered our paper, they didn’t go anywhere.” ~ X-4
“In some ways, it’s a lot of value because NASA is a government agency and sometimes it’s very
hard to do innovative things. Engaging students allows us to be a little more open-ended and also
just bring in new ideas. And so from that perspective, there’s a lot of value. The reason why I say
it’s hard is that because those ideas are innovative, they are really hard to merge back into
NASA. Like, we very much want to grow, but we can be held back by bureaucracy and red tape.”
~ SME-1
During the NASA internship, we experienced firsthand how slow the organization can be
because of such bureaucracy. While our supervisor was elated to learn that we had received
our NASA laptop before the first day of work, something they noted had never happened before,
we spent the first month of the internship reminding the IT team that they needed to install a
specific piece of software before we could begin work. Another piece of software, a design tool
required for a key deliverable for which our supervisor had recruited us, was never installed
during the four-month internship. Strict limitations on software create a prohibitive
environment for design activities. Prototypes often must be hardcoded, slowing iterative
cadence and increasing effort for the development team. For example, when participating in
a usability test for CRASH, we had to take remote control of the facilitator’s computer to
interact with the prototype installed on it.
The CRASH usability test brings up another challenge to implementing a human-centered
approach, which is a tendency to recruit non-representative users. Recruiting other engineering
students outside of the project to test prototypes for the X-Hab and SUITS challenges may not
be ideal, but in terms of feasibility, this is often as close as such teams will get to representative
users. At the start of the internship, we knew part of our time was to be spent working on
CRASH, so when our supervisor offered us up as a candidate for usability testing, we assumed
this was for pilot testing with an added bonus of getting familiar with the system. However, our
data were included in the presentation of the findings discussed in the earlier subsection.
User researchers struggle with recruiting representative users at NASA for a variety of reasons.
As we previously mentioned, sometimes the actual user group is ill-defined, as in the case of
the model interfaces we designed. Also, given that NASA is a government agency conducting
internal tests, it was our experience that researchers are unable to incentivize participation in
user research. On top of this, because of NASA’s dominant engineering culture, quantitative
metrics and scientific rigor prevail. To use the CRASH usability test as a further example of this,
the test itself was two hours long, spread out across two different tools within the CRASH suite,
and accompanied by extensive questionnaires to measure cognitive load and compute a
System Usability Scale (SUS) score. This finding was reinforced by SME-1 who, in contrast to
our experience with CRASH, conducts smaller, more frequent rounds of usability testing:
“We also do usability testing at scale. I’ve seen some people do very large usability studies. And
they’re trying to do statistics on dozens of users. My team’s stance is that we just need a small-
scale iteration to get feedback about, so we get five people to use it and give feedback.” ~ SME-1
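As an aside for readers unfamiliar with the instrument, the SUS mentioned above is scored with simple arithmetic over ten five-point Likert items. The sketch below shows the standard scoring; the example responses are invented.

def sus_score(responses):
    """Compute a 0-100 System Usability Scale score from ten 1-5 Likert responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses, each between 1 and 5")
    raw = 0
    for i, r in enumerate(responses):
        # Odd-numbered items (index 0, 2, ...) are positively worded: score - 1.
        # Even-numbered items (index 1, 3, ...) are negatively worded: 5 - score.
        raw += (r - 1) if i % 2 == 0 else (5 - r)
    return raw * 2.5  # scale the 0-40 raw total to 0-100

# Example with one invented participant's responses:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0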
This lack of familiarity with current tools and practices in HCI seems to extend to
an out-of-date understanding of HCI theory. Almost as ubiquitous as the Vee model was Miller’s
1956 model of human information processing [54]. In our fieldwork, this model was first
mentioned by a member of the X-Hab team, and later during the CRASH presentation where
the senior human factors researcher referenced Miller’s model in justifying the human-centered
approach of the project. As another example, SME-2 describes a challenge in designing for
astronaut autonomy:
“Designing for autonomy means making things simple, not trying to build too much complexity.
Right now what the crew tends to like, and these developer groups want to do is put everything
you need on the display, and it gets very cluttered and hard to read. We need to think about
clever ways to have all the information available as needed while only showing them what’s
needed for the task at hand.” ~ SME-2
DISCUSSION
Although Olson unpacks the symbolic meaning of ‘X’ in the space industry as that of the
‘extreme’ [55], in our study we found it to be rather straightforward in symbolizing ‘exploration’.
This can be seen in SpaceX (Space Exploration Technologies Corp.), eXploration Habitat (X-
Hab), ExMC (Exploration Medical Capability), etc. However, while Olson made observations
similar to ours regarding a NASA culture predominantly steeped in the technoscientific, what
resonates with American culture, the humans who fund NASA, is the human exploration of
space as the final frontier. As former President George W. Bush unveiled his vision for space
exploration, he said, “This cause of exploration and discovery is not an option we choose; it is a
desire written in the human heart” [56]. Within the broader literature on human space
exploration, allusions to the human spirit and the desire to endure great risk for the sake of
exploration proliferate [57]–[59]. Humans decide to go to space and convince other
humans to join them in this effort. Entire nations of people have rallied around the idea that
humans in space are good. One of the subject matter experts with whom we worked closely
mentioned that they had joined NASA only within the past year, despite a long career in the
field. Upon receiving the offer, they spoke to their boss at the time who encouraged them to take
the position saying, “NASA is the reason we became systems engineers.” Space exploration is
inherently human-centered.
On the emerging role of HCI in space technology
Human-centered design and HCI opportunities abound in supporting astronaut autonomy, and
this remains the case across the spectrum of HCI paradigms [10]. The challenges posed by the
circumstances of crewed deep space exploration should resonate with many HCI scholars as an
opportunity to contribute to complex, interdisciplinary projects. Such challenges involve not
only shaping the direction of human-computer interfaces for space travel and exploration but
also helping to improve the human-centered work processes of the engineers conceptualizing
and building these systems. This latter challenge encompassed our internship within NASA,
which began in January 2021. As mentioned in our findings, during
the CRASH SRR presentation, a space engineer touted the team’s success in utilizing a
human-centered approach along with an agile process, highlighting the challenges in doing so
within NASA, which they described as being, “Waterfall to a T.” Guy Boy describes the Vee as
an extension of the waterfall process and notes its widespread use across the spectrum of
engineering disciplines [47]. Boy observes that most problems in the validation phase come
from a lack of human-centeredness or organizational-centeredness in the requirements. This is
the flaw of technology-centered engineering, where systems engineers conceptualize a system,
software engineers build the system, then human factors engineers reduce the probability or
rate of human error to acceptable limits. Boy’s model of human-systems integration reinforces
our findings that putting the user first not only reduces risk but is cheaper and ultimately results
in a better product that requires less training and maintenance (see Appendix E for Boy’s model
of Human-System Integration).
The parallels between systems engineering and the HCI process are evident. The Vee model
outlines how engineers gather requirements, design a system to meet them, build the system,
then validate the system. The design thinking process similarly outlines steps to empathize,
define, ideate, prototype, and test. Greene compared the historical context and evolution of
engineering systems thinking and design thinking as complementary approaches to systems
engineering and design [44]. The primary difference between these two approaches is that of
exploration. As our informants suggest, the Vee is a very logical, orderly, and linear process.
Oftentimes the design thinking process comes with the disclaimer that it is not a linear process.
When designers are allowed to explore the problem space instead of being constrained to the
solution space, human-centered design can thrive. We observed space engineers display skill in
adopting a human-centered design approach when personifying their system to better support
the user.
There are some limitations to this study. We conducted only twelve semi-structured interviews
across three field sites over the course of a year, and an extraordinary year at that, with the
global COVID-19 pandemic rendering all work remote. We would have liked to conduct
interviews with more people, as well as more follow-up interviews. However, given the difficulty
we experienced in recruiting for interviews, we felt that pushing for more would have put
unnecessary strain on the rapport we had built with members of these working groups. Instead,
we relied more on unstructured interviews and observation, learning by doing and creating
opportunities to gain relevant insights in the spur of the moment. We believe our findings are
sound with respect to systems engineering in the space industry, but no broad generalizations
should be drawn about systems engineering or engineering in general, especially outside the
space industry.
In terms of future work, we outline some areas in which we believe an emerging role for HCI in
supporting astronaut autonomy will be most prominent.
CSCW, and the future of work: In the future, many people will work in space. The
recent stretch of remote work has provided insight into the rudimentary kinds of human
problems we might expect to face as computer-supported cooperative work eventually
extends across the solar system. More exploratory research will be necessary to lay the
foundation, taking lessons learned from CSCW on Earth and improving upon them in
space.
Design systems for human-autonomy teaming: Consistency, standardization, and
reusability are key to promoting good usability and a positive user experience. In recent
years, design systems have become relatively ubiquitous in the technology industry.
Such design systems promote an agile cadence through user-validated design elements
accompanied by clean code snippets to speed development. Design systems further
help to reduce disagreements between designers and developers by serving as a
single source of truth, as sketched below.
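The following is a minimal sketch of the single-source-of-truth idea: interface code draws its colors, spacing, and type from one shared token table rather than hardcoding values. The token names and values here are hypothetical and are not taken from any NASA design system.

# Hypothetical design tokens serving as a shared vocabulary for
# designers and developers.
DESIGN_TOKENS = {
    "color.background": "#0B0E14",
    "color.alert": "#D93025",
    "color.text.primary": "#F2F3F5",
    "space.sm": 8,               # pixels
    "space.md": 16,
    "font.body": ("Inter", 14),  # family, point size
}

def token(name):
    """Look up a shared design token; failing loudly keeps designers and
    developers from silently diverging from the common vocabulary."""
    try:
        return DESIGN_TOKENS[name]
    except KeyError:
        raise KeyError(f"Unknown design token: {name!r}") from None

print(token("color.alert"))  # -> "#D93025"

Because every screen resolves its values through the same table, a validated change to a token propagates everywhere at once, which is what makes such systems useful for fast, consistent iteration.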
It is important to remember that careful design considerations must be made upfront. Once
humans adapt to a certain interaction pattern or technological device, it becomes more difficult
to introduce something drastically different, even if it offers a substantial improvement on the
surface. Another opportunity for future work would be to explore cultural transmission in isolated
pockets of human environments in space as national and international space habitats begin to
proliferate on and around the moon as well as Mars. This will be important in better
understanding how humans adapt to extreme environments in the socio-cultural context.
CONCLUSION
This thesis presented findings from a fifteen-month ethnographic study of how systems
engineers conceptualize the user experience and use it to justify design decisions about
supporting astronaut autonomy. Data collection revolved around participant observation,
unstructured and semi-structured interviews, and team data sharing. All these sources were
integrated for analysis via grounded theory and self-reflection.
Contributions of this work include (1) empirical research in technology-centered cultures for
human-centered design; (2) descriptions of efforts and impact in propelling human-centered
design forward within NASA; and (3) descriptions of challenges that remain. We believe this
research opens up a range of future work engaging with astronautics programs and internal
digital transformation efforts for the sake of supporting astronaut autonomy and promoting the
eventual expansion of humans throughout the solar system.
REFERENCES
[1] M. Smith et al., “The Artemis Program: An Overview of NASA’s Activities to Return Humans to the Moon,” in 2020 IEEE Aerospace Conference, 2020, pp. 1–10.
[2] B. Whitworth and A. Ahmad, The social design of technical systems: Building technologies for communities. The Interaction Design Foundation, 2014.
[3] N. Kanas, “Autonomy and the crew–ground interaction,” in Humans in space, Springer, 2015, pp. 99–107.
[4] D. J. Leach, T. D. Wall, S. G. Rogelberg, and P. R. Jackson, “Team autonomy, performance, and member job strain: Uncovering the teamwork KSA link,” Applied Psychology, vol. 54, no. 1, pp. 1–24, 2005.
[5] K. Holden, N. Ezer, and G. Vos, “Evidence report: Risk of inadequate human-computer interaction,” 2013.
[6] S. Hillenius, “Designing interfaces for astronaut autonomy in space,” 2015.
[7] S. Harris and B. Simpson, “Human Error and the International Space Station: Challenges and Triumphs in Science Operations,” in 14th International Conference on Space Operations, 2016, p. 2406.
[8] P. C. Schutte, “How to make the most of your human: design considerations for human–machine interactions,” Cognition, Technology & Work, vol. 19, no. 2, pp. 233–249, 2017.
[9] ISO, “Ergonomics of human-system interaction,” 2010.
[10] S. Harrison, D. Tatar, and P. Sengers, “The three paradigms of HCI,” in Alt. Chi. Session at the SIGCHI Conference on Human Factors in Computing Systems, San Jose, California, USA, 2007, pp. 1–18.
[11] H. Beyer and K. Holtzblatt, “Contextual design,” interactions, vol. 6, no. 1, pp. 32–42, 1999.
[12] A. Hashizume and M. Kurosu, “Understanding user experience and artifact development through qualitative investigation: ethnographic approach for human-centered design,” in International Conference on Human-Computer Interaction, 2013, pp. 68–76.
[13] R. L. Baskerville and M. D. Myers, “Design ethnography in information systems,” Information Systems Journal, vol. 25, no. 1, pp. 23–46, 2015.
[14] M. Galindo, C. Martinie, P. Palanque, M. Winckler, and P. Forbrig, “Tuning an HCI curriculum for master students to address interactive critical systems aspects,” in International Conference on Human-Computer Interaction, 2013, pp. 51–60.
[15] U. Farooq and J. Grudin, “Human-computer integration,” interactions, vol. 23, no. 6, pp. 26–32, 2016.
[16] G. Schirner, D. Erdogmus, K. Chowdhury, and T. Padir, “The future of human-in-the-loop cyber-physical systems,” Computer, vol. 46, no. 1, pp. 36–45, 2013.
[17] E. Lloyd, S. Huang, and E. Tognoli, “Improving human-in-the-loop adaptive systems using brain-computer interaction,” in 2017 IEEE/ACM 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 2017, pp. 163–174.
[18] S. Huang and P. Miranda, “Incorporating human intention into self-adaptive systems,” in 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, 2015, vol. 2, pp. 571–574.
[19] N. A. Stanton and M. S. Young, “Vehicle automation and driving performance,” Ergonomics, vol. 41, no. 7, pp. 1014–1028, 1998.
[20] J. D. Lee and K. A. See, “Trust in automation: Designing for appropriate reliance,” Human Factors, vol. 46, no. 1, pp. 50–80, 2004.
[21] M. Gil, M. Albert, J. Fons, and V. Pelechano, “Designing human-in-the-loop autonomous Cyber-Physical Systems,” International Journal of Human-Computer Studies, vol. 130, pp. 21–39, 2019.
[22] K. H. Abbott and P. C. Schutte, “Human-centered automation and AI: Ideas, insights, and issues from the Intelligent Cockpit Aids research effort,” 1989.
[23] C. Graeber and C. E. Billings, “Human-centered automation: Development of a philosophy,” 1990.
[24] C. E. Billings, Human-centered aviation automation: Principles and guidelines. National Aeronautics and Space Administration, Ames Research Center, 1996.
[25] T. C. Eskridge, D. Still, and R. R. Hoffman, “Principles for human-centered interaction design, Part 1: Performative systems,” IEEE Annals of the History of Computing, vol. 29, no. 4, pp. 88–94, 2014.
[26] P. C. Schutte, “Wings: A new paradigm in human-centered design,” 1997.
[27] F. Figueroa, L. Underwood, B. Hekman, and J. Morris, “Hierarchical Distributed Autonomy: Implementation Platform and Processes,” in 2020 IEEE Aerospace Conference, 2020, pp. 1–9.
[28] D. D. Woods, “Learning From Automation Surprises and ‘Going Sour,’” Cognitive engineering in the aviation domain, p. 327, 2000.
[29] N. B. Sarter, D. D. Woods, and C. E. Billings, “Automation surprises,” Handbook of Human Factors and Ergonomics, vol. 2, pp. 1926–1943, 1997.
[30] D. E. Cooke, “An overview of NASA’s intelligent systems program,” in 2001 IEEE Aerospace Conference Proceedings (Cat. No. 01TH8542), 2001, vol. 7, pp. 7-3664.
[31] E. Smith, “Intelligent systems technologies for ops,” in SpaceOps 2012, 2012, p. 1294820.
[32] J. Zhang, V. L. Patel, K. A. Johnson, and J. W. Smith, “Designing human-centered distributed information systems,” IEEE Intelligent Systems, vol. 17, no. 5, pp. 42–47, 2002.
[33] M. N. Russi-Vigoya et al., “Supporting Astronaut Autonomous Operations in Future Deep Space Missions,” in International Conference on Applied Human Factors and Ergonomics, 2020, pp. 500–506.
[34] G. Salazar, “Development Considerations for Implementing a Voice-Controlled Spacecraft System,” in 2019 IEEE International Symposium on Measurement and Control in Robotics (ISMCR), 2019, pp. B3-1-1–B3-1-20.
[35] J. J. Marquez, S. Hillenius, B. Kanefsky, J. Zheng, I. Deliz, and M. Reagan, “Increasing crew autonomy for long duration exploration missions: self-scheduling,” in 2017 IEEE Aerospace Conference, 2017, pp. 1–10.
[36] H. R. Moses, “Perspectives from the Wearable Electronics and Applications Research (WEAR) Lab, NASA Johnson Space Center,” 2017.
[37] J. Blomberg and H. Karasti, “Reflections on 25 years of ethnography in CSCW,” Computer Supported Cooperative Work (CSCW), vol. 22, no. 4–6, pp. 373–423, 2013.
[38] H. R. Bernard, Research methods in anthropology: Qualitative and quantitative approaches. Rowman & Littlefield, 2017.
[39] L. A. Suchman, Plans and situated actions: The problem of human-machine communication. Cambridge University Press, 1987.
[40] C. Wasson, “Ethnography in the field of design,” Human Organization, pp. 377–388, 2000.
[41] A. Crestadoro, The Art of Making Catalogues of Libraries: Or, a Method to Obtain in a Short Time a Most Perfect, Complete, and Satisfactory Printed Catalog of the British Museum Library. Literary, Scientific & Artistic Reference Office, 1856.
[42] M. A. K. Halliday and C. Yallop, Lexicology: a short introduction. A&C Black, 2007.
[43] C. Urquhart, H. Lehmann, and M. D. Myers, “Putting the ‘theory’ back into grounded theory: guidelines for grounded theory studies in information systems,” Information Systems Journal, vol. 20, no. 4, pp. 357–381, 2010.
[44] M. Greene, R. Gonzalez, P. Papalambros, and A.-M. McGowan, Design Thinking vs. Systems Thinking for Engineering Design: What’s the Difference? 2017.
[45] R. Buchanan, “Systems thinking and design thinking: The search for principles in the world we are making,” She Ji: The Journal of Design, Economics, and Innovation, vol. 5, no. 2, pp. 85–104, 2019.
[46] S. R. Hirshorn, L. D. Voss, and L. K. Bromley, “NASA Systems Engineering Handbook,” 2017.
[47] G. A. Boy, Human–systems integration: from virtual to tangible. CRC Press, 2020.
[48] K. Chang, “For Artemis Mission to Moon, NASA Seeks to Add Billions to Budget,” The New York Times, 2019.
[49] A. Cooper, “The Inmates are Running the Asylum,” in Software-Ergonomie ’99, Springer, 1999, pp. 17–17.
[50] D. Norman, “Ad hoc personas & empathetic focus,” in The persona lifecycle: Keeping people in mind during product design, pp. 154–157, 2006.
[51] R. Dasgupta, Voice User Interface Design. Springer, 2018.
[52] M. H. Cohen, J. P. Giangola, and J. Balogh, Voice User Interface Design. Addison-Wesley Professional, 2004.
[53] C. Pearl, Designing voice user interfaces: principles of conversational experiences. O’Reilly Media, Inc., 2016.
[54] G. A. Miller, “The magical number seven, plus or minus two: Some limits on our capacity for processing information,” Psychological Review, vol. 63, no. 2, p. 81, 1956.
[55] V. A. Olson, “American extreme: An ethnography of astronautical visions and ecologies,” 2010.
[56] G. S. Hubbard, “Humans and robots: Hand in grip,” Acta Astronautica, vol. 57, no. 2–8, pp. 649–660, 2005.
[57] S. O’Keefe, “The vision for space exploration,” National Aeronautics and Space Administration, 2004.
[58] B. R. Finney, “Anthropology and the Humanization of Space,” Acta Astronautica, vol. 15, no. 3, pp. 189–194, 1987.
[59] J. A. Dator, Social foundations of human space exploration. Springer Science & Business Media, 2012.
[60] A. Hanson et al., “A model-based systems engineering approach to exploration medical system development,” in 2019 IEEE Aerospace Conference, 2019, pp. 1–19.
[61] Design Council, “The ‘double diamond’ design process model,” 2005.
APPENDIX A - General Interview Protocol
Warm-up
1. Can you tell me a little about your academic/professional background?
a. What kind of skills do you gain in this kind of discipline?
i. So [x, y, z] are all hard skills, now can you take a moment to describe
some of the soft skills you’re developing through your academic program?
1. Can you tell me about a recent experience where these soft skills
were either put to the test, or on full display?
b. What career path are you planning to pursue?
i. Do you feel that you’re being adequately prepared for this career through
your academic program and internship experience?
ii. Any specific companies you are hoping to work for?
2. Can you tell me your story about how you first became interested in space?
a. (Follow-up) What motivated you to pursue projects/careers in human spaceflight
technologies?
3. What is your role in [TEAM]?
a. What are the responsibilities involved in this role?
Main Interview
Could you describe the first NASA X-Hab/SUITS Challenge that you participated in as a
member of [TEAM]?
How did you hear about this challenge?
What problem is NASA trying to solve with this challenge?
What value do university teams bring to NASA with the X-Hab challenge?
Can you tell me who you’ve worked with from NASA?
Who have you met?
Could you break down the process that [TEAM] follows to contribute to the X-Hab/SUITS
challenge?
Is this process taught in any of your program’s courses?
Who worked on what?
What were the most challenging components?
How did you manage these challenges?
Could you describe how you identified what was needed to make the
system useful to NASA?
Can you tell me more about how systems engineers assess risk?
Who helped you?
What did they do?
What are the components of the project that you were most drawn to?
Why?
Were these easier because of your interest level?
Who else was drawn to these aspects of the project?
Could you describe how you come to understand how to support astronauts in space?
How did the group figure out how to make a good UX?
What resources do you use?
Who do you talk to?
What is their level of understanding?
How do aerospace/space engineers think of user needs, preferences, and
comforts?
Is there a certain sort of process engineers follow when designing
for humans?
Can you tell me about the design evaluation process from the X-Hab challenge?
How do you make sure that the system you’re designing is effectively solving the
problem it’s trying to solve?
When did X-Hab/SUITS do the evaluation?
What is human-in-the-loop?
Is human-in-the-loop a requirement for NASA X-Hab/SUITS?
Besides HITL, what other kinds of evaluation methods are you familiar with in
terms of system interfaces for humans?
What value does HITL have to the overall project?
If HITL is tucked onto the end of the project after the system is fully functional,
what do you do if you find that the system failure rate is too high?
Can you tell me how you incorporated findings from the evaluation into
the next iteration?
Thank you for discussing the X-Hab/SUITS challenge with me. If I may, I’d like for us to
pivot now to talk about some other experiences you’ve had. Can you tell me about any
courses you’ve taken or projects you’ve worked on where you learned to consider the
human user’s perspective in the design of a system or software?
What kind of content was introduced?
Can you tell me what you remember of it?
Did you save any of these materials, in Drive or otherwise? And if so,
would you be willing to share these with me?
Wrap Up
I think I’ve asked all the questions I can think of right now. To recap some of the things you’ve
just told me - <summarize key takeaways from the interview for clarity>
Does that sound about right?
Did you notice anything that you would like to clarify?
Is there anything you would like to add that we didn’t cover?
Then I just want to thank you so much for talking with me. Learning about your perceptions and
experiences with [X-Hab/SUITS] will help me with my research.
If you think of anything else or have any questions, please don’t hesitate to get in touch with me.
I’d also like to ask if I can contact you again for a follow-up interview if I think of other questions
or need further clarification, is that okay?
APPENDIX B - Keywords for KWIC coding
KWIC Rank   Keyword        Total   AVG     SD
1           Space           180    18.00   12.34
2           System          182    18.20   10.56
3           Engineering     141    14.10   10.00
4           People          146    14.60    9.50
5           User             95     9.50    8.87
6           Design          124    12.40    9.35
7           Team            111    11.10    8.06
8           Astronaut        85     9.44    5.96
9           Test             86     9.56    4.93
10          Requirement      80     8.00    4.85
APPENDIX C - Model-Based Systems Engineering Vee
Systems Engineering “Vee” [60]
APPENDIX D - Double Diamond Design model
The Design Council’s Double Diamond model [61]
APPENDIX E - Boy’s model of Human Systems Integration
Human Systems Integration model [47]