
Conversational User Interfaces to support Astronauts in Extraterrestrial Habitats


Ana Rita Gonçalves Freitas
Alexander Schülke
Simon Glaser
Pitt Michelmann
Thanh Nguyen Chi
Lisa Schröder
Zahra Fadavi
Gaurav Talekar
Jette Ternieten
Akash Trivedi
Jana Wahls
Warda Masood
University of Bremen
Christiane Heinicke
Center of Applied Space Technology
and Microgravity (ZARM)
Johannes Schöning
University of St. Gallen
Figure 1: Artistic rendering of the MaMBA extraterrestrial habitat on the Moon.
ABSTRACT
Long-term space missions are challenging and demanding for astronauts. Confined spaces and long-duration sensory deprivation may cause psychological problems for the astronauts. In this paper, we envision how extraterrestrial habitats (e.g., a habitat on the Moon or Mars) can maintain the well-being of the crews by augmenting the astronauts. In particular, we report on the design, implementation, and evaluation of conversational user interfaces (CUIs) for extraterrestrial habitats. The goal of such CUIs is to support scientists during their daily and scientific routines on their missions within the extraterrestrial habitat and provide emotional support. During a week-long so-called analog mission with four scientists using a Wizard of Oz prototype, we derived design guidelines for such CUIs. Successively, based on the derived guidelines, we present the implementation and evaluation of two CUIs named CASSIOPEIA and PEGASUS.

CCS CONCEPTS
• Computer systems organization → Embedded systems; Redundancy; Robotics; • Networks → Network reliability.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
MUM 2021, December 5-8, 2021, Leuven, Belgium
©2021 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-8643-2/21/05. . . $15.00
KEYWORDS
Interactive Spaces, Conversational User Interfaces, Design Guidelines, Extraterrestrial Habitats, Mars and Moon Mission, Human Space Exploration
ACM Reference Format:
Ana Rita Gonçalves Freitas, Alexander Schülke, Simon Glaser, Pitt Michelmann, Thanh Nguyen Chi, Lisa Schröder, Zahra Fadavi, Gaurav Talekar, Jette Ternieten, Akash Trivedi, Jana Wahls, Warda Masood, Christiane Heinicke, and Johannes Schöning. 2021. Conversational User Interfaces to support Astronauts in Extraterrestrial Habitats. In 20th International Conference on Mobile and Ubiquitous Multimedia (MUM 2021), December 5-8, 2021, Leuven, Belgium. ACM, New York, NY, USA, 10 pages.
1 INTRODUCTION
The monotonous living conditions in space were identified as one of the main causes of disturbances in the mental well-being of crew members [ ]. Recently, researchers also found impairments of cognitive processes in participants who were immobilized and isolated [ ]. As a result, the subsequent isolation and frustration, as well as the lack of a familiar environment and sensory input on spaceships or outposts, could endanger the mission's success and the crew's safety. In space missions that last a couple of months, those challenges were often addressed by communication with mission control and psychologists [ ]. However, this is no longer possible on a 2-3 year-long mission to Mars due to message transmission times of at least 20 minutes one way.
Interactive technologies are one critical success factor for such challenging missions. From Ubicomp technologies in mission control centers [ ] to autonomous robots [ ], many different technologies are used today to support astronauts and scientists. The crew module of the recently tested Dragon 2 is equipped with a variety of interactive touch screens for the astronauts to control the rocket. Nevertheless, not only the spacecraft are becoming interactive spaces, but also the habitats that the astronauts will inhabit on their increasingly long missions that last for multiple years. Interactive technologies that augment the astronauts in maintaining mental well-being and enriching their experiences are therefore the basis for future space missions.
We believe that CUIs can play an important role in this space, as communication is crucial for space missions [ ]. In 2005, NASA explored a voice-enabled procedure browser for the International Space Station (ISS) [ ]. Today, astronauts on the ISS are being supported by CIMON, a CUI in space [ ]. Its Russian counterpart, the humanoid robot Fedor (short for Final Experimental Demonstration Object Research), was also designed to assist the astronauts on the ISS. Such CUIs are supposed to assist astronauts on their challenging long-duration missions, in particular, to support them in research-related tasks during space flight or planetary exploration.
This paper presents the design, implementation, and evaluation of two CUIs for extraterrestrial habitats. Extraterrestrial habitats offer additional challenges for the design of interactive technology within them. The first habitat prototypes are very complex, and astronauts will require support from mission control to fulfill their tasks and duties. Nevertheless, direct communication will not (always) be possible (e.g., due to the moon's shadow or the long distance to Mars). Therefore, we explore in this paper how CUIs can enrich the astronauts' experiences. To derive design guidelines for CUIs for extraterrestrial habitats, we performed a Wizard of Oz study with a CUI within a prototype of an extraterrestrial habitat called MaMBA (Moon and Mars Base Analog, see figure 1), designed to serve humankind as an outpost on the Moon or Mars.
The MaMBA [ ] project aims to build a first functional extraterrestrial habitat prototype. Following the argument of [ ] that such interfaces introduce a significant change in information access, we believe that it is essential to investigate how CUIs can complement interactive spaces in extreme conditions. Therefore, we explore which interfaces are suitable for extreme interactive spaces and illustrate this by developing two CUIs. With three explorative user studies, we found that CUIs are used more heavily in extraterrestrial habitats than in other environments such as living rooms or office environments. Besides answering mission-specific requests, CUIs were also used for leisure and fun activities. From these conclusions, we derive four design guidelines for CUIs in space. Based on the guidelines, we report on the implementation and evaluation, with a total of 22 users, of two novel CUI concepts named CASSIOPEIA and PEGASUS (see figure 4). Both prototypes were evaluated within the habitat. In the evaluation of each CUI prototype, we focus on a particular aspect of the CUI. In the CASSIOPEIA study, we show that an extension of the CUI with a GUI helped the scientists to complete their tasks more efficiently. Even so, we measured no significant differences in terms of usability compared to a CUI-only condition of CASSIOPEIA. In addition, the placement of such a CUI within the habitat is important. With the PEGASUS study, we show that a CUI can ease the work tension and promote the well-being of the crew members by creating an emotional layer in conversation.
We contribute to the exploration of how research in HCI and the fast-developing field of space exploration can stimulate each other, as we envision a significant impact of HCI research on the success of future space missions (we refer to the crew module of Dragon 2 as an example). We are doing so by developing two novel CUIs to assist the astronauts with their daily tasks and duties and enrich the UX in the space habitat. Following a user-centered approach with three user studies and the implementation of two novel CUI prototypes, we gained first insights into how to design CUIs for extraterrestrial space habitats. Examining the work presented in this paper through the lens of Oulasvirta et al. [ ], we believe that our paper provides solutions to the conceptual and constructive problems in the development of CUIs in extreme interactive spaces (e.g., an extraterrestrial habitat) and can inspire other researchers when developing CUIs for extreme environments. Oulasvirta et al. [ ] describe scientific progress when developing novel technologies (including related fields) as improvements in the ability to solve significant problems related to human use of computing. Extending Laudan's philosophy of science [ ], they defined HCI research as problem-solving, where the solution results in an improvement to problem-solving capacity. They describe that problems can be empirical, conceptual, or constructive, and the solution can be defined in terms of significance, effectiveness, efficiency, transfer, and confidence. We believe that novel technologies such as CUIs can play an essential role in mission success, as they can provide emotional support or better assist the scientists in following their often very complex routines during the missions. On top of these contributions, we expect our research to apply outside this "lead user market" to novel technologies in terrestrial
environments. Within MaMBA, we have the unique opportunity to research how human-centered methods perform in these extreme conditions.
The paper is structured as follows. First, after discussing related work, we describe a Wizard of Oz study and the derived guidelines. Based on that, we present the implementation of two novel CUI concepts named CASSIOPEIA and PEGASUS. This is followed by two evaluations of the systems, results, and discussion. Finally, the last section provides concluding remarks and an outlook on future work.

2 RELATED WORK
In this section, we discuss related work in the areas of mental health of astronauts and the use of CUIs in space. A good overview of recent advances in CUI research from various perspectives can be found in [13].
2.1 Potential Measures to Support Astronaut
Well-being during Long-term Missions
Close cooperation between astronauts and flight surgeons, who are qualified to identify symptoms of possible behavioral problems, is essential in space missions [ ] to maintain a good mental state for the astronauts [ ]. Moreover, the possibility of a quick return to Earth at any time in near-Earth missions, as well as the almost instantaneous communication between the astronauts and mission control, are two additional important positive factors [47]. Together with professional psychological care, the astronauts of the ISS are often provided with packages from family and friends to be emotionally supported during space missions [ ]. However, future space missions to Mars will require the astronauts to live in confined spaces for even longer periods of time (e.g., several years). In addition, communication with mission control will be limited due to time delays of up to around 20 minutes [ ] one way. Thus, the time between a message sent and the corresponding response is around 40 minutes. We expect such factors to decrease the social interaction between crew and mission control even more, which acts as a negative factor on the mental health of the astronauts. Constant mental pressure, especially when the astronauts are confronted with failure, could endanger the missions and put their success at risk. First experiments were already conducted to study these effects in so-called analog missions to find out how to prevent human dysfunction during space missions. Research into technology that compensates for the lack of communication has already been conducted during those tests of human isolation in extraterrestrial environments, which have all taken place on Earth so far [ ]. Technologies have already proven themselves to be of great benefit in analog missions when used purposefully. The Mars 500 project simulated a 520-day isolated Mars mission in a closed facility near Moscow [ ]. In this mission, the six astronauts tested a technological system called EARTH, with which the astronauts could immerse themselves fully in several different relaxing digital environments like nature scenes, landscapes, and parks. Another mission in an extreme environment where technology has proven to be beneficial is the HI-SEAS missions, which regularly take place in Hawaii. Six long-term missions have been conducted so far, although the last one had to be aborted after four
Figure 2: Wizard of Oz CUI, with which the four scientists in the extraterrestrial habitat prototype spent a week performing their experiments (left). Three of the four scientists in the laboratory module of the habitat during the Wizard of Oz study (right).
days because of an electric shock accident [ ]. During one mission, scientists deployed ANSIBLE, a virtual world that allowed the participants and their families to interact with each other [57].
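The one-way delays mentioned in this section follow directly from light travel time over the Earth-Mars distance. As a quick plausibility check (the distance figures below are approximate public values, not mission data):

```python
# One-way light-time between Earth and Mars for representative distances.
C_KM_PER_S = 299_792.458  # speed of light in km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Light travel time in minutes for a given distance in kilometers."""
    return distance_km / C_KM_PER_S / 60.0

for label, km in [("closest approach", 54.6e6),
                  ("average", 225e6),
                  ("farthest", 401e6)]:
    d = one_way_delay_minutes(km)
    print(f"{label}: {d:.1f} min one way, {2 * d:.1f} min round trip")
```

At the largest separation this yields roughly 22 minutes one way, i.e., a round trip of around 44 minutes, consistent with the figures cited above.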
2.2 Voice Assistants in Space
CUIs such as Google Assistant, Amazon Alexa, and Siri are some of the most ubiquitous artificially intelligent systems that have been incorporated into our everyday lives. Despite having such a long development history [ ], CUIs just recently became more and more ubiquitous [ ] with the advent of devices like Google Home, Amazon Echo, and Apple HomePod in our living rooms. Even though guidelines for CUIs were developed decades ago [ ], the interaction with these devices is still complicated [ ]; therefore, general design guidelines for human-computer interaction (HCI) need to be improved [ ]. Recent research in the Ubicomp domain has been focusing on increasing the learnability of CUIs [ ], on designing CUIs to help people overcome problems caused by such interfaces when they arise [ ], on studying the differences between CUIs and classical graphical user interfaces (GUIs) to obtain design patterns and optimise CUIs' effectiveness [ ], and on how to use CUIs to assist in lab work [ ]. Similarly, Murad et al. [ ] discuss a first taxonomy of design guidelines for hands-free speech interfaces. While the studies on understanding the ways people interact with CUIs in everyday scenarios are very recent [ ], there is only a small amount of research on how CUIs can be used in other (more extreme) environments. As previously stated, a notable example of a conversational robot in space is CIMON, which was installed in the ISS for 14 months. CIMON had been provided with a clearly defined character and was configured with three different personality traits: serious, assisting, and friendly, with the goal of being identified not only as an assistant [5].
3 WIZARD OF OZ STUDY
To identify the requirements of a CUI for an extraterrestrial habitat, a Wizard of Oz [ ] experiment was conducted within one week in June 2019 inside the laboratory module of a MaMBA space habitat (see figure 2). The habitat is developed at the Center of Applied Space Technology and Microgravity (ZARM) in Bremen, Germany and comprises six connected but independent modules. In its final state, the habitat is meant to test crucial technologies such as life support, power systems, and remote communication [ ]. A
mock-up of the first module — the laboratory module — was used as the testbed for the study. The laboratory module is considered to be the center of the scientific habitat and is designed for the simultaneous work of up to four scientists. Its floor layout is a circle with a diameter of 4.40 m, which is divided into 18 segments. At the moment, the module is stand-alone, but in its final design, it will feature three connections to the remaining modules of the habitat. There are three distinct working areas and one additional glove box (see figure 2). Typical lab work involves analyses of geological and biological samples and the preparation and conduction of experiments under conditions found in a lunar laboratory. The laboratory provides us with an ideal testbed, as it allowed us to test CUIs with scientists in an extraterrestrial habitat prototype. During our Wizard of Oz study, a crew of four scientists performed biological, geological, and materials science experiments in the module over one week. The affordances of such CUIs in extraterrestrial habitats differ from those in a space station orbiting the Earth (e.g., the ISS). The Wizard of Oz prototype of the CUI consisted of a Raspberry Pi running RaspbianOS with speakers and a microphone (see figure 2), placed in a box beside the main door, 1-3 meters away from the main working areas, with a 3D-printed figurine on top. The data from the microphone is transmitted to the wizard interface running on a laptop outside the habitat. The wizard can respond to the requests of the scientists either by using predefined templates offered in the interface or, in case there is no template available, by inserting messages manually that are converted to speech through the SVOX Pico text-to-speech library.
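The wizard-side logic described above can be sketched as follows; the template texts and function names are illustrative, not the prototype's actual implementation, and the SVOX Pico invocation is only assembled rather than executed:

```python
# Sketch of the wizard-side response logic: the wizard either picks a
# predefined template or types a free-text reply, which the habitat-side
# Raspberry Pi then synthesizes with SVOX Pico.

TEMPLATES = {
    "greeting": "Hello, how can I help you?",
    "ack": "Understood, I will take care of that.",
    "clarify": "Could you please repeat the question?",
}

def wizard_reply(template_id=None, free_text=None):
    """Return the text to synthesize: a template if one matches, else free text."""
    if template_id in TEMPLATES:
        return TEMPLATES[template_id]
    if free_text:
        return free_text
    return TEMPLATES["clarify"]

def pico_command(text, wav_path="/tmp/reply.wav"):
    # On the Pi, SVOX Pico is typically invoked via the pico2wave CLI;
    # here we only build the command line instead of shelling out.
    return ["pico2wave", "-w", wav_path, text]
```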
3.1 Participants
Four scientists (2 females, 2 males, between 28 and 43 years old, 34
years old on average) were selected to prepare experiment protocols
before the test run and to perform those experiments during the
test run. All scientists are researchers with signicant experience
in laboratory work on space-related topics; three of them have a
PhD in their respective eld and the other one is an advanced PhD
3.2 Procedure
The test run was divided into nine sessions, each typically about 2.5 h long (with some variation: the shortest was just slightly over an hour). All participants were briefed in the use of the habitat laboratory and safety protocols on the day before the first session. On the final day, after the last session, the scientists filled in questionnaires and were interviewed by one of the authors of this study. The scientists answered questions related to the usability and usage of the CUI as well as a larger catalog of mission-specific questions. We logged all interactions (activations of the CUI as well as transcripts of the dialogues) with the Wizard of Oz prototype of the CUI, and the sessions were recorded on video using two video cameras installed in the habitat. Using an iterative coding process, two independent coders transcribed and categorized all conversations into a schema described later in the results section. We defined an activation as a request initiated by either the scientists or the CUI, while a conversation is an activation which has at least one follow-up interaction. Unlike other commercial CUIs, our Wizard of Oz prototype could also initiate conversations; nevertheless, 50% of the activations were
Figure 3: Categories of conversations with the Wizard of Oz
prototype over the study period.
initiated by users, showing that the Wizard of Oz prototype was also proactively used by all the scientists.
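The coding rule for activations and conversations can be expressed compactly; the log excerpt below is hypothetical and serves only to illustrate the definitions:

```python
# Sketch of the activation/conversation coding rule: an activation is a
# request initiated by the scientists or the CUI, and a conversation is an
# activation with at least one follow-up interaction.
from dataclasses import dataclass

@dataclass
class Activation:
    initiator: str  # "user" or "cui"
    turns: int      # number of interactions, including the initial request

def is_conversation(activation):
    return activation.turns >= 2

# Hypothetical log excerpt, not the study data:
log = [Activation("user", 1), Activation("user", 4), Activation("cui", 2)]
conversations = [a for a in log if is_conversation(a)]
user_initiated_share = sum(a.initiator == "user" for a in log) / len(log)
```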
3.3 Results
The data collected from the nine sessions stretches over roughly 20 hours. Within the nine sessions, we observed a total of 84 activations, out of which 75 were considered conversations. The duration of the conversations varied largely. While the shortest conversation lasted just about 5 seconds (consisting of a simple command and its execution), the longest one took 18 minutes (troubleshooting the VPN access). The median conversation duration was approximately 50 seconds. The conversations were grouped into the following categories:
(1) Mission-specific questions:
    - Details about the experiment protocols (conversations related to the scientific experiments the scientists performed)
    - Equipment utilized in each experiment (e.g., requesting the status or detailed information of a specific piece of equipment in the habitat)
    - Chemical reagents required for each experiment (e.g., requesting information regarding their composition or other properties)
    - Troubleshooting requests (e.g., rebooting a system or computer)
(2) General & logistical questions:
    - General (conversations that contain a question that the CUI could solve by inquiring the internet or its knowledge base)
    - Logistical (logistical questions regarding the Wizard of Oz study)
    - Miscellaneous chats (conversations with the CUI with no clear goal or task to complete, e.g., playing music)
    - Reminders (conversations triggered by the CUI with alerts to the scientists)
(3) Configuration requests:
    - Configuration of settings of the extraterrestrial habitat (e.g., switching on or dimming the lights or controlling sensors in the habitat)
    - Adjustment of settings of the CUI (e.g., changing the volume of the speakers)
Figure 3 shows the distribution of the conversations for each category. While mission-specific questions cover a major part of the total requests (accounting for 52% of the total conversational duration or 34% of the total number of activations), non-mission-related requests, e.g., playing music or simply chatting with the CUI, were also actively made. The troubleshooting conversations took the longest (one was the eighteen-minute conversation mentioned above), whereas reminders and configuration requests were rather short (around 10 to 15 seconds). The qualitative feedback from the interviews with all four scientists was that the CUI was fun to use and very helpful (“all my questions were answered”). The crew members also stressed that the CUI was very helpful when working with gloves. They unanimously agreed that CUIs would be useful in a “real” mission, as they could be operated hands-free while performing the experiments in the extraterrestrial habitat. The participants also proposed to have multiple different CUIs in the space habitat, e.g., a personal companion or sidekick and a general main intelligent CUI to support the whole crew and mission.
4 DESIGN GUIDELINES
Based on the study and data presented in the previous section, we derived the following four main design guidelines for CUIs in extraterrestrial habitats, following the approach described in [42]:
4.1 Mission-Specific & Non-Specific Requests
During the study, one of the very first activations by one of the scientists was a request to play some music for entertainment while the scientists were setting up their experiments. Many other non-specific requests were made by the astronauts during the study period. Thus, we believe it is important to design not only for mission-specific requests but also for non-mission-critical tasks.
4.2 Active Use
The CUI was used very actively during our study by all four scientists. The study lasted only a week; however, we did not witness any drop in usage over that period. Therefore, when designing CUIs for extraterrestrial habitats, it should be noted that such interfaces are used more often compared to similar interfaces in other contexts. In addition, it is important to consider that the conversations also lasted longer compared to CUIs in other contexts [ ]. While this could be an artifact of the study design, we believe that the next generations of CUIs should enable more sophisticated conversations, not only in extraterrestrial habitats but also in all other contexts.
4.3 Offering Multiple Modalities & Two-Way Communication
It is useful to have means of interaction that allow continuing the conversation using modalities other than speech. For example, during the eighteen-minute-long conversation, it would have been helpful to show some data on a screen attached to the CUI or send the scientist further information, e.g., via mission control. Schaffer [ ] also recently argued that this makes CUIs more robust. Moreover, compared to other usage contexts of CUIs, it also proved useful as a way for mission control to ask the astronauts to perform some tasks in the extraterrestrial habitat (e.g., restarting a computer that mission control had lost connection to).
Figure 4: The CASSIOPEIA (left) and PEGASUS (right) prototypes.
4.4 CUI As a Team Member
Lastly, it is important to consider that the scientists quickly adopted the CUI as a team member. The CUI can play a critical role in supporting astronauts on their often long and challenging missions [ ]. It should also be mentioned that the scientists largely interacted with the CUI as they would with another person, most notably acknowledging receipt of a message by thanking it.
5 CASSIOPEIA AND PEGASUS
Based on the guidelines, we designed and implemented the two CUIs named CASSIOPEIA and PEGASUS for use in an extraterrestrial habitat, as also proposed by the scientists from the Wizard of Oz study. Both were designed to cover all four design guidelines, but CASSIOPEIA was designed to support running experiments (focusing on the third guideline), while PEGASUS was designed more as a personal companion that helps each astronaut throughout the day (focusing on the fourth guideline), as recommended in the study as well.
5.1 CASSIOPEIA
Unlike most current commercial systems, which offer features that handle all of the non-critical mission requests, the proposed system — CASSIOPEIA — focuses on providing mission-specific assistance during the execution of scientific experiments in an extraterrestrial habitat, similar to [ ]. Routinely throughout a mission, scientists are required to strictly follow predefined experiment protocols that typically include a short description of the experiment, a detailed list of the required reagents and equipment, and a step-by-step explanation of how to fulfill the experiment. These mandatory protocols may demand the registration of a report at the end of each scientific experiment by the scientist(s) involved, and optionally, the report can be sent to mission control or saved locally on the system once finalized. Besides acting as an experiment assistant, CASSIOPEIA also plays the role of a central system that stores all public information of the habitat, i.e., the crew member list, experiment descriptions, reagents, equipment, and instructions. This data is accessible and controllable by mission control; therefore, CASSIOPEIA also provides a communication channel between mission control and the habitat. The proposed CUI is able to handle all experiment-related requests coming in a variety of durations
Figure 5: System architecture with I/O channels and core elements of CASSIOPEIA (bottom) and PEGASUS (top).
and complexities. In order to enable that, CASSIOPEIA utilizes a state machine to keep track of the ongoing process of selecting and following the step-by-step description of a procedure in order to ensure that the user stays on the correct path. Based on the current state, the system is able to predict and adapt its guidance to the crew members' intentions so as to minimize the crew members' confusion and frustration when interpretation problems arise. As mentioned in the previously presented design guidelines, incorporating multiple modalities in CUIs can enhance the quality and experience of the human-computer interaction [ ]. Therefore, CASSIOPEIA was developed to allow interaction with a GUI. The GUI provides clear suggestions of the available commands that are accessible in each stage of the interaction through subtle visual cues, which helps to guide the crew members through the process as well as to familiarize them with the eligible operation syntax and options. As suggested by Zamora [ ], voice interaction shows the most potential in cases where the user's hands are busy and therefore cannot interact with a GUI, which is the case when executing experiment-related activities. On the other hand, typing is considered to be more appropriate for tasks that require a higher level of detail; hence, the system should offer the possibility to switch between voice-only and voice-and-visual modes to accommodate specific situations.
5.2 PEGASUS
In contrast to the central system CASSIOPEIA, PEGASUS (Personal Guiding Sidekick) is an interactive, helpful, and emotional support companion. To a certain degree, it is an attentive system, remembering what it was told. Thus it is able to refer to the personal information and interests of the users to make the dialogue more personal. PEGASUS is embodied by an Apple Watch (5th edition) and always displays one out of 16 different faces to support its responses and statements towards the user. The smartwatch is mounted onto a 3D figurine. The smartwatch shows different emotional facial expressions during conversations, such as happy, sad, laughing, and surprised [ ]. Physical and visual cues make the user more emotionally expressive, the figurine is perceived as friendlier by the user, and this benefits the crew-member personality of the PEGASUS system [11], [14].
5.3 Implementation
An overview of the architecture of CASSIOPEIA and PEGASUS can be seen in figure 5. Both use the same underlying technical system and can share building blocks. For the experiments, CASSIOPEIA and PEGASUS were designed to work independently from each other. CASSIOPEIA runs on a Raspberry Pi using one input channel (audio through the built-in microphones of the Raspberry Pi) and two output channels (audio through the built-in speaker of the Raspberry Pi and visual through a monitor connected to the Raspberry Pi). An overview of the system's architecture can be seen in figure 5, and a photo of the CASSIOPEIA system can be seen in figure 4. The audio output is created using an on-device text-to-speech library called Hermes, while the visual output is offered simultaneously through the GUI displayed on a connected monitor. We strive for a seamless hybridization between the two output channels so as to minimize the latency between the progress indicator on the GUI and the narrative guidance. CASSIOPEIA processes audio inputs using the natural language processing engine SNIPS. The platform allowed us to create custom intents to train a speech model by submitting specific example sentences. SNIPS was discontinued on 31st January, 2020. However, since all intents and the speech model were created before that date, it is still possible to use CASSIOPEIA with any Raspberry Pi. Using the trained model, the system is able to detect intents from voice commands, and once an intent is recognized, CASSIOPEIA runs a corresponding applet (designated as action code in figure 5). Each intent's action code is executed to perform a certain task. The connection to the necessary services (e.g., databases, GUI synchronizer) is initiated from the action code when required.
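The intent-to-action-code dispatch described above can be sketched as a simple lookup; the intent names and handler bodies are illustrative assumptions, and the real system receives recognized intents from the SNIPS engine rather than from a plain function call:

```python
# Sketch of the intent -> action-code dispatch (intent names illustrative).
def action_list_equipment(slots):
    return f"Equipment for {slots.get('experiment', 'the experiment')}: ..."

def action_next_step(slots):
    return "Proceeding to the next protocol step."

ACTION_CODES = {
    "ListEquipment": action_list_equipment,
    "NextStep": action_next_step,
}

def on_intent(intent_name, slots):
    """Run the action code registered for a recognized intent."""
    handler = ACTION_CODES.get(intent_name)
    if handler is None:
        return "Sorry, I did not understand that."
    return handler(slots)
```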
Identical to CASSIOPEIA, PEGASUS makes use of the SNIPS framework to recognize natural language and user intents. The action codes associated with the different intents are executed and send text and face instructions to the Apple Watch via a web server. The user interacts with the CUI by saying the system's wake-word, followed by a command; the audio is recorded via a microphone. As soon as the user's intent is recognized, the response is played by the smartwatch and the face is updated. As long as the dialogue expects another response from the user, the watch sends a signal after finishing its speech instructions in order to listen again.
6 User Study of CASSIOPEIA
The goal of this study was to investigate the effects of multi-modal interaction with CASSIOPEIA through an additional GUI. We investigated how it can help to guide scientists through experimental procedures.
6.1 Participants
To examine the interaction between the user and CASSIOPEIA,
we conducted a between-subject user study with 12 participants (2 female, 9 male, and 1 non-binary; between 21 and 32 years of age) inside the laboratory module of the habitat in February 2020. All of them were familiar with commercial voice assistants such as Google Assistant or Amazon Alexa, yet only approximately 25% of the participants reported that they had much experience with such systems.
Conversational User Interfaces to support Astronauts in Extraterrestrial Habitats MUM 2021, December 5-8, 2021, Leuven, Belgium
6.2 Procedure
The participants were randomly assigned to two groups to perform the same laboratory experiment procedures with one of the following setups: the participants used the CASSIOPEIA system either with (CUI+GUI) or without (CUI-ONLY) an additional GUI; the amount of information was consistent across both conditions. For both groups, the system was positioned in a central location within the laboratory module that could easily be reached from any part of the module. The GUI was placed on a swivel mount next to it. Furthermore, the participants were not informed about the experiment procedure, equipment, and material in advance. They were instructed on which scientific experiment to select only right before starting the interaction with our system. At this moment, they started the interaction by waking up the system with the command "Hello", which gave them a short introduction to what the system can do and some of the possible commands they could use to advance in the process. After completing the task, the participants were asked to fill out a questionnaire and take part in a follow-up interview.
To measure the usability of our assistant, we used the System Usability Scale (SUS) [ ] and the Post-Study System Usability Questionnaire (PSSUQ) [ ]. In addition, the PSSUQ can be divided into sub-scales that evaluate system usefulness, information quality, and interface quality. To further analyze user expectations and behaviors, we carried out a follow-up interview after each session.
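SUS scoring follows Brooke's standard formula: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 to a 0-100 range. A minimal sketch (the response patterns below are illustrative, not study data):

```python
# Standard SUS scoring (Brooke, 1996): odd items score (r - 1), even items
# score (5 - r), and the total is multiplied by 2.5 to reach 0-100.

def sus_score(responses):
    """responses: ten 1-5 Likert ratings for SUS items 1..10."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# All-neutral answers land at 50; an ideal response pattern scores 100.
print(sus_score([3] * 10))    # -> 50.0
print(sus_score([5, 1] * 5))  # -> 100.0
```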
6.3 Results
As one participant was not able to complete the experiment, we only report on the feedback of the 11 successful participants. The total time spent on each experiment differs significantly between the CUI-ONLY and the CUI+GUI condition (p=0.028) using paired samples t-tests. The experiments were accomplished faster in the CUI+GUI condition (SD=92 seconds) in comparison to the CUI-ONLY condition (M=863, SD=270 seconds). The CUI-ONLY group showed a much higher divergence than the CUI+GUI group, which suggests that the influence of the CUI-ONLY condition on users' adaptability and performance was lower than that of the CUI+GUI condition. Users of the CUI-ONLY condition had to call the assistant significantly more often than those of the CUI+GUI condition (p=0.02). Users of the CUI-ONLY group asked the assistant to repeat 3.2 times on average, while users of the CUI+GUI condition only asked for one repetition.
The CUI-ONLY condition has a slightly higher SUS score (M=74.6, SD=18.7) than the CUI+GUI condition (M=73.5, SD=15.0). Both scores signify good usability [ ]. A paired samples t-test showed no significant difference between the SUS scores of the two conditions (p=0.852). The average PSSUQ score for the CUI-ONLY condition is 5.1 (SD=1.5) and for the CUI+GUI condition 5.4 (SD=1.2). Again, testing revealed no significant difference between the average PSSUQ scores of the two conditions (t(4)=-0.345, p=0.748). Both scores indicate that the system's usefulness is slightly better than neutral. The evaluation of the PSSUQ sub-scales yielded no significant difference between the two conditions and no significant deviation from the calculated overall PSSUQ score.
During the experiment, most of the participants were able to adapt to the system and thus found it increasingly easy to interact with over time. Overall, only 2 participants stated that they were overwhelmed by the amount of information given at once and would have preferred smaller steps. In the interviews, more than half of the CUI-ONLY participants stated that the available commands were intuitive and clear, and 7 out of 11 participants found the system easy to use or becoming easier to use over time. 3 out of 6 participants who worked with the CUI+GUI prototype remarked that the GUI should show images or figures to make certain steps of the experiment clearer or to support a participant who does not know a certain word. As described above, we kept the information in both conditions the same to ensure comparability. It is interesting to note that the placement of the CUI was important for the participants: 7 out of 11 participants would have preferred the system to be in front of them rather than next to them. As the lab space in the extraterrestrial habitat is very restrictive, participants had more freedom within the CUI-ONLY condition than in the CUI+GUI condition. It is also noteworthy that the participant with the lowest voice assistant experience in the CUI-ONLY group had the most trouble interacting with CASSIOPEIA, while the participant with the lowest voice assistant experience in the CUI+GUI group had a very good experience. This indicates that inexperienced users could profit from having a GUI combined with a voice assistant.
7 User Study of PEGASUS
Within the second study, our goal was to investigate whether a characterized PEGASUS prototype can help to establish an emotional connection between the CUI and the user. We performed a user study to explore the effect of a characterized version of PEGASUS (CH) compared with a non-characterized version of PEGASUS (NCH) on the users' mood and composure during an experiment. While the responses of the characterized CUI aim to simulate the behavior of a crew member, the non-characterized CUI only responds in an objective manner and provides information fast and efficiently. The experiment required each participant to actively inquire about the next steps from the CUI.
7.1 Participants
We conducted a between-subject user study with 10 participants (5 female, 5 male; between 23 and 30 years old) inside the laboratory module of the habitat in late February 2020. The group of participants was different from the previous two experiments, but more than half of the participants had previous experience using voice assistants.
7.2 Procedure
The participants were asked to build a small electromagnet within our space habitat, guided by PEGASUS. The assembly involved many small and tedious steps, so the participants were forced to interact with PEGASUS. In contrast to the first experiment, the CUI was placed on the participants' workbench next to the assembly area. In the CH condition, the CUI always provided answers in keeping with its character model and added comments to help the participants carry out the experiment and loosen up their mood. Furthermore, the companion adapted its facial expression. We again used the Post-Study System Usability Questionnaire (PSSUQ) [ ] and the NASA-TLX [ ] as quantitative measures, together with the short-form of the Positive and Negative Affect Schedule (PANAS) [ ]. This was followed by semi-structured interviews after completion of the task.
The conversations between PEGASUS and the participants were transcribed to categorize different verbal cues, such as acknowledgments, initiating conversations, recalling conversations, expressing past content, excusing/apologizing, joking, praising, and small talk. Relative values, such as the percentage of requests phrased as questions or the count of "thank you", were calculated per total inquiries [18].
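The normalization of verbal-cue counts by total inquiries can be sketched as below. The cue labels and the sample annotations are illustrative assumptions, not the study's coding scheme or data.

```python
from collections import Counter

# Sketch of normalizing verbal-cue counts by total inquiries, as described
# above. Cue labels and the sample annotations are illustrative only.

def relative_cue_frequencies(annotations, total_inquiries):
    """annotations: one cue label per annotated utterance in a transcript."""
    counts = Counter(annotations)
    return {cue: n / total_inquiries for cue, n in counts.items()}

annotations = ["thank_you", "joke", "thank_you", "small_talk"]
freqs = relative_cue_frequencies(annotations, total_inquiries=20)
print(freqs["thank_you"])  # two "thank you"s over 20 inquiries -> 0.1
```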
7.3 Results
The extent to which the participants felt excited was significantly different between the two conditions according to a Wilcoxon–Mann–Whitney test (p=0.022) (see figure 6, right). While the CH version also resulted in a higher mean task load, a Wilcoxon–Mann–Whitney test showed no significant difference (p=0.2948) compared to the NCH version (M=29). In the CH condition, PEGASUS produced widely spread task load results compared to the NCH version, with a minimum value of 5.0 and a maximum value of 84.0. Participants with a high emotional connection to PEGASUS also rated the CUI as more sympathetic. Experience with voice assistants, however, did not influence the sympathy towards the CUI. While the condition did not influence sympathy towards the CUI either, the stress level of the participants shows a correlation with sympathy (p=0.043), and the emotional connection with the CUI shows such a correlation as well (p=0.037). Figure 6 (left) visualizes these results and shows that low emotional connection and sympathy tend to result in higher stress levels during the experiment. Since the groups' sample sizes are small, these results should be treated carefully. Such a correlation cannot be observed when comparing the emotional connection with the NASA-TLX workload. The interview sessions gave useful insights into how the participants felt during the whole process. The responses were divided: while some participants liked how the CH version was friendly towards them and asked for personal information to get to know them, others felt uncomfortable answering questions about themselves. One participant said, "I felt awkward telling my personal information to a digital entity". The majority of the participants regarded the physical body and the facial expressions as "appealing" and "attractive", which helped them connect with the voice assistant better. Another participant said: "The figure was neat, I thought it's a penguin shape. I enjoyed how it looked with all the pleasant expressions and the tangible body." During the interview sessions it became clear that opinions on the characterized CUI vary between individuals. However, differences in participants' experience with voice assistants did not show any significant influence.
8 Discussion
We performed two user studies with two very different CUIs within a laboratory module of an extraterrestrial space habitat. Most participants liked both CUI concepts and were able to complete the tasks given to them in both experiments.

Figure 6: Correlations between questionnaire responses (left). PANAS excitement for the CH (red) and the NCH version of PEGASUS (green) (right).

We focused on investigating the effects of multimodality and emotional connection with CASSIOPEIA and PEGASUS rather than on reducing crew time [ ]. The number of test subjects is always somewhat smaller in experiments under "more extreme conditions"; therefore, the quantitative results are only valid to a limited extent. Nevertheless, we think that the iteration of the prototypes over a total of three studies provides insights into how CUIs can be designed for space habitats.
Even though the participants of the CASSIOPEIA CUI-ONLY condition took significantly longer to complete the experiment, we saw no difference regarding the SUS and PSSUQ scores. Participants of the CUI-ONLY condition also had to call the system significantly more often. This indicates that even though the perceived usability does not differ between the conditions, the CUI+GUI version is more efficient. The interviews provided many useful insights into how to improve our voice assistant. On the one hand, information provided by the system should be presented concisely to prevent the user from being overwhelmed. On the other hand, the user needs the option to obtain more detailed information if needed. Using a GUI in combination with a CUI seemed to be a more effective way to conduct scientific routines, and the participants' feedback reinforced our design guidelines from the Wizard of Oz study.
In the future, oering detailed information about the equipment
and reagents in the laboratory and habitat could be helpful for such
an assistant system. A future assistant should provide as much
information as possible. However, it should be noted that all the
additional information provided should be optional so that the user
will not be overwhelmed by it. It is also important to note that in
the habitat laboratory’s small space, the placement of CASSIOPEIA
was crucial for the participants. Therefore, we also recommend
having one central system as CASSIOPEIA and complementing it
with a more personal system such as PEGASUS.
Considering the excitement and the level of interest among the participants for PEGASUS, the deployment of such a CUI in space missions shows the potential to provide incentives for the astronauts and thus prevent boredom and monotony. However, it is necessary to consider the novelty effect when preparing for this kind of scenario, and further research has to be conducted to generate more knowledge in this regard. CUIs with embodiment and facial expressions are still rare, since the commercial market provides mostly assistants without visual cues that could add an emotional layer to their responses. It turned out that all participants rated both versions of the system as very interesting. When evaluating the study, it is noticeable that there seems to be a connection between how stressed users were and how sympathetic they rated the system. This is supported by the fact that the emotional connection shows a significant correlation as well. This aspect requires further research with larger test groups to ensure its validity. However, a positive effect of technology on astronauts' stress levels would represent a convenient way to contribute to astronauts' mental well-being.
One crucial question that remains unanswered in this context is how and when an emotional connection with the CUI is created. Our data suggest that sympathy and an emotional connection do not arise from the mere characterization of the CUI. Other studies in this area suggest personality tests carried out in advance with each test person [ ]. To achieve an emotional connection with the CUI, the system's character profile could then be adjusted to match the participant's personality.
9 Conclusion
In this paper, we described the user-centered design process of CUIs for extraterrestrial space habitats. Within three studies, we derived guidelines for such systems and implemented and evaluated two prototypes. With the first week-long Wizard of Oz study on the use of a CUI in a lunar habitat, and from the qualitative feedback of the scientists, we derived four guidelines for the development of CUIs for extraterrestrial habitats. Based on these guidelines, we presented the implementation of two CUIs named CASSIOPEIA and PEGASUS. We performed two additional studies with both prototypes to investigate the effects of multimodality and emotional connection with these systems. Our results can inspire other researchers working on CUIs in other domains, especially in more extreme environments. Well-designed CUIs can help to increase mission success and enrich the experiences of the astronauts on those missions.
In the future, we will perform an additional study comparing CASSIOPEIA against the Wizard of Oz prototype to evaluate the proposed guidelines during a more extended mission (e.g., similar to the Wizard of Oz study period). We will also focus on the impact of a CUI as a team member, as the lunar habitat with its small spatial footprint is challenging for teamwork. It could also be interesting to implement a personal CUI to better serve astronauts' psychological and emotional needs. We believe, as Bentley et al. [ ] also noted recently, that it is necessary to carry out more long-term studies testing CUIs in very different environments [ ] in order to adapt them to the needs of the users. We also believe that such systems are suited to support parastronauts as well [ ]. Surviving in an extraterrestrial environment with limitations in real-time communication and personal isolation can be a severe challenge. We believe that turning those environments into interactive spaces can help to protect astronauts' well-being. PEGASUS, a friendly CUI companion that eases work tension and promotes the well-being of the crew members, can help by adding an emotional layer to conversation. Our studies suggest that the deployment of a characterized CUI might require a prior matching process between the test subject's character and the provided character simulation in order to benefit from it. According to our study, those benefits can range from excitement and interest to significant drops in stress levels. For astronauts, these might be important factors supporting their psychological well-being during their missions. Today, the euphoria for space exploration has reawakened, with plans for colonizing the Moon and Mars [ ]. However, a human-crewed mission to Mars would take about 2.5 years or more. Consequently, it is even more important to actively support and augment the mental well-being and experiences of the astronauts, as such missions can be extremely monotonous and cause cognitive decline, social isolation, and frustration. Interactive technologies such as CUIs can be a part of the solution to these problems.
Acknowledgments
The project is funded by the Klaus Tschira Stiftung gGmbH. The authors would like to thank M. Arnhof, L. Orzechowski, M. Stadtlander, R. Mairose, and P. Prengel from ZARM, as well as the student team of the HCI lab at the University of Bremen, funded by the Volkswagen Foundation through a Lichtenberg professorship, for their contributions to setting up the habitat laboratory. In addition, the authors would like to thank Y. Nahas for his contribution to this particular study. We would especially like to thank all the students and co-authors from the master project "Living on Mars" for carrying out the development of the CUIs and the studies. This work is open access thanks to the University of St. Gallen, Switzerland.
References
Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N. Bennett, Kori Inkpen, et al. 2019. Guidelines for human-AI interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, Paper 3.
Max Bajracharya, Mark W Maimone, and Daniel Helmick. 2008. Autonomy for
mars rovers: Past, present, and future. Computer 41, 12 (2008), 44–50.
Aaron Bangor. 2009. Determining What Individual SUS Scores Mean: Adding an
Adjective Rating Scale. 4, 3 (2009), 10.
Frank Bentley, Chris Luvogt, Max Silverman, Rushani Wirasinghe, Brooke White, and Danielle Lottridge. 2018. Understanding the long-term use of smart speaker assistants. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2, 3 (2018), 91.
Matthias Biniok. 2019. Project CIMON Journal, Stories about DLR’s technology
experiment CIMON.
Dan Bohus and Alexander I Rudnicky. 2005. LARRI: A language-based mainte-
nance and repair assistant. In Spoken multimodal human-computer dialogue in
mobile environments. Springer, 203–218.
Michael Braun, Anja Mainz, Ronee Chadowitz, Bastian Pfleging, and Florian Alt. 2019. At Your Service: Designing Voice Assistant Personalities to Improve Automotive User Interfaces. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI ’19). Association for Computing Machinery, New York, NY, USA, Article 40, 11 pages. https:
Katharina Brauns, Anika Werner, Hanns-Christian Gunga, Martina A. Maggioni, David F. Dinges, and Alexander Stahn. 2019. Electrocortical evidence for impaired affective picture processing after long-term immobilization. Scientific Reports 9, 1 (2019), 1–9.
[9] John Brooke. 1996. SUS - A quick and dirty usability scale. (June 1996), 7.
Julia Cambre, Ying Liu, Rebecca E. Taylor, and Chinmay Kulkarni. 2019. Vitro: Designing a Voice Assistant for the Scientific Lab Workplace. In Proceedings of the 2019 on Designing Interactive Systems Conference. 1531–1542.
Heloisa Candello, Claudio Pinhanez, and Flavio Figueiredo. 2017. Typefaces and
the perception of humanness in natural language chatbots. In Proceedings of the
2017 CHI Conference on Human Factors in Computing Systems. 3476–3487.
Fabio Catania. 2019. Conversational Technology and Affective Computing for Cognitive Disability. In Proceedings of the 24th International Conference on Intelligent User Interfaces: Companion (Marina del Ray, California) (IUI ’19). ACM, New York, NY, USA, 153–154.
Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. 2017. A survey on dialogue systems: Recent advances and new frontiers. ACM SIGKDD Explorations Newsletter 19, 2 (2017), 25–35.
Saemi Choi and Kiyoharu Aizawa. 2019. Emotype: Expressing emotions by
changing typeface in mobile messenger texting. Multimedia Tools and Applications
78, 11 (2019), 14155–14172.
Leigh Clark, Nadia Pantidi, Orla Cooney, Philip Doyle, Diego Garaialde, Justin Edwards, Brendan Spillane, Emer Gilmartin, Christine Murad, Cosmin Munteanu, et al. 2019. What makes a good conversation? Challenges in designing truly conversational agents. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–12.
Yvonne A. Clearwater and Richard G. Coss. 1991. Functional Esthetics to Enhance Well-Being in Isolated and Confined Settings. In From Antarctica to Outer Space. Springer, 331–348.
Nils Dahlbäck, Arne Jönsson, and Lars Ahrenberg. 1993. Wizard of Oz stud-
ies—why and how. Knowledge-based systems 6, 4 (1993), 258–266.
Jasper Feine, Ulrich Gnewuch, Stefan Morana, and Alexander Maedche. 2019. A Taxonomy of Social Cues for Conversational Agents. International Journal of Human-Computer Studies 132 (2019), 138–161.
Jonathan Grudin and Richard Jacques. 2019. Chatbots, humbots, and the quest for artificial general intelligence. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–11.
Sandra G Hart and Lowell E Staveland. 1988. Development of NASA-TLX (Task
Load Index): Results of empirical and theoretical research. In Advances in psy-
chology. Vol. 52. Elsevier, 139–183.
Christiane Heinicke, Leszek Orzechowski, Rawel Abdullah, Maria von Einem,
and Marlies Arnhof. 2018. Updated Design Concepts of the Moon and Mars Base
Analog (MaMBA). In European Planetary Science Congress, Vol. 12.
HI-SEAS. 2020. Hawai’i Space Exploration Analog and Simulation. http://hi-
Eric Horvitz. 1999. Principles of mixed-initiative user interfaces. In Proceedings of
the SIGCHI conference on Human Factors in Computing Systems. ACM, 159–166.
Elaine M Huang, Elizabeth D Mynatt, and Jay P Trimble. 2006. Displays in
the wild: understanding the dynamics and evolution of a display ecology. In
International Conference on Pervasive Computing. Springer, 321–336.
Benjamin Kading and Jeremy Straub. 2015. Utilizing in-situ resources and 3D
printing structures for a manned Mars mission. Acta Astronautica 107 (2015),
Candace Kamm. 1995. User interfaces for voice applications. Proceedings of the
National Academy of Sciences 92, 22 (1995), 10031–10037.
Joseph’Josh’ Kaye, Joel Fischer, Jason Hong, Frank R Bentley, Cosmin Munteanu,
Alexis Hiniker, Janice Y Tsai, and Tawq Ammari. 2018. Panel: voice assistants,
UX design and research. In Extended Abstracts of the 2018 CHI Conference on
Human Factors in Computing Systems. ACM, panel01.
Julia Kiseleva, Kyle Williams, Jiepu Jiang, Ahmed Hassan Awadallah, Aidan C
Crook, Imed Zitouni, and Tasos Anastasakos. 2016. Understanding user satisfac-
tion with intelligent assistants. In Proceedings of the 2016 ACM on Conference on
Human Information Interaction and Retrieval. 121–130.
Lorenz Cuno Klopfenstein, Saverio Delpriori, Silvia Malatini, and Alessandro
Bogliolo. 2017. The rise of bots: A survey of conversational interfaces, patterns,
and paradigms. In Proceedings of the 2017 Conference on Designing Interactive
Systems. ACM, 555–565.
Dounia Lahoual and Myriam Frejus. 2019. When Users Assist the Voice Assistants:
From Supervision to Failure Resolution. In Extended Abstracts of the 2019 CHI
Conference on Human Factors in Computing Systems. ACM, CS08.
James A Landay, Nuria Oliver, and Junehwa Song. 2019. Conversational User
Interfaces and Interactions. IEEE Computer Architecture Letters 18, 02 (2019), 8–9.
Walter S. Lasecki, Rachel Wesley, Jeffrey Nichols, Anand Kulkarni, James F. Allen, and Jeffrey P. Bigham. 2013. Chorus: A Crowd-powered Conversational Assistant. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology (St. Andrews, Scotland, United Kingdom) (UIST ’13). ACM, New York, NY, USA, 151–162.
Larry Laudan. 1978. Progress and its problems: Towards a theory of scientic growth.
Vol. 282. Univ of California Press.
James R. Lewis. 1995. IBM computer usability satisfaction questionnaires:
Psychometric evaluation and instructions for use. International Journal of
Human-Computer Interaction 7, 1 (Jan. 1995), 57–78.
Christine Murad, Cosmin Munteanu, Leigh Clark, and Benjamin R Cowan. 2018.
Design guidelines for hands-free speech interaction. In Proceedings of the 20th
International Conference on Human-Computer Interaction with Mobile Devices and
Services Adjunct. 269–276.
Chelsea Myers, Anushay Furqan, Jessica Nebolsky, Karina Caro, and Jichen Zhu.
2018. Patterns for How Users Overcome Obstacles in Voice User Interfaces. In
Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
(Montreal QC, Canada) (CHI ’18). ACM, New York, NY, USA, Article 6, 7 pages.
Chelsea M. Myers. 2019. Adaptive Suggestions to Increase Learnability for Voice
User Interfaces. In Proceedings of the 24th International Conference on Intelligent
User Interfaces: Companion (Marina del Ray, California) (IUI ’19). ACM, New York,
NY, USA, 159–160.
Youssef Nahas, Christiane Heinicke, and Johannes Schöning. 2019. MARVIN: Identifying Design Requirements for an AI-powered Conversational User Interface for Extraterrestrial Space Habitats.
Cliord Ivar Nass and Scott Brave. 2005. Wired for speech: How voice activates
and advances the human-computer relationship. MI T press Cambridge, MA.
Antti Oulasvirta and Kasper Hornbæk. 2016. HCI research as problem-solving. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems.
Martin Porcheron, Joel E. Fischer, Stuart Reeves, and Sarah Sharples. 2018. Voice interfaces in everyday life. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 640.
Daniela Quiñones and Cristian Rusu. 2017. How to develop usability heuristics: A
systematic literature review. Computer Standards & Interfaces 53 (2017), 89–122.
Boele de Raad and Marco Perugini (Eds.). 2002. Big Five Factor Assessment: Introduction. Hogrefe & Huber Publishers.
Manny Rayner, Beth Ann Hockey, Nikos Chatzichrisafis, Kim Farrell, and Jean-Michel Renders. 2005. A voice enabled procedure browser for the International Space Station. In Proceedings of the ACL Interactive Poster and Demonstration Sessions. 29–32.
Stuart Reeves, Martin Porcheron, Joel E. Fischer, Heloisa Candello, Donald McMil-
lan, Moira McGregor, Robert J. Moore, Rein Sikveland, Alex S. Taylor, Julia
Velkovska, and Moustafa Zouinar. 2018. Voice-based Conversational UX Studies
and Design. In Extended Abstracts of the 2018 CHI Conference on Human Factors
in Computing Systems (Montreal QC, Canada) (CHI EA ’18). ACM, New York, NY,
USA, Article W38, 8 pages.
Steven Ross, Elizabeth Brownholtz, and Robert Armes. 2004. Voice user interface
principles for a conversational agent. In Proceedings of the 9th international
conference on Intelligent user interfaces. Citeseer, 364–365.
Nick Salamon, Jonathan M. Grimm, John M. Horack, and Elizabeth K. Newton.
2018. Application of Virtual Reality for Crew Mental Health in Extended-Duration
Space Missions. In 68th International Astronautical Congress (Ohio, USA) (IAC).
IAF, 25–29.
Stefan Schaer and Norbert Reithinger. 2019. Conversation is Multimodal: Thus
Conversational User Interfaces Should Be As Well. In Proceedings of the 1st
International Conference on Conversational User Interfaces (Dublin, Ireland) (CUI
’19). ACM, New York, NY, USA, Article 12, 3 pages.
Alex Sciuto, Arnita Saini, Jodi Forlizzi, and Jason I Hong. 2018. Hey Alexa,
What’s Up?: A mixed-methods studies of in-home conversational agent usage. In
Proceedings of the 2018 Designing Interactive Systems Conference. ACM, 857–868.
Ingo Siegert, Julia Krüger, Olga Egorow, Jannik Nietzold, Ralph Heinemann, and
Alicia Lotz. 2018. Voice assistant conversation corpus (VACC): A multi-scenario
dataset for addressee detection in human-computer-interaction using amazon’s
ALEXA. In Workshop on Language and Body in Real Life & Multimodal Corpora.
Miyazaki, Japan.
Jennifer Sills, Christiane Heinicke, Marcin Kaczmarzyk, Benjamin Tannert, Aleksander Wasniowski, Malgorzata Perycz, and Johannes Schöning. 2021. Disability in space: Aim high. Science 372, 6548 (2021), 1271–1272.
James Simpson. 2019. How is Siri Different Than GUIs?. In Proceedings of the 24th International Conference on Intelligent User Interfaces: Companion (Marina del Ray, California) (IUI ’19). ACM, New York, NY, USA, 145–146. https://doi.
Bernhard Suhm, Brad Myers, and Alex Waibel. 2001. Multimodal error correction
for speech user interfaces. ACM transactions on computer-human interaction
(TOCHI) 8, 1 (2001), 60–98.
Chiew Seng Sean Tan, Johannes Schöning, Kris Luyten, and Karin Coninx. 2014. Investigating the effects of using biofeedback as visual stress indicator during video-mediated collaboration. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 71–80.
Edmund R. Thompson. 2007. Development and validation of an internationally reliable short-form of the positive and negative affect schedule (PANAS). Journal of Cross-Cultural Psychology 38, 2 (2007), 227–242.
Igor’ Borisovich Ushakov, Boris Vladimirovich Morukov, Yuri Arkad’evich Bubeev, Vadim Igorevich Gushin, Galina Yur’evna Vasil’eva, Alla Gennad’evna Vinokhodova, and Dmitrii Mikhailovich Shved. 2014. Main findings of psychophysiological studies in the Mars 500 experiment. Herald of the Russian Academy of Sciences 84, 2 (March 2014), 106–114.
Peggy Wu, Tammy Ott, and Jacki Morie. 2016. ANSIBLE: Social Connectedness
through a Virtual World in an Isolated Mars Simulation Mission. In Proceedings
of the 2016 Virtual Reality International Conference (Laval, France) (VRIC ’16).
Association for Computing Machinery, New York, NY, USA, Article 28, 4 pages.
Jennifer Zamora. 2017. I’m Sorry, Dave, I’m Afraid I Can’t Do That: Chatbot
Perception and Expectations. In Proceedings of the 5th International Conference
on Human Agent Interaction. ACM, 253–260.
Conrad Zeidler, Gerrit Woeckner, Johannes Schöning, Vincent Vrakking, Paul Zabel, Markus Dorn, Daniel Schubert, Birgit Steckelberg, and Josefine Stakemann. 2021. Crew time and workload in the EDEN ISS greenhouse in Antarctica. Life Sciences in Space Research (2021).