A Cognitive Assistant for Entry, Descent, and Landing
Samalis Santini De León
Department of Aerospace Engineering
Texas A&M University
College Station, TX 77843
David W. Way
NASA Langley Research Center
Hampton, VA 23666
Abstract—Entry, Descent and Landing (EDL) architecture performance and uncertainty analysis relies heavily on end-to-end simulation, given that EDL system verification and validation is limited in Earth environments. Overall system assessment and success criteria evaluation are performed by employing Monte Carlo dispersion analysis. These simulations produce large data sets that are manually analyzed by subject matter experts, who try to identify correlations between parameters and to assess the sensitivity of figures of merit to simulation parameters. Such analysis work is critical, given that it could lead, for example, to the discovery of major flaws in a design. While subject matter experts can leverage their knowledge of and experience with past systems to identify issues and features of interest in the current dataset, the next generation of EDL systems will use new technologies to land larger payloads, and may present unprecedented challenges that could be missed by a human analyst.
In this paper, we propose integrating Daphne, a cognitive assistant, into the EDL architecture analysis process to support EDL experts by identifying key factors that impact EDL system metrics. Specifically, this paper describes the current capabilities of Daphne as a platform for EDL architecture analysis by means of a case study of a sample EDL architecture for an ongoing NASA mission, Mars 2020. Given that this work is in its early development, the paper focuses on describing the expert knowledge base and historical database developed for the cognitive assistant, as well as on how experts can use it to obtain information relevant to their EDL analysis process through natural language or web visual interactions, thus reducing the effort of searching for relevant information across multiple sources.
TABLE OF CONTENTS
1. INTRODUCTION
2. BACKGROUND
3. DAPHNE
4. ADAPTING DAPHNE FOR EDL ARCHITECTURE ANALYSIS
5. DAPHNE/EDL ARCHITECTURE
6. CASE STUDY
7. CONCLUSIONS
ACKNOWLEDGMENTS
REFERENCES
BIOGRAPHY
1. INTRODUCTION
Entry, Descent, and Landing (EDL) consists of a series of events and maneuvers required to land a payload, or vehicle, on a planet, and it is one of the most challenging phases of an interplanetary mission. On Mars, EDL is especially challenging because the Martian atmosphere is roughly one hundred times less dense than Earth's. Hence, EDL systems on Mars must decelerate from hypersonic to subsonic speeds at low altitudes, reducing the time available for subsequent events in the sequence to occur.
Because replicating the Martian environment is unfeasible, EDL architecture analysis requires analyzing an umbrella of architectures with high-fidelity simulations to assess performance, cost, schedule, and risk under uncertainty. In addition to the limitations of Earth-based testing, Mars EDL trajectories are highly coupled to major sources of uncertainty that include, but are not limited to, vehicle aerodynamics; launch window; and atmospheric conditions during day-of-entry events. NASA uses the Program to Optimize Simulated Trajectories (POST-2) to simulate different entry conditions under many model parameterizations (e.g., gravity, planetary geometry, atmospheric, aerodynamic, control system, guidance, and navigation models). POST-2 uses Monte Carlo dispersion analysis techniques to help users evaluate performance, assess mission-level feasibility, identify off-nominal behavior, and support system design trades, among other tasks.
POST-2-based Monte Carlo dispersion analyses are employed
as early as Pre-phase A of a mission’s lifecycle. During
this phase, engineers evaluate simple models for conceptual
design studies to identify feasible system concepts, evaluate
alternative architectures, and draft system-level requirements. Due to the inherent simplicity of the models and the system-level definition of the vehicle used in these early stages of a mission's lifecycle, analysis of the data produced by these simulations is relatively simple. However, as a mission lifecycle progresses toward day-of-entry events, system and model complexity increase, and so do the size and complexity of the EDL Monte Carlo simulations. For the Mars Science Laboratory (MSL), simulations sampled hundreds of input variables and produced thousands of scenarios.
The large datasets produced in simulations are manually
analyzed by the subject matter experts, who try to ﬁnd inter-
esting correlations and couplings between parameters, and to
assess the sensitivity of ﬁgures of merit to various simulation
parameters. This analysis work is important since it may lead
to the discovery of a major ﬂaw in a design, for example.
However, the current approach suffers from one important
limitation. Whereas the subject matter expert can leverage
her or his knowledge and expertise on past systems to identify
issues and features of interest in a dataset, the next generation
of EDL systems for landing heavier payloads on the Martian
surface may present unprecedented challenges.
Due to the inherent limitations of expert-based analysis, we
believe EDL architecture analysis can beneﬁt from compu-
tational advances to reduce analysis cycle time, minimize
architecture lifecycle costs, and achieve mission success. In
particular, we are interested in incorporating Intelligent Data
Understanding (IDU) technologies (e.g., machine learning)
into the architecture analysis process. However, one of the
limitations of most IDU technologies is poor interactivity
with the user. Many machine learning models and feature
extraction algorithms work essentially as black boxes, which
implies two things: (1) it is hard to interpret and thus trust
their outputs; (2) it is hard to incorporate expert knowledge
into their learning process. In addition, these technologies
often provide more information than the end-user deems relevant or can absorb and make sense of. These
shortcomings can be at least partially overcome with ad-
vanced user-interaction capabilities that allow the subject
matter expert to be in the loop. In this paper, we discuss
one such technology, namely a cognitive assistant (CA) that
can support human-machine interaction speciﬁc to the EDL
architecture analysis domain. Speciﬁcally, the paper focuses
on the implementation of the knowledge sources into the CA
and the automation and user-interactivity aspects of the CA.
The long-term goal of this work is to advance the state of
the art of ofﬂine IDU technologies for architecture analysis
of EDL by incorporating an intelligent assistant that helps
the subject matter expert analyze complex architectures and
communicates critical issues. We want this system to be
able to extract information by means of data-driven and
expert-based knowledge discovery techniques. Ultimately,
we wish to enable mixed-initiative approaches in intelligent
data understanding. This paper describes our first steps in this direction.
The remainder of the paper is structured as follows. Section
2 provides an overview of the EDL architecture analysis
process, its challenges, and the rationales for exploring IDU
technologies. Section 3 introduces Daphne, a cognitive assis-
tant developed for Earth-observing satellite architecting prob-
lems. Section 4 describes how Daphne has been adapted and
extended for EDL architecture analysis. Section 5 describes
the current Daphne/EDL architecture. Section 6 presents a
use-case scenario for utilizing Daphne for EDL architecture
analysis. Finally, Section 7 describes the current limitations
of the existing implementation and the plans for future work.
2. BACKGROUND
The EDL Architecture Analysis Process
High-level decisions made during the creation and deﬁnition
of a system architecture are critical given that they commit most of the system's lifecycle costs and define the system's behavior, complexity, and emergent properties (e.g., robustness, scalability, flexibility, reliability). However,
this task becomes increasingly complex for EDL. Due to
the inherent limitations of Earth-based testing, Mars EDL
architecture analysis for NASA missions has relied up to
the present day on computer simulations to gain insight into
a variety of complex entry problems. These simulations
are constructed from a library of deterministic models that
have been reﬁned throughout the years to support different
vehicle systems (e.g. Space Shuttle, Mars Pathﬁnder, MSL).
These include but are not limited to vehicle-speciﬁc models
(e.g. aerodynamics, control system, guidance and navigation
models); and planetary-speciﬁc environment models (e.g.
gravitational, planetary geometry, atmospheric models).
Due to the deterministic nature of the models available in
POST-2, high ﬁdelity POST-2-based Monte Carlo analysis is
critical for supporting system design, integration and oper-
ations throughout a mission’s lifecycle. Simulation results
help identify areas of risk associated with certain mission
phases that result from randomly varying entry conditions
(e.g. entry interface, atmospheric conditions) and varying
vehicle conﬁgurations (e.g. lift-to-drag ratio, entry ﬂight
path angle), and help quantify the robustness of a given EDL architecture.
Figure 1 shows an overview of the EDL-domain architecture Monte Carlo analysis process followed every time a simulation is conducted. Simulation results contain 8001 cases. Each
case is one vehicle conﬁguration that results from hundreds
of randomly generated model parameters and contains thousands of output parameters. Examining the values and statistics of thousands of output parameters individually is time-consuming and prone to missing relevant features. Consequently, NASA uses a Scorecard to summarize
the simulation outputs from a particular simulation for com-
parison against project performance metrics. The Scorecard
is a type of summary report that describes mission-speciﬁc
system performance metrics, the main simulation results (e.g.
percentiles, means), threshold values, and whether the results
satisfy the system requirements. EDL teams can examine
the scorecard and identify the metric requirements that are
not being satisﬁed and explore speciﬁc cases that might be
contributing to a particular system behavior. Simulation ex-
perts also examine the packages of plots and identify potential
outliers. For example, interesting cases to look at would be
points that fall out of the landing ellipse. This second step
however, often requires examining hundreds of plots, and
potentially hundreds of statistics of individual variables in
an attempt to identify all driving features. For the selected
cases (such as flagged or out-of-spec cases), experts often plot their trajectories in an attempt to explain the system's behavior.
During this process there are several common questions that
arise. Often these questions concern identifying the source
or sources of a particular observed behavior or why case Y
behaves differently than case Z, for example. However, to
answer these questions, experts must navigate through large
datasets with the objective of identifying potential features
of interest and commonalities between cases. This task is extensive and time-consuming. More specifically, all of the tasks enclosed by a red rectangle in Figure 1 require that experts leverage their knowledge and expertise to manually identify features of interest and critical parameters in a dataset. There is no prescription for the analysis of such complex simulation results; unfortunately, this task often relies on the team's expertise with the system under study.
This research seeks to help experts answer these questions.
Figure 1. EDL Architecture verification and validation
For fully integrated six-degrees-of-freedom vehicle simulations in other domains of application, the procedure is similar to the one described. To examine simulations, experts
typically: examine the statistics of simulation outputs (e.g.
peak deceleration and altitude at which peak deceleration
occurs); attempt to identify sensitivities of outputs to inputs;
and identify cases that fail to satisfy system requirements. For example, percentiles of key performance parameters are examined to verify that X percent of all cases satisfy the system requirements (e.g., timeline margin, fuel consumption). Identifying sensitivities, on the
other hand, is often achieved by means of scatter plots of the
output variables against individual input parameter distribu-
tions. This process would have to be repeated for each input-
to-output relationship the expert is interested in.
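The percentile check described above can be sketched in a few lines. The parameter name, distribution, and requirement threshold below are illustrative stand-ins, not values from an actual POST-2 scorecard:

```python
import random

def percentile(values, pct):
    """pct-th percentile with linear interpolation between sorted samples."""
    s = sorted(values)
    k = (len(s) - 1) * pct / 100.0
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

# Hypothetical Monte Carlo output: timeline margin [s] for 8001 cases.
random.seed(0)
timeline_margin = [random.gauss(16.0, 3.0) for _ in range(8001)]

# Illustrative requirement: 99% of cases must keep at least 5 s of margin,
# i.e., the 1st percentile (the worst-1% boundary) must exceed 5 s.
p1 = percentile(timeline_margin, 1)
meets_req = p1 >= 5.0
print(f"1%-tile timeline margin: {p1:.2f} s -> requirement met: {meets_req}")
```

The same check is repeated for every key performance parameter in the scorecard.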
The most common technique employed for sensitivity anal-
ysis in the EDL domain is the “one-at-a time” approach. In
other words, sensitivities are calculated by varying a single
input variable (or in this case, a set of related variables) in
a model while maintaining all other inputs of all models at
their nominal values. For example, to assess the sensitivity
of aerodynamics in the POST-2 trajectory analysis, a Monte
Carlo simulation is executed with uncertainty in aerodynam-
ics model parameters while maintaining all other models
(e.g., gravitation, entry conditions) at their nominal values.
This process is repeated for all models, and sensitivities to
each model are compared by means of scatter plots and
3σ dispersion analysis. Although this approach is straight-
forward, it fails to identify dependencies and interactions
between input variables.
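A minimal sketch of the one-at-a-time approach follows, using a toy trajectory stand-in rather than POST-2; the model names, dispersion magnitudes, and output metric are assumptions for illustration only:

```python
import random
import statistics

def run_case(aero_scale, atmo_scale):
    """Toy stand-in for one trajectory case: peak deceleration in Earth g.
    The functional form is purely illustrative, not a POST-2 model."""
    return 11.0 * aero_scale * atmo_scale + random.gauss(0.0, 0.1)

def monte_carlo(vary, n=2000):
    """One-at-a-time: disperse only the named model, hold others nominal."""
    out = []
    for _ in range(n):
        aero = random.gauss(1.0, 0.05) if vary == "aero" else 1.0
        atmo = random.gauss(1.0, 0.10) if vary == "atmo" else 1.0
        out.append(run_case(aero, atmo))
    return out

random.seed(1)
for model in ("aero", "atmo"):
    g = monte_carlo(model)
    # Compare 3-sigma dispersions to rank sensitivity to each model.
    print(f"{model}: mean {statistics.mean(g):.2f} g, "
          f"3-sigma {3 * statistics.stdev(g):.2f} g")
```

Because each run disperses one model in isolation, interaction effects between, say, aerodynamics and atmosphere never appear in the comparison, which is exactly the limitation noted above.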
Intelligent Data Understanding Technologies
Up to the present day, IDU technologies have been commonly
employed to support mission operations, but their use in
performance analysis during mission development has not
been explored. In particular, these technologies have been
employed by NASA for on-board data processing and anal-
ysis capabilities. Some of the advantages of IDU for these
applications include automated detection of events (e.g., an
anomaly in a telemetry sensor, or the presence of a ﬁre in an
image of a forest), and automated on-board intelligent actions
responding to those events (e.g., switching the spacecraft to
safe mode, or slewing the spacecraft to observe the ﬁre for
a longer time). The state of the art for these technologies is
set by NASA’s Space Cube 2.0 and NASA’s Earth Observing
1 (EO-1) spacecraft. The Space Cube 2.0 possesses data
reduction and on-board processing capabilities that provide
the system with first-responder real-time awareness.
Similarly, NASA’s EO-1 conducts on-board planning and
scheduling that enables the system to detect science targets
and change the plan accordingly.
Another advance in automated intelligent data understanding
technologies that is outside of the space domain but perhaps
more relevant to our efforts is the Automated Statistician
(http://www.automaticstatistician.com). This tool seeks to
automate all aspects of understanding and explaining data.
The Statistician's focus is to build models from the data provided, generate explanations, and deliver the extracted knowledge to the user in the form of a report in natural language.
One drawback of such approaches is that the end user has to
navigate through a large amount of automatically generated
information to ﬁnd aspects of the data she/he is interested
in, potentially resulting in information overload. We
argue that tools such as the Automated Statistician can beneﬁt
from more interactivity with the user, for example to allow
him or her to interactively specify the type of information,
family of models, or region of interest in a dataset for further
analysis. The interaction between the human and the tool can
thus be enhanced by means of cognitive assistants with ad-
vanced dialogue and interactive capabilities. Others have also
emphasized the potential of human-in-the-loop data mining
tools to improve the process of knowledge extraction.
Cognitive assistants have been explored as a viable platform
to provide decision-making support to experts in the face of
uncertainty. In addition, they provide one of the capabilities
IDU technologies lack: advanced user interactivity. Unlike
other artiﬁcial intelligence tools and applications, CAs can
obtain domain-speciﬁc knowledge in ways that follow a
teacher-apprentice approach. Hence, a CA can learn from "rules of thumb", the dos and don'ts of a domain-specific application. However, CAs can still exploit the AI and data analysis techniques that constitute the essence of IDU technologies to quantify the probabilities and states of a particular decision. A CA can be useful for identifying features of interest in
a design; analyzing and communicating the ﬁndings to team
members; providing historical or contextual information; and
more generally reducing cognitive load on the team members. In the context of EDL, CAs can help experts identify
anomalies, features of interest, and extract knowledge that
could potentially not be attainable by manually examining
a simulation data set of the architecture under evaluation,
for example. To exploit the interactive features that characterize CAs, EDL teams can tell the assistant which aspects of the data they are interested in and what type of analysis to conduct.
To date, CAs in the aerospace domain have mostly been created with the intent of providing situational awareness and supporting subsequent operational decision-making tasks. For example, COGAS, a CA, supports Navy ships in air target
identiﬁcation. COGAS makes use of sensor information and
a-priori expert knowledge contained in their models (e.g.,
operator activities in their work domain) to process acquired
data, identify and analyze the system’s state, establish system
goals, and activate the appropriate procedure. Other CAs
within this application domain include the Crewed Assistant
Military Aircraft (CAMA) and the Digital Copilot.
Along these lines, interest has arisen in integrating CAs for
supporting astronaut crew during missions beyond Low Earth
Orbit (LEO), especially in off-nominal conditions when there
is a long communication delay between Earth and the space
vehicle. As in the previous examples, a CA for space crew support, with some level of automation, should have the capabilities to diagnose a problem, provide recommendations to the crew during emergency situations based on previous knowledge, evaluate the diagnoses, perform risk trade-offs, and evaluate and generate procedures.
3. DAPHNE
Daphne is a CA that specializes in system architecting prob-
lems for Earth observing (EO) satellite systems. The main
goal of Daphne is to help experts in the architecture analysis
process by providing relevant information, advice, and feed-
back that address strengths and weaknesses of a particular
design. These capabilities help minimize the cognitive
load on experts by reducing the need to manually search
through multiple sources of information.
Figure 2 shows Daphne's architecture for EO satellite archi-
tecture analysis, consisting of four layers. The ﬁrst layer, the
front end, serves as a platform for the user to interact and
communicate with Daphne. Requests made either in natural
language or through the web visual interface are passed on to
Daphne's brain, the server, where the request is forwarded
to the respective back-end modules. After the information
requested is retrieved from the back-ends, the Daphne server
returns the response to the user through the web interface.
Questions or requests made to Daphne are processed through
HTTP or Websocket requests. Questions made in natural
language form are directed to the Sentence Processor, which
makes use of a Convolutional Neural Network (CNN) to classify the question and direct it to the "skill" required to answer that particular type of question. For example, a ques-
tion such as “what missions were launched in 2018?” would
be classiﬁed as a question for the Historian skill. Each “skill”,
or role, makes use of multiple algorithms and knowledge
extraction techniques at the back-end that extract knowledge
from the data sources available. Knowledge available in
Daphne is stored in three primary data sources: a historical
database, an expert-knowledge base, and the current dataset.
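The request flow described above can be sketched roughly as follows. The CNN classifier is replaced here by a simple keyword heuristic and the handler bodies are placeholders; only the skill names come from the paper:

```python
# Minimal sketch of Daphne's request routing. The real system classifies
# questions with a CNN trained on labeled examples; a keyword heuristic
# stands in for it here purely for illustration.

def classify(question: str) -> str:
    """Toy stand-in for the CNN sentence classifier."""
    q = question.lower()
    if "launched" in q or "mission" in q:
        return "historian"
    if "improve" in q or "weakness" in q:
        return "critic"
    return "analyst"

# Placeholder back-end handlers keyed by skill name.
SKILLS = {
    "historian": lambda q: "Querying the historical mission database...",
    "critic":    lambda q: "Checking the expert knowledge base rules...",
    "analyst":   lambda q: "Computing statistics on the current dataset...",
}

def handle(question: str) -> str:
    """Route a natural-language request to the matching back-end skill."""
    return SKILLS[classify(question)](question)

print(handle("What missions were launched in 2018?"))
```

In the real system each skill draws on the three data sources (historical database, expert knowledge base, current dataset) rather than returning a canned string.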
Daphne Skills—Daphne has four primary capabilities: 1) the
Analyst, 2) the Critic, 3) the Historian, and 4) the Explorer.
The analyst is in charge of answering questions about the
design under analysis. The critic skill takes the proposed
design and provides feedback to the users about the strengths
and weaknesses of the design. In addition, it provides
suggestions on how to improve the design. Critiques come
from the information available in the Expert Knowledge Base
in the form of rules of thumb (rule-driven), from a historical
database of past missions (legacy-driven), or from the cur-
rent dataset (data-driven). The Historian provides historical
information on previous missions and can be used during the
design process to check whether selected parameters and the
design being evaluated are similar to those of past designs.
Example questions include “What is the most common orbit
for ice cloud detection?”. Finally, the Explorer executes
a genetic algorithm in the background. As Daphne ﬁnds
solutions that improve the current Pareto front, Daphne asks
the user if she/he wants these solutions in the current dataset.
4. ADAPTING DAPHNE FOR EDL ARCHITECTURE ANALYSIS
This section describes how Daphne’s capabilities have been
extended to the EDL architecture analysis domain and how
Daphne can be of use for the EDL architecture analysis
process. Deﬁning a complex system such as a satellite
or planetary mission requires end-to-end simulation models
to simulate the system's behavior and interactions with its surroundings to a sufficient level of detail to enable experts to quantify the system's performance. However, given that
the performance metrics of interest in an EDL architecture
analysis problem (e.g. altitude performance, peak entry
environments) and nature of the simulation (6-DOF trajec-
tory) are distinct from those in the Earth observing satellite
architecture problem, it was necessary to incorporate EDL
data/knowledge sources. These include:
1. A historical database containing system performance met-
rics of past EDL missions.
2. An expert knowledge base that contains rules of thumb
and analysis criteria for EDL.
EDL Historical Database
The EDL historical database was created to provide subject matter experts with information about previous EDL missions. However, unlike for Earth-observing satellite missions, there is no online database of previous Mars EDL missions to support the coordination of EDL architecture analysis for future planetary missions. Furthermore, creating a database in
the EDL domain is challenging due to the number of variables
involved in these complex multibody vehicle systems. Conse-
quently, we established two requirements for the implemen-
tation of the EDL database. First, the database shall contain
descriptive information about mechanisms employed during
the EDL sequence of each mission. Some of these are, for
example, type of entry (direct/orbit), entry lift control (center-
of-mass offset/no offset), entry guidance (unguided/guided),
and descent attitude control (RCS roll rate/none), among
others. Such information can provide experts with contextual
data when examining metrics of different architectures. And
second, the database should contain information that is shared
across EDL architectures. This consideration is driven by the
fact that limited information is available from past missions
and that comparison across missions can only be achieved
if different vehicle systems can be described using com-
mon performance metrics. For example, although different
missions have employed different mechanisms for entry lift
control, common performance metrics include peak deceler-
ation and peak heat rate, among others. Thus, the resulting
database contains system performance metric drivers that can
be traced to level-1 requirements shared across missions.
These EDL system performance drivers are captured by six overarching themes, depicted in Table 1 and described in Ref.: altitude performance, range to target, time on radar, peak entry environments, wind sensitivity, and propellant use.
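A minimal sketch of how such a historical database could be structured and queried follows; the schema, mission names, and metric values are all hypothetical placeholders, not actual mission data:

```python
import sqlite3

# Illustrative sketch of the EDL historical database: one row per mission,
# mixing descriptive mechanism fields (first requirement) with performance
# metrics shared across architectures (second requirement).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE edl_missions (
        name TEXT PRIMARY KEY,
        entry_type TEXT,          -- direct / from orbit
        entry_guidance TEXT,      -- guided / unguided
        peak_decel_g REAL,        -- shared metric: peak deceleration [g]
        peak_heat_rate REAL       -- shared metric: peak heat rate
    )""")
conn.executemany(
    "INSERT INTO edl_missions VALUES (?, ?, ?, ?, ?)",
    [("MissionA", "direct", "unguided", 9.0, 100.0),
     ("MissionB", "from orbit", "guided", 12.5, 45.0)])

# Common metrics are what make cross-mission comparison possible:
row = conn.execute(
    "SELECT name, peak_decel_g FROM edl_missions "
    "ORDER BY peak_decel_g DESC LIMIT 1").fetchone()
print(f"Highest peak deceleration: {row[0]} at {row[1]} g")
```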
Expert Knowledge Base
The Scorecard discussed in Section 2 was used as the expert
knowledge base for the EDL skill given that it provides a
standardized knowledge repository that is shared among all
EDL groups. The scorecard provides a dictionary between
natural language-form descriptions and mathematical models
Daphne can make use of for analysis and calculations. For
example, the metric described as fuel consumption contains
a ﬁxed number of entries, each containing the ﬂag and out
of spec values, units, description of the metric, the POST-2 results, and the calculation required to obtain the metric value.
Figure 2. Daphne Architecture for EO.
In addition, it contains thresholds and conditions that
can be translated into rules by employing "if-then" statements to quickly identify mission-specific requirements that are not satisfied, or are close to not being satisfied.
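The translation of Scorecard thresholds into "if-then" rules could look roughly like this; the metric names, values, and threshold semantics below are illustrative assumptions, not taken from an actual Mars 2020 Scorecard:

```python
# Sketch of turning Scorecard threshold entries into rules that flag
# metrics which are out of spec or close to it.

SCORECARD = [
    # metric, simulated value, flag threshold, out-of-spec threshold
    {"metric": "fuel consumption [kg]", "value": 272.0,
     "flag": 270.0, "out_of_spec": 290.0},
    {"metric": "timeline margin [s]", "value": 14.0,
     "flag": 10.0, "out_of_spec": 5.0},
]

def evaluate(entry):
    """Classify one metric; lower-is-better when flag < out_of_spec."""
    v = entry["value"]
    if entry["flag"] < entry["out_of_spec"]:   # lower is better
        if v >= entry["out_of_spec"]:
            return "OUT OF SPEC"
        if v >= entry["flag"]:
            return "FLAGGED"
    else:                                      # higher is better
        if v <= entry["out_of_spec"]:
            return "OUT OF SPEC"
        if v <= entry["flag"]:
            return "FLAGGED"
    return "OK"

for e in SCORECARD:
    print(f"{e['metric']}: {evaluate(e)}")
```

Each entry is self-describing, so adding a requirement means adding a row rather than writing new rule code.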
Tailoring Daphne to EDL architecture analysis needs
Types of Information of Interest in the Context of EDL—
Establishing what the types of information subject matter
experts deem relevant during the EDL architecture analysis
process was done by two lines of investigation: literature
reviews and discussions with one of the authors, an EDL
expert. Knowledge and information extracted during this process were used to develop a set of questions and commands that Daphne should respond to in order to assist the experts.
A survey of the techniques employed for the analysis of the Monte Carlo simulation outputs discussed in Section 2 suggested that subject matter experts first seek to acquire a sense of the statistics of variables of interest and their sensitivities. Experts are then inclined to
identify stressing cases (e.g. ﬂags and out of spec) for
further investigation. In the process of identifying features of
interest, the analysis is driven by comparing stressing cases
to nominal cases in an attempt to identify commonalities and
differences between them. During this process, experts make
use of visual aids (e.g. variable plots and statistical plots)
and conduct extensive search of the dataset for identifying
distinctive features that explain the system’s behavior.
From the frequent discussions held with an expert in EDL
end-to-end simulation analysis, we generated a set of prelimi-
nary question types (QT) and actions (AC) that emerge during
the analysis process discussed:
•QT: What are the statistics (e.g. mean, min, max, 99%-tile)
of parameter X ?
•QT: Is parameter X correlated with parameter Y ?
•QT: How is the result from mission/simulation A different
from mission/simulation B ?
•QT: Why is case X failing ?
•QT: What do cases A to C have in common ?
•AC: Find the value of parameter X for a mission/simulation.
•AC: Plot statistics (e.g. histogram, quad-quad plot, CDF).
•AC: Plot parameter X vs. parameter Y.
•AC: Identify a stressing case.
•AC: Plot the evolution of a parameter over time, possibly for several cases at once.
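As an illustration of the correlation question type above ("Is parameter X correlated with parameter Y?"), the check can be sketched over synthetic dispersed outputs; the parameter names, units, and values are hypothetical:

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical dispersed input/output pairs from one Monte Carlo run:
random.seed(2)
entry_velocity = [random.gauss(5.8, 0.05) for _ in range(8001)]  # km/s
peak_decel = [11.0 * v / 5.8 + random.gauss(0.0, 0.05)
              for v in entry_velocity]                           # g

r = pearson(entry_velocity, peak_decel)
print(f"r = {r:.2f}")  # a strong positive correlation is expected here
```

A value of |r| near 1 answers the question affirmatively; a near-zero value suggests looking at other inputs or at nonlinear couplings a linear coefficient cannot capture.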
Use Cases of Daphne— A survey of the EDL architecture
analysis process helped identify two use cases in which
Daphne can be of aid to experts: 1) by reducing the cogni-
tive load and the manual labor of having to search through
multiple sources of information and 2) by providing analysis
and insights on a particular architecture.
The ﬁrst item mentioned is relevant for individual and col-
lective analysis of EDL architectures. For example, due to
the human-like nature of CAs, Daphne could be incorporated
into a collective setting where experts discuss the results of
metrics from multiple simulations (e.g. different landing
sites) and assess system performance of each. At the mo-
ment, this task requires that experts search for the relevant
simulation data set and extract the values of the metric(s)
they are interested in. In some cases, additional calculations
are required. This process is repeated for each simulation.
Hence, we envision that Daphne could do this for the user. By
means of natural language, the subject matter expert can ask
Daphne for the results she/he is interested in without going
through the manual labor of searching and loading each data
set and calculating the metric of interest.
Table 1. EDL System Performance Metrics

System Performance Theme | Description | Performance drivers (examples)
Altitude performance | Good altitude performance enables the system to land on higher-elevation sites. | Parachute deploy altitude, entry velocity, atmospheric density, arrival geometry
Range to target | Distance to science target. | Entry guidance, initial navigation errors, range error, parachute deploy altitude
Time on radar | Direct measure of timeline margin: time until conditions for radar acquisition and backshell separation are met. | Heat shield jettison, altitude, spacecraft off-nadir angle
Peak entry environments | Critical peak conditions include: peak deceleration, dynamic pressure, heat rate, heat load. | Entry velocity, mass, atmospheric density, entry angle of attack
Wind sensitivity | Presence of tail-winds results in parachute deploy altitude loss. | Mach number estimation error: velocity error, speed-of-sound error, wind error
Propellant use | On-board propellant is a fixed quantity and must be closely managed. | Velocity losses: gravity loss, cosine loss
Along these lines, we envision that Daphne can analyze and
identify critical information in a simulation and communicate
the ﬁndings to the user. For example, Daphne can identify
critical parameters that drive a particular architecture’s behav-
ior as well as extract and compare features of interest across
missions or simulations.
5. DAPHNE/EDL ARCHITECTURE
Figure 3 presents the current implementation of Daphne and
the respective front-ends, back-ends, and data sources for the
EDL role. Operations of Daphne remain unchanged from
those discussed in Section 4. In the current implementation of
Daphne, all EDL-related requests made in the user interface
are directed to the Daphne server through HTTP/Websockets,
a bi-directional line of communication established between
the client and the Daphne brain. EDL-related queries or
commands are processed by the Sentence Processor’s CNN
and classiﬁed as an EDL role. Requests are then processed
by the EDL query builder. The Query Builder uses JSON ﬁle
templates to identify the type of query, extract the features of
interest in the query (e.g., mission name, parameter name),
and direct the query to the respective executable functions
and data sources used to generate the response. A response is
then created and directed back to the client.
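A rough sketch of how JSON templates could drive the Query Builder follows; the template format, query type, and feature names are assumptions for illustration, not Daphne's actual templates:

```python
import json
import re

# Each template pairs a regex that recognizes one query type with named
# groups for the features to extract (e.g., mission name, parameter name).
TEMPLATES = json.loads("""
[
  {"type": "metric_lookup",
   "pattern": "what was the (?P<parameter>[\\\\w ]+) for (?P<mission>\\\\w+)"}
]
""")

def build_query(text):
    """Match a request against the templates and extract its features."""
    for tpl in TEMPLATES:
        m = re.match(tpl["pattern"], text.lower())
        if m:
            return {"type": tpl["type"], **m.groupdict()}
    return None

q = build_query("What was the entry velocity for MSL?")
print(q)
```

The extracted dictionary is what gets handed to the executable functions and data sources that generate the response.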
A stand-alone visual interface was created for EDL-type queries. Largely based on the interface created for Daphne for
EO missions, the visual interface contains a panel where the
user can write a question or request to Daphne (the user can also just speak). The other two panels are the plot and answer
panels. The plot panel provides a visualization in response to
the data users request. This plot is interactive, so users can
hover over the data and obtain the value of a speciﬁc point.
At the moment, Daphne can present two types of plots. The
ﬁrst plot shown in Figure 4 is a statistical plot. This plot is
generated when a user asks about the statistics of a particular
metric from a speciﬁc simulation ﬁle. The plot contains a
histogram of the relevant data and its cumulative distribution function.
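The quantities behind such a statistical plot can be computed in a few lines. This stand-alone sketch produces the summary statistics, histogram bin counts, and empirical CDF that a front end could then render; the function name and bin count are illustrative.

```python
import statistics

def metric_summary(samples, bins=5):
    """Compute summary statistics, histogram bin counts, and the
    empirical CDF for a list of Monte Carlo metric samples (sketch)."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0          # guard against zero-spread data
    counts = [0] * bins
    for x in samples:
        i = min(int((x - lo) / width), bins - 1)  # clamp the max into last bin
        counts[i] += 1
    n = len(samples)
    # Empirical CDF: sorted sample values paired with cumulative fraction.
    cdf = [(x, rank / n) for rank, x in enumerate(sorted(samples), start=1)]
    stats = {"mean": statistics.mean(samples), "min": lo, "max": hi,
             "stdev": statistics.stdev(samples)}
    return stats, counts, cdf

stats, counts, cdf = metric_summary([1.8, 1.9, 2.0, 2.1, 2.1, 2.4])
```

The histogram counts and CDF pairs map directly onto the two curves shown in the statistical plot.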
The second plot is a scatter plot that is automatically gener-
ated by Daphne when the user requests a plot of two metrics
from a particular simulation ﬁle. The user can hover over the
scatter plot to see the exact values in the x-axis and y-axis as
well as the case number of that data point. Knowing the case
number of data points is useful for identifying stressing cases.
Finally, the panel below the plots displays responses in
natural language form. Use cases of Daphne for EDL are
described in more detail in the following section by means of
a case study.
The EDL assistant accepts questions in natural language
form. Figure 5 presents the question classiﬁcation process.
As depicted in the ﬁgure, requests from the user are classiﬁed
into either commands or questions. In this case, we assume
that the user asks Daphne “What was the entry velocity for
MSL?”. Once the request is classiﬁed as a question, Daphne
proceeds to classify the request by type (e.g., EDL). This
task is achieved by means of a Convolutional Neural Network
(CNN). The existing algorithm in Daphne was retrained to be
able to classify questions regarding EDL. Whenever this role
is active, Daphne executes a MATLAB engine that is used
to support the skill. Reference  contains a more detailed
description of the CNN model implemented in Daphne.
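The classification step can be illustrated with a minimal stand-in: where Daphne uses a trained CNN over the request text, the sketch below simply scores keyword overlap against per-role vocabularies. The vocabularies are invented for illustration and are not Daphne's actual model.

```python
# Minimal stand-in for the role classifier. The deployed system uses a
# CNN over the request text; this sketch scores keyword overlap per role
# purely to illustrate the classification step (vocabularies are assumptions).
ROLE_VOCABULARY = {
    "EDL": {"entry", "velocity", "parachute", "landing", "mach", "scorecard"},
    "EO":  {"instrument", "orbit", "coverage", "revisit", "spectrometer"},
}

def classify_role(request):
    """Return the role whose vocabulary best overlaps the request tokens."""
    tokens = set(request.lower().replace("?", "").split())
    scores = {role: len(tokens & vocab) for role, vocab in ROLE_VOCABULARY.items()}
    return max(scores, key=scores.get)

print(classify_role("What was the entry velocity for MSL?"))  # EDL
```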
At the moment, all of the EDL questions are contained within
the EDL role. However, the EDL capability is not meant to be
a role like the Analyst or the Critic. Rather, the intention is to
be able to use the existing skills for EDL simulation datasets,
by simply modifying the sources of data. However, for this
first prototype, all EDL-related queries are addressed by the EDL role.
Following question classiﬁcation by type, Daphne searches
for the information requested in the query. JSON ﬁle
templates available in Daphne specify the name/value pairs
required to respond to a particular query and are used to
Figure 3. Current Daphne Architecture.
Figure 4. Daphne interface for EDL architecture analysis.
Figure 5. Daphne data extraction process.
search for the features requested. For the query “What
was the entry velocity for MSL?”, we want to extract two
features: mission name and parameter. Feature extractors
match the sentences to lists of known values for the requested
information. Daphne's implementation of the statistical model
provided by Sellers et al. accounts for mistakes
(e.g., typos) in the user's request. In this case, features are
extracted from the historical database in the query section of
the template. Finally, after features are extracted, results are
embedded into the template response. The response is then
returned to the user at the front end through voice or through
the visual response template.
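The typo-tolerant matching step can be approximated with standard fuzzy string matching. The sketch below scores single words and word pairs from the request against lists of known values; the lists and the use of `difflib` are assumptions for illustration, not the statistical model cited above.

```python
import difflib

# Illustrative lists of known values (assumptions, not Daphne's actual data).
KNOWN_MISSIONS = ["MSL", "Mars 2020", "Phoenix", "InSight"]
KNOWN_PARAMETERS = ["entry velocity", "entry mass",
                    "parachute deploy Mach number"]

def extract_feature(text, known_values, cutoff=0.6):
    """Match words and word pairs from the request against known values,
    tolerating typos via fuzzy string similarity (sketch)."""
    words = text.replace("?", "").split()
    spans = words + [" ".join(words[i:i + 2]) for i in range(len(words) - 1)]
    best, best_score = None, cutoff
    for span in spans:
        for value in known_values:
            score = difflib.SequenceMatcher(
                None, span.lower(), value.lower()).ratio()
            if score >= best_score:
                best, best_score = value, score
    return best

# Both the mission and the parameter survive the typos.
print(extract_feature("What was the entry velocty for MLS?", KNOWN_MISSIONS))  # MSL
```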
Although in the current implementation, all EDL analysis
capabilities are contained in a single role labeled EDL,
Daphne for EDL architecture analysis is currently a historian
and analyst of sorts. Furthermore, the current capabilities
help automate the task of manually searching for information
concerning previous EDL architectures as well as information
from EDL simulation datasets. Daphne can handle questions
regarding historical information, whether a particular
simulation satisfies performance requirements, and can provide
basic statistics on EDL parameters and metrics.
At the moment, Daphne for EDL does not make use of any
machine learning algorithms for evaluating an EDL architec-
ture. For EO, Daphne makes use of an architecture evaluation
algorithm, data mining capabilities, a genetic algorithm and
a clustering algorithm. As a part of future work, we plan on
incorporating capabilities for architecture evaluation and data
mining to obtain insight on features of interest in an EDL
simulation dataset.
The data sources described in the adaptation of Daphne for
EDL (the historical database and expert knowledge base) were
incorporated into the current Daphne architecture. Any
historical information is directed to the historical database,
whereas commands such as “calculate the landing ellipse”
are directed to the EDL scorecard template to extract the
equations required to analyze the entry problem at hand.
The MATLAB engine connected to Daphne is in charge of
performing the calculation requested and directing the result
back to Daphne. Furthermore, if the user is interested in
examining the scorecard for a particular simulation, upon
request, Daphne can create one using the template available.
With the scorecard stored in Daphne’s working memory, the
user can request information about simulation results and
whether they satisfy system requirements. For example, users
can ask what metrics in the scorecard are ﬂagged and Daphne
returns a list of the ﬂagged metrics and the simulation results
compared against the requirements.
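A minimal sketch of that check: requirement limits from a scorecard template are compared against simulation results, and metrics are reported as flagged or out of spec. The metric names, limits, and values below are illustrative, not actual Mars 2020 requirements.

```python
# Illustrative scorecard limits (not actual mission requirements).
SCORECARD_LIMITS = {
    "peak inflation axial load":    {"limit": 70.0, "flag": 60.0},
    "parachute deploy Mach number": {"limit": 2.2,  "flag": 2.0},
}

def check_scorecard(results):
    """Split simulation results into flagged metrics (near the limit)
    and out-of-spec metrics (beyond the limit)."""
    flagged, out_of_spec = {}, {}
    for metric, value in results.items():
        limits = SCORECARD_LIMITS.get(metric)
        if limits is None:
            continue
        if value > limits["limit"]:
            out_of_spec[metric] = value
        elif value > limits["flag"]:
            flagged[metric] = value
    return flagged, out_of_spec

flagged, oos = check_scorecard({"peak inflation axial load": 62.42,
                                "parachute deploy Mach number": 1.8})
print(flagged)  # {'peak inflation axial load': 62.42}
```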
6. CASE STUDY
Context and Goals for the Case Study
Landing site selection for interplanetary missions is largely
driven by science objectives. However, it is also
constrained by the system’s ability to land safely on the target
region. For the upcoming Mars 2020 mission, a team of
scientists narrowed down the list of candidate landing sites to
three: Columbia Hills, Northeast Syrtis, and Jezero Crater.
Engineering and science operations were the primary criteria
for this selection. A fourth site was added to the list later:
Midway (MDW). This landing site lies between Northeast
Syrtis (NES) and the Jezero (JEZ) crater and provides an
opportunity to collect high-content science data from both
sites.
In the case study, we will present the current capabilities of
Daphne for EDL architecture analysis using the simulation
for the Midway landing site. The work discussed in this
section represents a ﬁrst prototype of the CA and emphasizes
the automation capabilities of Daphne, as opposed to those
related to generating truly new insights. This case study will
also support the task of identifying capabilities that need to
be added to Daphne/EDL in the future.
Use Case Scenario
In the scenario presented, we assume that the expert is inter-
ested in examining the outputs of the Mars 2020 architecture
with Midway as its target landing site. This mission inherited
the MSL EDL architecture segments (Figure 6): guided
entry, parachute descent, powered descent, skycrane maneuver,
and flyaway. Furthermore, Mars 2020 has incorporated
terrain relative navigation (TRN) as a new EDL technology
that provides capabilities for hazard avoidance and landing accuracy.
For the sake of demonstrating some of the automation tasks
of Daphne, the dialogue presented assumes that the end user
is an expert in the ﬁeld of EDL. Hence, the dialogue presented
will follow the process a Simulator would likely follow.
Figure 7 illustrates a sample of the dialogue between Daphne
and the Simulator.
In this scenario, the user is interested in examining outputs of
a simulation. The Scorecard is used to rapidly examine all
relevant metrics for the mission. The EDL expert requests
Daphne to load the simulation ﬁle and generate a Scorecard
Figure 6. Mars 2020 EDL architecture.
for this landing site. Daphne returns to the front-end that the
simulation ﬁle has been loaded and that the Scorecard has
been generated. The scorecard generation task is achieved by
executing the corresponding scripts and templates the current
EDL teams use to create the scorecard.
With the scorecard stored in Daphne’s context, the expert
then asks Daphne what metrics in the Scorecard are ﬂagged.
Daphne returns a list of metrics that are ﬂagged along with
the values that require attention. Along the same lines, the
user can request to view which metrics are out of spec, i.e., do
not satisfy system requirements.
One way to further examine these metrics of interest, such as
the ﬂagged metric “peak inﬂation axial load”, is by means of
visualization. For example, the expert may request Daphne
to plot parachute full inﬂation load as a function of time for
full inﬂation. Daphne returns a scatter plot where the user is
given the option to hover over the data points to visualize
the detailed values and the case number depicted in that
data point. In this example, when the user hovers, Daphne
displays: “x: 794.33, y: 252,037.07, case: 2789”. Obtaining
such values is useful for further examining specific cases of
interest.
To obtain additional information about the metric of interest
at the moment (parachute maximum inﬂation load), the user
then asks Daphne for the statistics of the metric. Daphne
returns to the front end an interactive histogram and cumulative
distribution function for the user along with detailed statistics
(e.g. mean, min, max).
As stated in Section 2, simulation results are often compared
to other simulations as well as historical information on
relevant metrics. In this case, we assume the expert is
interested in comparing simulation results to those for the
NES landing site. Functions incorporated in Daphne allow
for experts to ask about the results of metrics from other
simulations. For example, the user can ask Daphne “for
the NES simulation ﬁle, calculate peak inﬂation axial load.”
This way the user does not have to go through the process
of generating a scorecard and search for the relevant metric.
Daphne responds to the user with “the peak inflation axial
load is 62.42 lbs.” If the scorecard is already available,
the user also has the option of asking for the values of
other metrics for the scorecard under examination: “from
the scorecard name, what are the POST results for parachute
deploy Mach number?” Daphne responds with the values
available. In some cases, more than one result is available.
As in this example, results for parachute deploy Mach number
are expressed in percentiles with multiple values, and all are
delivered to the end user.
Assuming that the expert is interested in comparing simula-
tion results of the parachute deploy Mach number to previous
missions, the user can ask Daphne directly. When the expert
asks “for MSL, what was the parachute deploy Mach number?”,
Daphne forwards the request to the EDL role, which directs
the query to the historical database.
Opportunities for Daphne EDL
Based on the use cases presented, there are several oppor-
tunities to improve Daphne’s capabilities. As observed in
Figure 7, Daphne can extract results from simulation datasets,
whether it is through MATLAB or through the scorecard, and
identify metrics that are ﬂagged or out of spec. The scatter
plot provides a visualization of the results and the user can
hover over outliers to obtain case numbers. At the moment,
Daphne only provides the case number. However, Daphne
can potentially record the case numbers a user selects for
further examination. For example, Daphne could load the
trajectories for the cases speciﬁed. The user could then visu-
alize these trajectories in an attempt to identify any abnormal
behavior. To further exploit the data available, Daphne can
aid the user in this task by making use of sensitivity analysis
and data mining techniques to provide insight on the driving
features for a particular system behavior.
Figure 7. Daphne use case: sample dialogue between a Simulator and the Daphne CA.
Figure 8. Daphne use case (continued).
Extending sensitivity analysis beyond the one-at-a-time
approach is particularly interesting given that other methods can
take into account the simultaneous variation of inputs. Global
variance-based sensitivity methods such as the Sobol’ method
are especially promising given that they allow full exploration
of the input space and account for variable interactions.
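As a concrete, simplified illustration, first-order Sobol' indices can be estimated with the Saltelli pick-and-freeze scheme. The pure-Python sketch below uses plain Monte Carlo sampling and a toy additive model; a production analysis would use a dedicated library and quasi-random sequences rather than this minimal estimator.

```python
import random

def sobol_first_order(model, n_inputs, n_samples=20000, seed=0):
    """Estimate first-order Sobol' indices for uniform [0,1) inputs
    using the Saltelli pick-and-freeze estimator (sketch)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_inputs)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_inputs)] for _ in range(n_samples)]
    fA = [model(x) for x in A]
    fB = [model(x) for x in B]
    mean = sum(fA) / n_samples
    var = sum((y - mean) ** 2 for y in fA) / n_samples
    indices = []
    for i in range(n_inputs):
        # A with column i replaced by B's column i ("pick and freeze").
        ABi = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
        fABi = [model(x) for x in ABi]
        s_i = sum(fb * (fab - fa)
                  for fb, fab, fa in zip(fB, fABi, fA)) / (n_samples * var)
        indices.append(s_i)
    return indices

# Toy additive model Y = 4*X1 + X2: analytically S1 = 16/17, S2 = 1/17.
s1, s2 = sobol_first_order(lambda x: 4 * x[0] + x[1], 2)
```

For the toy model the estimates should land near the analytic values, with X1 clearly identified as the dominant input.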
Explanation abilities of Daphne/EDL can be incorporated
using statistical learning techniques already available for
Daphne/EO. Daphne/EO uses Association Rule Mining
(ARM) techniques for knowledge extraction. As the name
suggests, ARM techniques find statistical associations be-
tween elements in a dataset and represent them in the form
of logical rules F → G, which are interpreted as “whenever
F is true, then G is also likely to be true”, where F and G
are any binary features, such as “entry mass being greater
than 1000 kg” or “having an out of spec on the amount of
fuel remaining at DSI”. The quality of these rules can be
assessed by means of importance measures such as support
and confidence. Support refers to the frequency of the rule's
applicability to a given dataset. Confidence measures the
frequency with which items in the dataset containing F also
contain G; in other words, how good a predictor F is of G.
Given that knowledge extraction by means of ARM comes
in the form of rules, knowledge is easily comprehensible for
the end-user. Data mining of one or multiple datasets can
identify patterns, and the cognitive assistant can deliver the
knowledge extracted in the form of several such rules.
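These two measures are straightforward to compute. The sketch below evaluates a rule F → G over a toy set of binarized Monte Carlo cases; the feature names and values are invented for illustration.

```python
# Toy Monte Carlo cases reduced to binary features (values illustrative).
cases = [
    {"entry_mass_gt_1000kg": True,  "fuel_out_of_spec": True},
    {"entry_mass_gt_1000kg": True,  "fuel_out_of_spec": True},
    {"entry_mass_gt_1000kg": True,  "fuel_out_of_spec": False},
    {"entry_mass_gt_1000kg": False, "fuel_out_of_spec": False},
]

def rule_quality(cases, f, g):
    """Support: fraction of cases where F and G both hold.
    Confidence: fraction of F-cases that also satisfy G."""
    n_f = sum(1 for c in cases if c[f])
    n_fg = sum(1 for c in cases if c[f] and c[g])
    support = n_fg / len(cases)
    confidence = n_fg / n_f if n_f else 0.0
    return support, confidence

support, confidence = rule_quality(cases,
                                   "entry_mass_gt_1000kg",
                                   "fuel_out_of_spec")
print(support, confidence)  # 0.5 0.666...
```

Here the rule holds in 2 of 4 cases (support 0.5), and 2 of the 3 heavy-entry-mass cases are out of spec on fuel (confidence 2/3).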
Finally, as seen in the second section of the use case, Daphne
can extract information across different simulations. To
enhance the analysis capabilities of Daphne, we envision the
system possessing the ability to compare metric values across
these simulations. For example, “how different is peak
inflation load in the MDW landing site vs. the one obtained for
NES?” Example expected answers include “the value of peak
inﬂation load for NES is signiﬁcantly greater than that for
MDW” or “the value of NES is not statistically signiﬁcantly
different from the one obtained in MDW”.
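One way such a comparison could be implemented is with a two-sample test. The sketch below uses Welch's t statistic with an illustrative fixed critical value rather than a proper t-distribution lookup, so it is a rough screening heuristic under stated assumptions, not a full hypothesis test; the sample values are invented.

```python
import math
import statistics

def welch_comparison(a, b, t_crit=2.0):
    """Compare a metric across two simulation datasets using Welch's
    t statistic; |t| above the (illustrative) critical value suggests
    a statistically significant difference."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    t = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    verdict = ("significantly different" if abs(t) > t_crit
               else "not statistically significantly different")
    return t, verdict

# Invented samples of a metric from two landing-site simulations.
nes = [252.1, 251.7, 252.9, 253.0, 252.4]
mdw = [248.0, 247.5, 248.8, 247.9, 248.3]
t, verdict = welch_comparison(nes, mdw)
print(verdict)  # significantly different
```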
7. CONCLUSIONS
In this paper, we introduced a CA for EDL architecture
analysis based on Daphne, an existing CA that specializes
in supporting the design of satellite constellations. The main
objective is to use Daphne as a platform for IDU technologies
for performance analysis of the next generation of EDL
architectures. Expanding Daphne’s capabilities will serve to
aid EDL design teams to rapidly evaluate EDL performance
metrics, identify high-information-content data, and improve
the uncertainty characterization of inferences by making
use of multiple sources of knowledge. With the current
implementation of Daphne, we have shown by means of a
case study that Daphne can handle EDL-domain data and can
reduce cognitive load of EDL experts by providing relevant
information about the data under examination in a timely
manner. Daphne automates many of the steps required to
obtain such information. For example, we demonstrated that
Daphne can load simulation data sets for the user, provide
values, statistics, and calculations of variables of interest.
In addition, Daphne can provide visualizations of the data.
Daphne can also generate a scorecard upon request and can
search for historical data on past EDL architectures through
the EDL database.
The ability to communicate through natural language by
means of text or verbal requests makes Daphne a good
candidate for being employed in a team setting. For example,
experts discussing multiple simulations can turn to Daphne
for searching any information that is not readily available
in their results packet. However, because Daphne for EDL
architecture analysis is still in its early stages, the information
extracted does not necessarily provide any truly new insights
on the effect of variables and/or combinations of variables
in the metric values, for example. In other words, Daphne
tells us what we already know or could easily know, but
with less work to obtain this information. The next steps
are to incorporate analysis capabilities to Daphne for EDL
architecture analysis and try to obtain new information that
“we really don’t know”. Future work aims to address this task
by incorporating sensitivity analysis and machine learning
techniques.
ACKNOWLEDGMENTS
The authors would like to thank the NASA Science and
Technology Research Fellowship (NSTRF) for funding this
work. The authors would also like to thank Antoni Viros,
the main developer of Daphne, for his support with adapting
Daphne to the EDL domain.
REFERENCES
 R. D. Braun and R. M. Manning, “Mars Exploration Entry,
Descent, and Landing Challenges,” Journal of Spacecraft
and Rockets, vol. 44, no. 2, pp. 310–323, 2007.
 S. A. Striepe, D. W. Way, A. M. Dwyer, and J. Balaram,
“Mars Science Laboratory Simulations for Entry, De-
scent, and Landing,” Journal of Spacecraft and Rockets,
 D. W. Way, J. L. Davis, and J. D. Shidner, “Assessment
of the Mars Science Laboratory entry, descent, and
landing simulation,” in Advances in the Astronautical
Sciences, vol. 148, 2013, pp. 563–581.
 S. J. Kapurch, “NASA Systems Engineering Hand-
book,” NASA Special Publication, 2007.
 M. K. Lockwood, R. W. Powell, K. Sutton, R. K.
Prabhu, C. A. Graves, C. D. Epp, and G. L. Carman,
“Entry conﬁgurations and performance comparisons for
the mars smart lander,” Journal of spacecraft and rock-
ets, vol. 43, no. 2, p. 258, 2006.
 G. Wells, J. Laﬂeur, A. Verges, K. Manyapu, J. Chris-
tian, C. Lewis, and R. Braun, “Entry, descent, and land-
ing challenges of human mars exploration,” in Advances
in the Astronautical Sciences, 2006.
 Z. Ghahramani, “Probabilistic machine learning and
artiﬁcial intelligence,” 2015.
 C. I. Restrepo and J. E. Hurtado, “Tool for rapid analysis
of monte carlo simulations,” Journal of Spacecraft and
Rockets, vol. 51, no. 5, pp. 1564–1575, 2014.
 E. Baumann, C. Bahm, B. Strovers, R. Beck, and
M. Richard, “The x-43a six degree of freedom monte
carlo analysis,” in 46th AIAA Aerospace Sciences Meet-
ing and Exhibit, 2008, p. 203.
 P. Williams, “A monte carlo dispersion analysis of the
x-33 simulation software,” in AIAA Atmospheric Flight
Mechanics Conference and Exhibit, 2001, p. 4067.
 D. Petrick, A. Geist, D. Albaijes, M. Davis, P. Spara-
cino, G. Crum, R. Ripley, J. Boblitt, and T. Flatley,
“SpaceCube v2.0 space ﬂight hybrid reconﬁgurable data
processing system,” in IEEE Aerospace Conference
 S. Chien, B. Cichy, A. Davies, D. Tran, G. Rabideau,
R. Castano, R. Sherwood, D. Mandl, S. Frye, S. Shulman,
J. Jones, and S. Grosvenor, “An autonomous earth-observing
sensorweb,” 2005.
 P. Maes, “Agents that reduce work and information
overload,” in Readings in Human–Computer Interac-
tion. Elsevier, 1995, pp. 811–821.
 Z. Ghahramani, “Probabilistic machine learning and
artiﬁcial intelligence,” Nature, vol. 521, no. 7553, p.
 D. Schum, G. Tecuci, D. Marcu, and M. Boicu, “Toward
Cognitive Assistants for Complex Decision Making
Under Uncertainty,” Intelligent Decision Technologies,
vol. 8, no. 3, pp. 231–250, 2014.
 R. Nayak, “Intelligent data analysis: Issues and
challenges,” in 6th World Multi Conferences on
Systemics, Cybernetics and Informatics, 2002. [Online].
 H. Bang, A. Viros, A. Prat, and D. Selva, “Daphne : An
Intelligent Assistant for Architecting Earth Observing
Satellite Systems,” in AIAA SciTech 2018, 2018.
 E. Özyurt and B. Döring, “A Cognitive Assistant for
Supporting Air Target Identiﬁcation on Navy Ships,”
IFAC Proceedings Volumes, 2012.
 R. Onken and A. Walsdorf, “Assistant systems for air-
craft guidance: Cognitive man-machine cooperation,”
Aerospace Science and Technology, 2001.
 S. A. Wilkins, “Examination of pilot beneﬁts from cog-
nitive assistance for single-pilot general aviation opera-
tions,” in Digital Avionics Systems Conference (DASC),
2017 IEEE/AIAA 36th. IEEE, 2017, pp. 1–9.
 G. Tokadlı and M. C. Dorneich, “Development of a
functionality matrix for a cognitive assistant on long
distance space missions,” in Proceedings of the Hu-
man Factors and Ergonomics Society Annual Meeting,
vol. 61, no. 1. SAGE Publications Sage CA: Los
Angeles, CA, 2017, pp. 247–251.
 D. W. Way, R. W. Powell, A. Chen, A. D. Steltzner,
A. M. San Martin, P. D. Burkhart, and G. F. Mendeck,
“Mars science laboratory: Entry, descent, and landing
system performance,” in Aerospace Conference, 2007
IEEE. IEEE, 2007, pp. 1–19.
 J. A. Grant, M. P. Golombek, J. P. Grotzinger, S. A.
Wilson, M. M. Watkins, A. R. Vasavada, J. L. Griffes,
and T. J. Parker, “The science process for selecting the
landing site for the 2011 mars science laboratory,” in
Planetary and Space Science, 2011.
 J. A. Grant, M. P. Golombek, S. A. Wilson, K. A. Farley,
K. H. Williford, and A. Chen, “The science process
for selecting the landing site for the 2020 mars rover,”
Planetary and Space Science, 2018.
 A. Saltelli and S. Tarantola, “On the Relative Impor-
tance of Input Factors in Mathematical Models,” Jour-
nal of the American Statistical Association, 2002.
Samalis Santini received her B.S. in Mechanical
Engineering from the University of
Puerto Rico at Mayaguez in 2016
and her M.S. in Aerospace Engineering
at Cornell University in 2018. She is
currently a Ph.D. graduate student at
Texas A&M University in the Aerospace
Engineering department and is a part of
the Systems Engineering, Architecture,
and Knowledge (SEAK) Lab. Her inter-
ests are the application of statistical learning and knowledge
extraction techniques for architecture analysis of Entry, De-
scent, and Landing (EDL) Systems.
Daniel Selva is an Assistant
Professor of Aerospace Engineering at
Texas A&M University, where he directs
the Systems Engineering, Architecture,
and Knowledge (SEAK) Lab. His re-
search interests focus on the applica-
tion of knowledge engineering, global
optimization and machine learning tech-
niques to systems engineering and ar-
chitecture, with a strong focus on space
systems. Before doing his PhD at MIT, Daniel worked
for four years in Kourou (French Guiana) as an avionics
specialist within the Ariane 5 Launch team. Daniel has
a dual background in electrical engineering and aerospace
engineering, with degrees from MIT, Universitat Politecnica
de Catalunya in Barcelona, Spain, and Supaero in Toulouse,
France. He is a member of the AIAA Intelligent Systems
Technical Committee, and of the European Space Agency’s
Advisory Committee for Earth Observation.
David Way is an Aerospace Engineer in
the Atmospheric Flight and Entry Sys-
tems Branch at the NASA Langley Re-
search Center. His area of expertise
is ﬂight mechanics and modeling and
simulation of planetary entry systems.
Dr. Way has a Ph.D. and M.S. in
Aerospace Engineering from the Geor-
gia Institute of Technology and a B.S. in
Aerospace Engineering from the United
States Naval Academy, Annapolis.