A Cognitive Assistant for Entry, Descent, and Landing
Architecture Analysis
Samalis Santini De León
Department of Aerospace Engineering
Texas A&M University
College Station, TX 77843
787-463-5861
ssantini@tamu.edu
Daniel Selva
Department of Aerospace Engineering
Texas A&M University
College Station, TX 77843
979-458-0419
dselva@tamu.edu
David W. Way
NASA Langley Research Center
Hampton, VA 23666
757-864-8149
david.w.way@nasa.gov
Abstract—Entry, Descent, and Landing (EDL) architecture performance and uncertainty analysis relies heavily on end-to-end simulation, given that EDL system verification and validation is limited in Earth environments. Overall system assessment and success criteria evaluation are performed by employing Monte Carlo dispersion analysis. These simulations produce large data sets that subject matter experts analyze manually, trying to identify correlations between parameters and assessing the sensitivity of figures of merit to simulation parameters. Such analysis work is critical, given that it could lead, for example, to the discovery of major flaws in a design. While subject matter experts can leverage their knowledge and expertise with past systems to identify issues and features of interest in the current dataset, the next generation of EDL systems will make use of new technologies to address the issue of landing larger payloads, and may present unprecedented challenges that a human analyst may miss.
In this paper, we suggest integrating Daphne, a cognitive assistant, into the process of EDL architecture analysis to support EDL experts by identifying key factors that impact EDL system metrics. Specifically, this paper describes the current capabilities of Daphne as a platform for EDL architecture analysis by means of a case study of a sample EDL architecture for an ongoing NASA mission, Mars 2020. Given that the work presented in this paper is in its early development, the paper focuses on the description of the expert knowledge base and historical database developed for the cognitive assistant, as well as on describing how experts can use it to obtain information relevant to their EDL analysis process by means of natural language or web visual interactions, thus reducing the effort of searching for relevant information from multiple sources.
TABLE OF CONTENTS
1. INTRODUCTION
2. BACKGROUND
3. DAPHNE
4. ADAPTING DAPHNE FOR EDL ARCHITECTURE ANALYSIS
5. DAPHNE/EDL ARCHITECTURE
6. CASE STUDY
7. CONCLUSIONS
ACKNOWLEDGMENTS
REFERENCES
BIOGRAPHY
1. INTRODUCTION
Entry, Descent, and Landing (EDL) consists of a series of
events and maneuvers required to land a payload, or vehicle,
on a planet and it is one of the most challenging phases in an
interplanetary mission. On Mars, EDL becomes increasingly
challenging given that the Martian atmosphere is roughly one
hundred times less dense than Earth’s atmosphere. Hence,
EDL systems on Mars must decelerate from hypersonic to
subsonic speeds at low altitudes, reducing the time avail-
able for subsequent events in the sequence to occur [1].
Because replicating the Martian environment is unfeasible,
EDL architecture analysis requires analyzing an umbrella of
architectures with high fidelity simulations to assess perfor-
mance, cost, schedule, and risk under uncertainty [2]. In
addition to the limitations of Earth-based testing, Mars EDL
trajectories are highly coupled to major sources of uncertainty
that include, but are not limited to, vehicle aerodynamics;
launch window; and atmospheric conditions during day-of-
entry events. NASA uses the Program to Optimize Simulated
Trajectories (POST-2) to simulate different entry conditions
under many model parameterizations (e.g. gravity, planetary
geometry, atmospheric, aerodynamic, control system, guid-
ance, and navigation models). POST-2 uses Monte Carlo
dispersion analysis techniques to help users evaluate perfor-
mance, assess mission-level feasibility, identify off-nominal
behavior, and support system design trades, among other
capabilities [3].
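To make the mechanics of such a dispersion analysis concrete, the following Python sketch shows the general pattern: sample dispersed inputs, run each case through a trajectory model, and collect statistics of the outputs. The input distributions, the run_trajectory stand-in, and all numbers are notional illustrations, not POST-2 models.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N_CASES = 8001  # notional case count

def run_trajectory(entry_velocity, entry_fpa, density_scale):
    """Placeholder for one trajectory run: maps inputs to notional figures of merit."""
    peak_decel = 0.002 * entry_velocity ** 2 * density_scale / 1e3   # notional
    deploy_alt = 12.0 + 40.0 * (entry_fpa + 15.5) - 2.0 * (density_scale - 1.0)  # km
    return peak_decel, deploy_alt

results = []
for case in range(N_CASES):
    # Disperse inputs around nominal values (notional distributions)
    v = rng.normal(5800.0, 20.0)      # entry velocity, m/s
    fpa = rng.normal(-15.5, 0.1)      # entry flight path angle, deg
    rho = rng.lognormal(0.0, 0.05)    # atmospheric density scale factor
    results.append((case, *run_trajectory(v, fpa, rho)))

peak_decel = np.array([r[1] for r in results])
print(f"peak deceleration, 99th percentile: {np.percentile(peak_decel, 99):.2f}")
```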
POST-2 based Monte Carlo dispersion analyses are employed
as early as Pre-phase A of a mission’s lifecycle. During
this phase, engineers evaluate simple models for conceptual
design studies to identify feasible system concepts, evaluate
alternative architectures, and draft system-level requirements
[4]. Due to the inherent simplicity of the models and system-
level definition of the vehicle used in these early stages
of a mission’s lifecycle, analysis of data produced in these
simulations is relatively simple. However, as a mission life-
cycle progresses up to day-of-entry events, system and model
complexity increase, and so does the size and complexity of
the EDL Monte Carlo simulations [2]. For the Mars Science
Laboratory (MSL), simulations sampled hundreds of input
variables and produced thousands of scenarios [5].
The large datasets produced in simulations are manually
analyzed by the subject matter experts, who try to find inter-
esting correlations and couplings between parameters, and to
assess the sensitivity of figures of merit to various simulation
parameters. This analysis work is important since it may lead
to the discovery of a major flaw in a design, for example.
However, the current approach suffers from one important
limitation. Whereas the subject matter expert can leverage
her or his knowledge and expertise on past systems to identify
issues and features of interest in a dataset, the next generation
of EDL systems for landing heavier payloads on the Martian
surface may present unprecedented challenges [6].
Due to the inherent limitations of expert-based analysis, we
believe EDL architecture analysis can benefit from compu-
tational advances to reduce analysis cycle time, minimize
architecture lifecycle costs, and achieve mission success. In
particular, we are interested in incorporating Intelligent Data
Understanding (IDU) technologies (e.g., machine learning)
into the architecture analysis process. However, one of the
limitations of most IDU technologies is poor interactivity
with the user. Many machine learning models and feature
extraction algorithms work essentially as black boxes, which
implies two things: (1) it is hard to interpret and thus trust
their outputs; (2) it is hard to incorporate expert knowledge
into their learning process. In addition, these technologies
often provide more information than the end-user deems relevant or can absorb or make sense of [7]. These
shortcomings can be at least partially overcome with ad-
vanced user-interaction capabilities that allow the subject
matter expert to be in the loop. In this paper, we discuss
one such technology, namely a cognitive assistant (CA) that
can support human-machine interaction specific to the EDL
architecture analysis domain. Specifically, the paper focuses
on the implementation of the knowledge sources into the CA
and the automation and user-interactivity aspects of the CA.
The long-term goal of this work is to advance the state of
the art of offline IDU technologies for architecture analysis
of EDL by incorporating an intelligent assistant that helps
the subject matter expert analyze complex architectures and
communicates critical issues. We want this system to be
able to extract information by means of data-driven and
expert-based knowledge discovery techniques. Ultimately,
we wish to enable mixed-initiative approaches in intelligent
data understanding. This paper describes our first steps in
this direction.
The remainder of the paper is structured as follows. Section
2 provides an overview of the EDL architecture analysis
process, its challenges, and the rationales for exploring IDU
technologies. Section 3 introduces Daphne, a cognitive assis-
tant developed for Earth-observing satellite architecting prob-
lems. Section 4 describes how Daphne has been adapted and
extended for EDL architecture analysis. Section 5 describes
the current Daphne/EDL architecture. Section 6 presents a
use-case scenario for utilizing Daphne for EDL architecture
analysis. Finally, Section 7 describes the current limitations
of the existing implementation and the plans for future work.
2. BACKGROUND
The EDL Architecture Analysis Process
High-level decisions made during the creation and definition
of a system architecture are critical given that they com-
mit most of the system’s lifecycle costs and they define
the system’s behavior, complexity, and emergent properties
(e.g. robustness, scalability, flexibility, reliability). However,
this task becomes increasingly complex for EDL. Due to
the inherent limitations of Earth-based testing, Mars EDL
architecture analysis for NASA missions has relied up to
the present day on computer simulations to gain insight into
a variety of complex entry problems. These simulations
are constructed from a library of deterministic models that
have been refined throughout the years to support different
vehicle systems (e.g. Space Shuttle, Mars Pathfinder, MSL).
These include but are not limited to vehicle-specific models
(e.g. aerodynamics, control system, guidance and navigation
models); and planetary-specific environment models (e.g.
gravitational, planetary geometry, atmospheric models) [2].
Due to the deterministic nature of the models available in
POST-2, high fidelity POST-2-based Monte Carlo analysis is
critical for supporting system design, integration and oper-
ations throughout a mission’s lifecycle. Simulation results
help identify areas of risk associated with certain mission
phases that result from randomly varying entry conditions
(e.g. entry interface, atmospheric conditions) and varying
vehicle configurations (e.g. lift-to-drag ratio, entry flight
path angle) and help quantify the robustness of a given EDL
architecture.
Figure 1 shows an overview of the Monte Carlo analysis process followed each time an EDL architecture simulation is conducted. Simulation results contain 8001 cases. Each case is one vehicle configuration that results from hundreds of randomly generated model parameters and contains thousands of output parameters. Individually examining the values and statistics of thousands of output parameters is time-consuming and prone to missing relevant features. Consequently, NASA uses a Scorecard to summarize the outputs of a particular simulation for comparison against project performance metrics. The scorecard is a summary report that describes mission-specific system performance metrics, the main simulation results (e.g., percentiles, means), threshold values, and whether the results satisfy the system requirements. EDL teams can examine the scorecard, identify the metric requirements that are not being satisfied, and explore specific cases that might be contributing to a particular system behavior. Simulation experts also examine the packages of plots and identify potential outliers; for example, interesting cases to look at would be points that fall outside the landing ellipse. This second step, however, often requires examining hundreds of plots, and potentially hundreds of statistics of individual variables, in an attempt to identify all driving features. For the selected cases (such as flagged or out of spec), experts often plot the trajectories in an attempt to explain the system's behavior.
Several common questions arise during this process. Often these questions concern identifying the source or sources of a particular observed behavior, or why case Y behaves differently from case Z, for example. To answer these questions, experts must navigate through large datasets with the objective of identifying potential features of interest and commonalities between cases, a task that is extensive and time consuming. More specifically, all of the tasks enclosed by a red rectangle in Figure 1 require that experts leverage their knowledge and expertise to manually identify features of interest and critical parameters in a dataset. There is no prescription for the analysis of such complex simulation results and, unfortunately, this task often relies on the team's expertise with the system under study [8]. This research seeks to help experts answer these questions.
Figure 1. EDL Architecture verification and validation process [2].

For fully integrated six degrees-of-freedom vehicle
simulations in other domains of application, the procedure is similar to the one described. To examine simulations, experts
typically: examine the statistics of simulation outputs (e.g.
peak deceleration and altitude at which peak deceleration
occurs); attempt to identify sensitivities of outputs to inputs;
and identify cases that fail to satisfy system requirements [9],
[10]. For example, experts examine percentiles of key performance parameters to verify that X percent of all cases satisfy the system requirements (e.g., timeline
margin, fuel consumption). Identifying sensitivities on the
other hand, is often achieved by means of scatter plots of the
output variables against individual input parameter distribu-
tions. This process would have to be repeated for each input-
to-output relationship the expert is interested in.
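As a minimal illustration of this percentile check, the sketch below verifies that the 99th percentile of a figure of merit stays within a requirement threshold; the metric, threshold, and data are hypothetical.

```python
import numpy as np

def check_requirement(samples, threshold, percentile=99.0):
    """Verify that the given percentile of Monte Carlo samples stays
    below a requirement threshold (hypothetical pass/fail criterion)."""
    value = np.percentile(samples, percentile)
    return value, value <= threshold

# e.g., require the 99th percentile of propellant used to stay under 350 kg
propellant_used = np.random.default_rng(1).normal(300.0, 15.0, size=8001)
value, ok = check_requirement(propellant_used, threshold=350.0)
print(f"99th percentile = {value:.1f} kg -> {'PASS' if ok else 'FAIL'}")
```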
The most common technique employed for sensitivity anal-
ysis in the EDL domain is the “one-at-a-time” approach. In
other words, sensitivities are calculated by varying a single
input variable (or in this case, a set of related variables) in
a model while maintaining all other inputs of all models at
their nominal values. For example, to assess the sensitivity
of aerodynamics in the POST-2 trajectory analysis, a Monte
Carlo simulation is executed with uncertainty in aerodynam-
ics model parameters while maintaining all other models
(e.g., gravitation, entry conditions) at their nominal values.
This process is repeated for all models, and sensitivities to
each model are compared by means of scatter plots and
3σ dispersion analysis. Although this approach is straight-
forward, it fails to identify dependencies and interactions
between input variables.
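A minimal sketch of this one-at-a-time procedure follows; the model groups, the simulate stand-in, and the dispersion magnitudes are hypothetical placeholders for POST-2 model parameterizations.

```python
import numpy as np

rng = np.random.default_rng(2)
GROUPS = ["aerodynamics", "atmosphere", "entry_state"]  # hypothetical model groups

def simulate(scales):
    """Stand-in for one simulation case: maps model scale factors to a metric."""
    return (100.0 * scales["aerodynamics"] + 50.0 * scales["atmosphere"]
            + 20.0 * scales["entry_state"])

def oat_dispersion(group, n=2000):
    """Monte Carlo with uncertainty only in `group`; all other models nominal."""
    outputs = []
    for _ in range(n):
        scales = {g: 1.0 for g in GROUPS}       # all models at nominal values
        scales[group] = rng.normal(1.0, 0.1)    # disperse only this model group
        outputs.append(simulate(scales))
    return np.asarray(outputs)

for group in GROUPS:
    y = oat_dispersion(group)
    print(f"{group:>12}: 3-sigma dispersion = {3 * y.std():.2f}")
```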
Intelligent Data Understanding Technologies
Up to the present day, IDU technologies have been commonly
employed to support mission operations, but their use in
performance analysis during mission development has not
been explored. In particular, these technologies have been
employed by NASA for on-board data processing and anal-
ysis capabilities. Some of the advantages of IDU for these
applications include automated detection of events (e.g., an
anomaly in a telemetry sensor, or the presence of a fire in an
image of a forest), and automated on-board intelligent actions
responding to those events (e.g., switching the spacecraft to
safe mode, or slewing the spacecraft to observe the fire for
a longer time). The state of the art for these technologies is
set by NASA's SpaceCube 2.0 and NASA's Earth Observing 1 (EO-1) spacecraft. The SpaceCube 2.0 possesses data
reduction and on-board processing capabilities that provide
the system with first-responder real-time awareness [11].
Similarly, NASA's EO-1 conducts on-board planning and
scheduling that enables the system to detect science targets
and change the plan accordingly [12].
Another advance in automated intelligent data understanding
technologies that is outside of the space domain but perhaps
more relevant to our efforts is the Automated Statistician
(http://www.automaticstatistician.com). This tool seeks to
automate all aspects of understanding and explaining data.
The Statistician’s focus is to build models from the data
provided, generate explanations and deliver the knowledge
extracted to the user in the form of a report in natural
language.
One drawback of such approaches is that the end user has to
navigate through a large amount of automatically generated
information to find aspects of the data she/he is interested
in, potentially resulting in information overload [13]. We
argue that tools such as the Automated Statistician can benefit
from more interactivity with the user, for example to allow
him or her to interactively specify the type of information,
family of models, or region of interest in a dataset for further
analysis. The interaction between the human and the tool can
thus be enhanced by means of cognitive assistants with ad-
vanced dialogue and interactive capabilities. Others have also
emphasized the potential of human-in-the-loop data mining
tools to improve the process of knowledge extraction [14].
Cognitive Assistants
Cognitive assistants have been explored as a viable platform
to provide decision-making support to experts in the face of
uncertainty. In addition, they provide one of the capabilities
IDU technologies lack: advanced user interactivity. Unlike
other artificial intelligence tools and applications, CAs can
obtain domain-specific knowledge in ways that follow a
teacher-apprentice approach[15]. Hence, a CA can learn
from“rules of thumb” of dos and don’ts for a domain-specific
application. However, they can still exploit AI and data analy-
sis techniques that conform the essence of IDU technologies,
to quantify the probabilities and states of a particular decision
[16]. A CA can be useful for identifying features of interest in
a design; analyzing and communicating the findings to team
members; providing historical or contextual information; and
more generally reducing cognitive load on the team members
[17]. In the context of EDL, CAs can help experts identify
anomalies, features of interest, and extract knowledge that
could potentially not be attainable by manually examining
a simulation data set of the architecture under evaluation,
for example. To exploit the interactivity that characterizes CAs, EDL teams can specify to the assistant which aspects of the data they are interested in and what type of analysis to conduct.
At the moment, CAs in the aerospace domain have been
mostly created with the intent of providing situational aware-
ness and subsequent operational decisions making tasks. For
example, COGAS, a CA, supports NAVY ships in air target
identification. COGAS makes use of sensor information and
a-priori expert knowledge contained in their models (e.g.,
operator activities in their work domain) to process acquired
data, identify and analyze the system’s state, establish system
goals, and activate the appropriate procedure [18]. Other CAs
within this application domain include the Crewed Assistant
Military Aircraft (CAMA) and the Digital Copilot [19], [20].
Along these lines, interest has arisen in integrating CAs for
supporting astronaut crew during missions beyond Low Earth
Orbit (LEO), especially in off-nominal conditions when there
is a long communication delay between Earth and the space
vehicle. As in the previous examples, a CA for space crew support, with some level of automation, should
have the capabilities to diagnose a problem, provide recom-
mendations to the crew during emergency situations based
on previous knowledge, evaluate the diagnoses, perform risk
trade-offs, and evaluate and generate procedures [21].
3. DAPHNE
Daphne is a CA that specializes in system architecting prob-
lems for Earth observing (EO) satellite systems. The main
goal of Daphne is to help experts in the architecture analysis
process by providing relevant information, advice, and feed-
back that address strengths and weaknesses of a particular
design [17]. These capabilities help minimize the cognitive
load on experts by reducing the need to manually search
through multiple sources of information.
Daphne Architecture
Figure 2 shows Daphne's architecture for EO satellite archi-
tecture analysis, consisting of four layers. The first layer, the
front end, serves as a platform for the user to interact and
communicate with Daphne. Requests made either in natural
language or through the web visual interface are passed on to
Daphne's brain, the server, where the request is forwarded
to the respective back-end modules. After the information
requested is retrieved from the back-ends, the Daphne server
returns the response to the user through the web interface.
Questions or requests made to Daphne are processed through
HTTP or Websocket requests. Questions made in natural
language form are directed to the Sentence Processor, which
makes use of a Convolutional Neural Network (CNN) to
classify the question, which is then directed to the “skill” required to
answer that particular type of question. For example, a ques-
tion such as “what missions were launched in 2018?” would
be classified as a question for the Historian skill. Each “skill”,
or role, makes use of multiple algorithms and knowledge
extraction techniques at the back-end that extract knowledge
from the data sources available. Knowledge available in
Daphne is stored in three primary data sources: a historical
database, an expert-knowledge base, and the current dataset.
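The classification-and-routing step can be pictured as a dispatch table from a question label to a skill handler, as in the sketch below. The classify stub stands in for the CNN described above, and the handler names are hypothetical, not Daphne's actual code.

```python
def classify(question: str) -> str:
    """Keyword stub standing in for Daphne's CNN question classifier."""
    return "historian" if "mission" in question.lower() else "analyst"

def historian_handler(question: str) -> str:
    return "searching the historical database ..."           # placeholder back-end call

def analyst_handler(question: str) -> str:
    return "computing statistics on the current dataset ..."  # placeholder back-end call

SKILLS = {"historian": historian_handler, "analyst": analyst_handler}

def answer(question: str) -> str:
    # Route the classified question to the skill that can answer it
    return SKILLS[classify(question)](question)

print(answer("What missions were launched in 2018?"))  # routed to the historian skill
```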
Daphne Skills—Daphne has four primary capabilities: 1) the
Analyst, 2) the Critic, 3) the Historian, and 4) the Explorer.
The analyst is in charge of answering questions about the
design under analysis. The critic skill takes the proposed
design and provides feedback to the users about the strengths
and weaknesses of the design. In addition, it provides
suggestions on how to improve the design. Critiques come
from the information available in the Expert Knowledge Base
in the form of rules of thumb (rule-driven), from a historical
database of past missions (legacy-driven), or from the cur-
rent dataset (data-driven). The Historian provides historical
information on previous missions and can be used during the
design process to check whether selected parameters and the
design being evaluated are similar to those of past designs.
Example questions include “What is the most common orbit
for ice cloud detection?”. Finally, the Explorer executes
a genetic algorithm in the background. As Daphne finds
solutions that improve the current Pareto front, Daphne asks the user whether she/he wants these solutions added to the current dataset.
4. ADAPTING DAPHNE FOR EDL
ARCHITECTURE ANALYSIS
This section describes how Daphne’s capabilities have been
extended to the EDL architecture analysis domain and how
Daphne can be of use for the EDL architecture analysis
process. Defining a complex system such as a satellite
or planetary mission requires end-to-end simulation models
to simulate the system’s behavior and interactions with its
surroundings to a sufficient level of detail that enables experts
to quantify the system’s performance. However, given that
the performance metrics of interest in an EDL architecture
analysis problem (e.g. altitude performance, peak entry
environments) and nature of the simulation (6-DOF trajec-
tory) are distinct from those in the Earth observing satellite
architecture problem, it was necessary to incorporate EDL
data/knowledge sources. These include:
1. A historical database containing system performance met-
rics of past EDL missions.
2. An expert knowledge base that contains rules of thumb
and analysis criteria for EDL.
Historical Database
The EDL historical database was created to provide subject
matter experts with information about previous EDL mis-
sions. However, unlike for Earth observing satellite missions,
there is no online database of previous Mars EDL missions to
support the coordination of EDL architecture analysis for fu-
ture planetary missions. Furthermore, creating a database in
the EDL domain is challenging due to the number of variables
involved in these complex multibody vehicle systems. Conse-
quently, we established two requirements for the implemen-
tation of the EDL database. First, the database shall contain
descriptive information about mechanisms employed during
the EDL sequence of each mission. Some of these are, for
example, type of entry (direct/orbit), entry lift control (center-
of-mass offset/no offset), entry guidance (unguided/guided),
and descent attitude control (RCS roll rate/none), among
others. Such information can provide experts with contextual
data when examining metrics of different architectures. And
second, the database should contain information that is shared
across EDL architectures. This consideration is driven by the
fact that limited information is available from past missions
and that comparison across missions can only be achieved
if different vehicle systems can be described using com-
mon performance metrics. For example, although different
missions have employed different mechanisms for entry lift
control, common performance metrics include peak deceler-
ation and peak heat rate, among others. Thus, the resulting
database contains system performance metric drivers that can
be traced to level-1 requirements shared across missions.
These EDL system performance drivers are captured by six
overarching themes depicted in Table 1 and described in Ref. [22]: altitude performance, range to target, time on radar,
peak entry environments, wind sensitivity, and propellant use.
Expert Knowledge Base
The Scorecard discussed in Section 2 was used as the expert
knowledge base for the EDL skill given that it provides a
standardized knowledge repository that is shared among all
EDL groups. The scorecard provides a dictionary between
natural language-form descriptions and mathematical models
Daphne can make use of for analysis and calculations. For
example, the metric described as fuel consumption contains
a fixed number of entries, each containing the flag and out
of spec values, units, description of the metric, the POST-
2 results, and the calculation required to obtain the metric
value.

Figure 2. Daphne Architecture for EO.

In addition, it contains thresholds and conditions that
can be translated into “if-then” rules that quickly identify mission-specific requirements that are not satisfied, or close to not being satisfied.
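A minimal sketch of how a scorecard entry might be translated into such a rule is shown below; the field names and thresholds are hypothetical, not the actual scorecard format.

```python
# Hypothetical scorecard entry: thresholds that trigger a flag or an
# out-of-spec condition for one metric (names and numbers invented).
entry = {
    "metric": "fuel consumption",
    "units": "kg",
    "flag": 320.0,          # value that warrants attention
    "out_of_spec": 350.0,   # value that violates the requirement
}

def evaluate(entry, value):
    """Apply the if-then rule implied by the scorecard thresholds."""
    if value >= entry["out_of_spec"]:
        return "OUT OF SPEC"
    if value >= entry["flag"]:
        return "FLAGGED"
    return "OK"

print(evaluate(entry, 335.0))  # -> FLAGGED
```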
Tailoring Daphne to EDL architecture analysis needs
Types of Information of Interest in the Context of EDL—
Establishing the types of information subject matter experts deem relevant during the EDL architecture analysis process was done through two lines of investigation: literature reviews and discussions with one of the authors, an EDL expert. Knowledge and information extracted during this process were used to develop a set of questions and commands to which Daphne should respond in order to assist the experts.
The survey of techniques employed for the analysis of Monte Carlo simulation outputs discussed in Section 2 suggested that subject matter experts first seek to acquire a sense of the statistics of variables of interest and their sensitivities [9], [10]. Experts will then be inclined to
identify stressing cases (e.g. flags and out of spec) for
further investigation. In the process of identifying features of
interest, the analysis is driven by comparing stressing cases
to nominal cases in an attempt to identify commonalities and
differences between them. During this process, experts make
use of visual aids (e.g. variable plots and statistical plots)
and conduct extensive search of the dataset for identifying
distinctive features that explain the system’s behavior.
From the frequent discussions held with an expert in EDL end-to-end simulation analysis, we generated a set of preliminary question types (QT) and actions (AC) that emerge during the analysis process (a minimal sketch of how such queries might be served appears after the list):
QT: What are the statistics (e.g. mean, min, max, 99%-tile)
of parameter X ?
QT: Is parameter X correlated with parameter Y ?
QT: How is the result from mission/simulation A different
from mission/simulation B ?
QT: Why is case X failing ?
QT: What do cases A to C have in common ?
AC: Find the value of parameter X for a mis-
sion/simulation.
AC: Plot statistics (e.g. histogram, quad-quad plot, CDF).
AC: Plot parameter X vs. parameter Y.
AC: Identify a stressing case.
AC: Plot the evolution of a parameter over time, possibly
across missions.
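As an illustration of how the first two question types might be served from a Monte Carlo output array, consider the sketch below; the helper names and synthetic data are ours, not Daphne's actual implementation.

```python
import numpy as np

def parameter_statistics(data):
    """Answer QT 'What are the statistics of parameter X?' for one column
    of Monte Carlo output (hypothetical helper, not Daphne's code)."""
    return {
        "mean": float(np.mean(data)),
        "min": float(np.min(data)),
        "max": float(np.max(data)),
        "99%-tile": float(np.percentile(data, 99)),
    }

def correlation(x, y):
    """Answer QT 'Is parameter X correlated with parameter Y?' with a
    Pearson correlation coefficient."""
    return float(np.corrcoef(x, y)[0, 1])

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 8001)
y = 0.8 * x + rng.normal(0.0, 0.5, 8001)
print(parameter_statistics(x))
print(f"corr(X, Y) = {correlation(x, y):.2f}")
```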
Use Cases of Daphne— A survey of the EDL architecture
analysis process helped identify two use cases in which
Daphne can be of aid to experts: 1) by reducing the cogni-
tive load and the manual labor of having to search through
multiple sources of information and 2) by providing analysis
and insights on a particular architecture.
The first item mentioned is relevant for individual and col-
lective analysis of EDL architectures. For example, due to
the human-like nature of CAs, Daphne could be incorporated
into a collective setting where experts discuss the results of
metrics from multiple simulations (e.g. different landing
sites) and assess system performance of each. At the mo-
ment, this task requires that experts search for the relevant
simulation data set and extract the values of the metric(s)
they are interested in. In some cases, additional calculations
are required. This process is repeated for each simulation.
Hence, we envision that Daphne could do this for the user. By
means of natural language, the subject matter expert can ask
Daphne for the results she/he is interested in without going
through the manual labor of searching and loading each data
set and calculating the metric of interest.
Table 1. EDL System Performance Metrics [22]

System Performance Theme | Description | Performance drivers (examples)
Altitude performance | Good altitude performance enables the system to land on higher-elevation sites. | Parachute deploy altitude, entry velocity, atmospheric density, arrival geometry, entry mass
Range to target | Distance to science target. | Entry guidance, initial navigation errors, range error, parachute deploy altitude
Time on radar | Direct measure of timeline margin: time until conditions for radar acquisition and backshell separation are met. | Heat shield jettison, altitude, spacecraft off-nadir angle
Peak entry environments | Critical peak conditions include: peak deceleration, dynamic pressure, heat rate, heat load, shear stress. | Entry velocity, mass, atmospheric density, entry angle of attack
Wind sensitivity | Presence of tail winds results in parachute deploy altitude loss. | Mach number estimation error: velocity error, speed-of-sound error, wind error
Propellant use | On-board propellant is a fixed quantity and must be closely tracked. | Velocity losses: gravity loss, cosine loss
Along these lines, we envision that Daphne can analyze and
identify critical information in a simulation and communicate
the findings to the user. For example, Daphne can identify
critical parameters that drive a particular architecture’s behav-
ior as well as extract and compare features of interest across
missions or simulations.
5. DAPHNE/EDL ARCHITECTURE
Figure 3 presents the current implementation of Daphne and
the respective front-ends, back-ends, and data sources for the
EDL role. Operations of Daphne remain unchanged from
those discussed in Section 3. In the current implementation of
Daphne, all EDL-related requests made in the user interface
are directed to the Daphne server through HTTP/Websockets,
a bi-directional line of communication established between
the client and the Daphne brain. EDL-related queries or
commands are processed by the Sentence Processor’s CNN
and classified as an EDL role. Requests are then processed
by the EDL query builder. The Query Builder uses JSON file
templates to identify the type of query, extract the features of
interest in the query (e.g., mission name, parameter name),
and direct the query to the respective executable functions
and data sources used to generate the response. A response is
then created and directed back to the client.
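The sketch below illustrates the flavor of such a JSON template and the response-building step; the field names and template contents are hypothetical, not Daphne's actual template format.

```python
import json

# Hypothetical query template in the spirit of the Query Builder: it names
# the query type, the features to extract, and the data source to use.
TEMPLATE = json.loads("""
{
  "type": "historical_parameter_lookup",
  "features": ["mission", "parameter"],
  "source": "historical_database",
  "response": "For {mission}, the {parameter} was {value}."
}
""")

def build_response(template, features, value):
    """Embed the extracted features and the retrieved value in the response."""
    return template["response"].format(**features, value=value)

features = {"mission": "MSL", "parameter": "entry velocity"}
print(build_response(TEMPLATE, features, "5.8 km/s"))
```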
Front-end
A stand-alone visual interface was created for EDL-type
queries. Largely based on the interface created for Daphne for
EO missions, the visual interface contains a panel where the
user can write a question or request to Daphne (the user can also just speak). The other two panels are the plot and answer
panels. The plot panel provides a visualization of the data users request. This plot is interactive, so users can
hover over the data and obtain the value of a specific point.
At the moment, Daphne can present two types of plots. The
first plot shown in Figure 4 is a statistical plot. This plot is
generated when a user asks about the statistics of a particular
metric from a specific simulation file. The plot contains a
histogram of the relevant data and its cumulative distribution
function.
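A plot of this kind can be produced with a few lines of matplotlib, as in the sketch below; the data are synthetic and the styling is not Daphne's.

```python
import numpy as np
import matplotlib.pyplot as plt

# Histogram plus empirical CDF for one Monte Carlo metric; synthetic data
rng = np.random.default_rng(4)
metric = rng.normal(7.5, 0.4, 8001)    # e.g., a notional deploy altitude, km

fig, ax1 = plt.subplots()
ax1.hist(metric, bins=50, alpha=0.6)
ax1.set_xlabel("metric value")
ax1.set_ylabel("count")

ax2 = ax1.twinx()                      # overlay the empirical CDF
sorted_vals = np.sort(metric)
ax2.plot(sorted_vals, np.arange(1, len(sorted_vals) + 1) / len(sorted_vals), color="k")
ax2.set_ylabel("cumulative fraction")
plt.show()
```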
The second plot is a scatter plot that is automatically gener-
ated by Daphne when the user requests a plot of two metrics
from a particular simulation file. The user can hover over the
scatter plot to see the exact values in the x-axis and y-axis as
well as the case number of that data point. Knowing the case
number of data points is useful for identifying stressing cases.
Finally, the panel below the plots displays responses in
natural language form. Use cases of Daphne for EDL are
described in more detail in the following section by means of
a case study.
Daphne Brain
The EDL assistant accepts questions in natural language
form. Figure 5 presents the question classification process.
As depicted in the figure, requests from the user are classified
into either commands or questions. In this case, we assume
that the user asks Daphne “What was the entry velocity for
MSL?”. Once the request is classified as a question, Daphne
proceeds to classify the request by type (e.g., EDL). This
task is achieved by means of a Convolutional Neural Network
(CNN). The existing algorithm in Daphne was retrained to be
able to classify questions regarding EDL. Whenever this role
is active, Daphne executes a MATLAB engine that is used
to support the skill. Reference [17] contains a more detailed
description of the CNN model implemented in Daphne.
At the moment, all of the EDL questions are contained within
the EDL role. However, the EDL capability is not meant to be
a role like the Analyst or the Critic. Rather, the intention is to
be able to use the existing skills for EDL simulation datasets,
by simply modifying the sources of data. However, for this
first prototype, all EDL-related queries are addressed by the
EDL role.
Following question classification by type, Daphne searches
for the information requested in the query. JSON file
templates available in Daphne specify the name/value pairs
required to respond to a particular query and are used to
search for the features requested.

Figure 3. Current Daphne Architecture.

Figure 4. Daphne interface for EDL architecture analysis.

Figure 5. Daphne data extraction process.

For the query “What
was the entry velocity for MSL?”, we want to extract two
features: mission name and parameter. Feature extractors
match the sentences to lists of known values for the requested
information. Daphne's implementation of the statistical model provided by Sellers et al. accounts for mistakes (e.g., typos) in the user's request [30]. In this case, features are
extracted from the historical database in the query section of
the template. Finally, after features are extracted, results are
embedded into the template response. The response is then
returned to the user at the front end through voice or through
the visual response template.
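The sketch below illustrates typo-tolerant feature extraction using difflib's fuzzy string matching as a stand-in for the statistical model cited above; the mission list is illustrative.

```python
import difflib

# Known mission names the extractor matches against (illustrative list)
KNOWN_MISSIONS = ["MSL", "Mars Pathfinder", "Phoenix", "Viking 1", "Viking 2"]

def extract_mission(token):
    """Return the closest known mission name, tolerating typos."""
    matches = difflib.get_close_matches(token, KNOWN_MISSIONS, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(extract_mission("Pathfnder"))  # -> "Mars Pathfinder"
```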
Capabilities
Although in the current implementation, all EDL analysis
capabilities are contained in a single role labeled EDL,
Daphne for EDL architecture analysis is currently a historian
and analyst of sorts. Furthermore, the current capabilities
help automate the task of manually searching for information
concerning previous EDL architectures as well as information
from EDL simulation datasets. Daphne can handle questions
regarding historical information and whether a particular simulation satisfies performance requirements, and it can provide basic statistics on EDL parameters and metrics.
Backends
At the moment, Daphne for EDL does not make use of any
machine learning algorithms for evaluating an EDL architec-
ture. For EO, Daphne makes use of an architecture evaluation
algorithm, data mining capabilities, a genetic algorithm and
a clustering algorithm. As a part of future work, we plan on
incorporating capabilities for architecture evaluation and data
mining for obtaining insight on features of interest in an EDL
simulation dataset.
The data sources described in the adaptation of Daphne for EDL (historical database and expert knowledge base) were
incorporated into the current Daphne architecture. Any
historical information is directed to the historical database,
whereas commands such as “calculate the landing ellipse”
are directed to the EDL scorecard template to extract the
equations required to analyze the entry problem at hand.
The MATLAB engine connected to Daphne is in charge of
performing the calculation requested and directing the result
back to Daphne. Furthermore, if the user is interested in
examining the scorecard for a particular simulation, upon
request, Daphne can create one using the template available.
With the scorecard stored in Daphne’s working memory, the
user can request information about simulation results and
whether they satisfy system requirements. For example, users
can ask what metrics in the scorecard are flagged and Daphne
returns a list of the flagged metrics and the simulation results
compared against the requirements.
6. CASE STUDY
Context and Goals for the Case Study
Landing site selection for interplanetary missions is largely
driven by science objectives [23]. However, it is also con-
strained by the system’s ability to land safely on the target
region. For the upcoming Mars 2020 mission, a team of
scientists narrowed down the list of candidate landing sites to
three: Columbia Hills, Northeast Syrtis, and Jezero Crater.
Engineering and science operations were the primary criteria
for this selection. A fourth site was added to the list later:
Midway (MDW). This landing site lies between Northeast
Syrtis (NES) and the Jezero (JEZ) crater and provides an
opportunity to collect high-content science data from both
sites [24].
In the case study, we will present the current capabilities of
Daphne for EDL architecture analysis using the simulation
for the Midway landing site. The work discussed in this
section represents a first prototype of the CA and emphasizes
the automation capabilities of Daphne, as opposed to those
related to generating truly new insights. This case study will
also support the task of identifying capabilities that need to
be added to Daphne/EDL in the future.
Use Case Scenario
In the scenario presented, we assume that the expert is inter-
ested in examining the outputs of the Mars 2020 architecture
with Midway as its target landing site. This mission inherited
the MSL EDL architecture segments (Figure 6) of: guided
entry, parachute descent, powered descent, skycrane maneu-
ver, and flyaway. Furthermore, Mars 2020 has incorporated
terrain relative navigation (TRN) as a new EDL technology
that provides capabilities for hazard avoidance and landing
accuracy improvement.
For the sake of demonstrating some of the automation tasks
of Daphne, the dialogue presented assumes that the end user
is an expert in the field of EDL. Hence, the dialogue presented
will follow the process a Simulator would likely use.
Figure 7 illustrates a sample of the dialogue between Daphne
and the Simulator.
In this scenario, the user is interested in examining outputs of
a simulation. The Scorecard is used to rapidly examine all
relevant metrics for the mission. The EDL expert requests
Daphne to load the simulation file and generate a Scorecard
Figure 6. Mars 2020 EDL architecture.
for this landing site. Daphne returns to the front-end that the
simulation file has been loaded and that the Scorecard has
been generated. The scorecard generation task is achieved by
executing the corresponding scripts and templates the current
EDL teams use to create the scorecard.
With the scorecard stored in Daphne’s context, the expert
then asks Daphne what metrics in the Scorecard are flagged.
Daphne returns a list of metrics that are flagged along with
the values that require attention. Along the same lines, the
user can request to view which metrics are out of spec, i.e., do
not satisfy system requirements.
One way to further examine these metrics of interest, such as
the flagged metric “peak inflation axial load”, is by means of
visualization. For example, the expert may request Daphne
to plot parachute full inflation load as a function of time for
full inflation. Daphne returns a scatter plot where the user is
given the option to hover over the data points to visualize
the detailed values and the case number depicted in that
data point. In this example, when the user hovers, Daphne
displays: “x: 794.33, y: 252,037.07, case: 2789”. Obtaining such values is useful for further examining specific cases of
interest.
To obtain additional information about the metric of interest
at the moment (parachute maximum inflation load), the user
then asks Daphne for the statistics of the metric. Daphne re-
turns to the front end an interactive histogram and cumulative
distribution function for the user along with detailed statistics
(e.g. mean, min, max).
As stated in Section 2, simulation results are often compared
to other simulations as well as historical information on
relevant metrics. In this case, we assume the expert is
interested in comparing simulation results to those for the
NES landing site. Functions incorporated in Daphne allow
for experts to ask about the results of metrics from other
simulations. For example, the user can ask Daphne “for
the NES simulation file, calculate peak inflation axial load.”
This way the user does not have to go through the process
of generating a scorecard and search for the relevant metric.
Daphne responds to the user with “the peak inflation axial
load is 62.42 lbs.” If the scorecard is already available,
the user also has the option of asking for the values of
other metrics for the scorecard under examination: “from
the scorecard name, what are the POST results for parachute
deploy Mach number ?” Daphne responds with the values
available. In some cases, more than one result is available.
As in this example, results for parachute deploy Mach number
are expressed in percentiles with multiple values, and all are
delivered to the end user.
Assuming that the expert is interested in comparing simula-
tion results of the parachute deploy Mach number to previous
missions, the user can ask Daphne. When the expert
asks “for MSL, what was the parachute deploy Mach number
?”, Daphne forwards the request to the EDL role, which directs
the query to the historical database.
Opportunities for Daphne EDL
Based on the use cases presented, there are several oppor-
tunities to improve Daphne’s capabilities. As observed in
Figure 7, Daphne can extract results from simulation datasets,
whether it is through MATLAB or through the scorecard, and
identify metrics that are flagged or out of spec. The scatter
plot provides a visualization of the results and the user can
hover over outliers to obtain case numbers. At the moment,
Daphne only provides the case number. However, Daphne
can potentially record the case numbers a user selects for
further examination. For example, Daphne could load the
trajectories for the cases specified. The user could then visu-
alize these trajectories in an attempt to identify any abnormal
behavior. To further exploit the data available, Daphne can
aid the user in this task by making use of sensitivity analysis
and data mining techniques to provide insight on the driving
features for a particular system behavior.
Figure 7. Daphne use case: sample dialogue between a Simulator and the Daphne CA.

Figure 8. Daphne use case (continued).

Extending sensitivity analysis beyond the one-at-a-time ap-
proach is particularly interesting given that other methods can
take into account the simultaneous variation of inputs. Global
variance-based sensitivity methods such as the Sobol method
are especially promising given that they allow full exploration
of the input space and account for variable interactions [25].
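As a sketch of what such a variance-based analysis could look like, the example below uses the SALib library with a placeholder model standing in for a trajectory simulation; the input names, bounds, and model are hypothetical.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical dispersed inputs; a real analysis would wrap trajectory runs
problem = {
    "num_vars": 3,
    "names": ["entry_velocity", "density_scale", "entry_fpa"],
    "bounds": [[5700, 5900], [0.9, 1.1], [-15.8, -15.2]],
}

X = saltelli.sample(problem, 1024)   # Saltelli sampling of the input space

def model(x):
    """Placeholder figure of merit standing in for a simulation case."""
    v, rho, fpa = x
    return 0.002 * v ** 2 * rho + 30.0 * (fpa + 15.5) ** 2

Y = np.apply_along_axis(model, 1, X)
Si = sobol.analyze(problem, Y)       # first-order and total Sobol indices
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:>15}: S1={s1:.3f}  ST={st:.3f}")
```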
Explanation abilities of Daphne/EDL can be incorporated
using statistical learning techniques already available for
Daphne/EO. Daphne/EO uses Association Rule Mining (ARM) techniques for knowledge extraction. As the name suggests, ARM techniques find statistical associations between elements in a dataset and represent them in the form of logical rules F → G, which are interpreted as “whenever F is true, then G is also likely to be true”, where F and G are any binary features, such as “entry mass greater than 1000 kg” or “out of spec on the amount of fuel remaining at DSI”. The quality of these rules can be assessed by means of importance measures such as support and confidence. Support refers to the frequency of the rule's applicability to a given data set. Confidence determines the frequency with which items in the dataset containing F also contain G; in other words, how good a predictor F is of G.
Given that knowledge extraction by means of ARM comes
in the form of rules, knowledge is easily comprehensible for
the end-user. Data mining of one or multiple datasets can
identify patterns, and the cognitive assistant can deliver the
knowledge extracted in the form of several such rules.
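Support and confidence are simple to compute for a candidate rule over binary features, as the sketch below shows; the features and data are synthetic stand-ins for scorecard-derived flags.

```python
import numpy as np

# Synthetic binary features for a candidate rule F -> G
rng = np.random.default_rng(5)
entry_mass_high = rng.random(8001) < 0.3                        # F: entry mass > 1000 kg
fuel_out_of_spec = entry_mass_high & (rng.random(8001) < 0.7)   # G, correlated with F

def support(F, G):
    """Fraction of all cases where both F and G hold."""
    return np.mean(F & G)

def confidence(F, G):
    """Among cases where F holds, the fraction where G also holds."""
    return np.sum(F & G) / np.sum(F)

print(f"support    = {support(entry_mass_high, fuel_out_of_spec):.3f}")
print(f"confidence = {confidence(entry_mass_high, fuel_out_of_spec):.3f}")
```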
Finally, as seen in the second section of the use case, Daphne
can extract information across different simulations. To
enhance the analysis capabilities of Daphne, we envision the
system possessing the ability to compare metric values across
these simulations. For example, “how different is peak
inflation load in the MDW landing site vs the one obtained for
NES.” Example expected answers include “the value of peak
inflation load for NES is significantly greater than that for
MDW” or “the value of NES is not statistically significantly
different from the one obtained in MDW”.
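Such a comparison could be backed by a standard two-sample test, as in the sketch below; the data, metric, and significance threshold are hypothetical, and Welch's t-test is our choice here, not an established Daphne capability.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Synthetic peak inflation load samples for two landing-site simulations
peak_load_mdw = rng.normal(250_000.0, 8_000.0, 8001)
peak_load_nes = rng.normal(254_000.0, 8_000.0, 8001)

# Welch's two-sample t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(peak_load_mdw, peak_load_nes, equal_var=False)
if p_value < 0.01:   # hypothetical significance threshold
    print(f"NES differs significantly from MDW (p = {p_value:.1e})")
else:
    print("no statistically significant difference detected")
```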
7. CONCLUSIONS
In this paper, we introduced a CA for EDL architecture
analysis based on Daphne, an existing CA that specializes
in supporting the design of satellite constellations. The main
objective is to use Daphne as a platform for IDU technologies
for performance analysis of the next generation of EDL
architectures. Expanding Daphne's capabilities will help EDL design teams rapidly evaluate EDL performance metrics, identify high-information-content data, and improve the uncertainty characteristics of their inferences by making use of multiple sources of knowledge. With the current
implementation of Daphne, we have shown by means of a
case study that Daphne can handle EDL-domain data and can
reduce cognitive load of EDL experts by providing relevant
information about the data under examination in a timely
manner. Daphne automates many of the steps required to
obtain such information. For example, we demonstrated that
Daphne can load simulation data sets for the user and provide values, statistics, and calculations of variables of interest.
In addition, Daphne can provide visualizations of the data.
Daphne can also generate a scorecard upon request and can
search for historical data on past EDL architectures through
the EDL database.
The ability to communicate through natural language by
means of text or verbal requests makes Daphne a good
candidate for being employed in a team setting. For example,
experts discussing multiple simulations can turn to Daphne
for searching any information that is not readily available
in their results packet. However, because Daphne for EDL
architecture analysis is still in its early stages, the information
extracted does not necessarily provide any truly new insights
on the effect of variables and/or combinations of variables
in the metric values, for example. In other words, Daphne
tells us what we already know or could easily know, but
with less work to obtain this information. The next steps
are to incorporate analysis capabilities to Daphne for EDL
architecture analysis and try to obtain new information that
“we really don’t know”. Future work aims to address this task
by incorporating sensitivity analysis and machine learning
techniques.
ACKNOWLEDGMENTS
The authors would like to thank the NASA Science and
Technology Research Fellowship (NSTRF) for funding this
work. The authors would also like to thank Antoni Virós,
the main developer of Daphne, for his support with adapting
Daphne to the EDL domain.
REFERENCES
[1] R. Braun and R. Manning, “Mars Exploration Entry, De-
scent, and Landing Challenges,” Journal of Spacecraft
and Rockets, vol. 44, no. 2, pp. 310–323, 2007.
[2] S. A. Striepe, D. W. Way, A. M. Dwyer, and J. Balaram,
“Mars Science Laboratory Simulations for Entry, De-
scent, and Landing,” Journal of Spacecraft and Rockets,
2006.
[3] D. W. Way, J. L. Davis, and J. D. Shidner, “Assessment
of the Mars Science Laboratory entry, descent, and
landing simulation,” in Advances in the Astronautical
Sciences, vol. 148, 2013, pp. 563–581.
[4] S. J. Kapurch, “NASA Systems Engineering Hand-
book,” NASA Special Publication, 2007.
[5] M. K. Lockwood, R. W. Powell, K. Sutton, R. K.
Prabhu, C. A. Graves, C. D. Epp, and G. L. Carman,
“Entry configurations and performance comparisons for the Mars Smart Lander,” Journal of Spacecraft and Rockets, vol. 43, no. 2, p. 258, 2006.
[6] G. Wells, J. Lafleur, A. Verges, K. Manyapu, J. Chris-
tian, C. Lewis, and R. Braun, “Entry, descent, and land-
ing challenges of human mars exploration,” in Advances
in the Astronautical Sciences, 2006.
[7] Z. Ghahramani, “Probabilistic machine learning and
artificial intelligence,” Nature, vol. 521, no. 7553, p. 452, 2015.
[8] C. I. Restrepo and J. E. Hurtado, “Tool for rapid analysis
of monte carlo simulations,” Journal of Spacecraft and
Rockets, vol. 51, no. 5, pp. 1564–1575, 2014.
[9] E. Baumann, C. Bahm, B. Strovers, R. Beck, and
M. Richard, “The x-43a six degree of freedom monte
carlo analysis,” in 46th AIAA Aerospace Sciences Meet-
ing and Exhibit, 2008, p. 203.
[10] P. Williams, “A monte carlo dispersion analysis of the
x-33 simulation software,” in AIAA Atmospheric Flight
Mechanics Conference and Exhibit, 2001, p. 4067.
[11] D. Petrick, A. Geist, D. Albaijes, M. Davis, P. Spara-
cino, G. Crum, R. Ripley, J. Boblitt, and T. Flatley,
“SpaceCube v2.0 space flight hybrid reconfigurable data
processing system,” in IEEE Aerospace Conference
Proceedings, 2014.
[12] S. Chien, B. Cichy, A. Davies, D. Tran, G. Rabideau,
R. Castaño, R. Sherwood, D. Mandl, S. Frye, S. Shul-
man, J. Jones, and S. Grosvenor, “An autonomous earth-
observing sensorweb,” 2005.
[13] P. Maes, “Agents that reduce work and information
overload,” in Readings in Human–Computer Interac-
tion. Elsevier, 1995, pp. 811–821.
[14] Z. Ghahramani, “Probabilistic machine learning and
artificial intelligence,” Nature, vol. 521, no. 7553, p.
452, 2015.
[15] D. Schum, G. Tecuci, D. Marcu, and M. Boicu, “Toward
Cognitive Assistants for Complex Decision Making
Under Uncertainty,” Intelligent Decision Technologies,
vol. 8, no. 3, pp. 231–250, 2014.
[16] R. Nayak, “Intelligent data analysis: Issues and
challenges,” in 6th World Multi Conferences on
Systemics, Cybernetics and Informatics, 2002. [Online].
Available: https://eprints.qut.edu.au/1479/
[17] H. Bang, A. Viros, A. Prat, and D. Selva, “Daphne : An
Intelligent Assistant for Architecting Earth Observing
Satellite Systems,” in AIAA SciTech 2018, 2018.
[18] E. Özyurt and B. Döring, “A Cognitive Assistant for Supporting Air Target Identification on Navy Ships,”
IFAC Proceedings Volumes, 2012.
[19] R. Onken and A. Walsdorf, “Assistant systems for air-
craft guidance: Cognitive man-machine cooperation,”
Aerospace Science and Technology, 2001.
[20] S. A. Wilkins, “Examination of pilot benefits from cog-
nitive assistance for single-pilot general aviation opera-
tions,” in Digital Avionics Systems Conference (DASC),
2017 IEEE/AIAA 36th. IEEE, 2017, pp. 1–9.
[21] G. Tokadlı and M. C. Dorneich, “Development of a
functionality matrix for a cognitive assistant on long
distance space missions,” in Proceedings of the Hu-
man Factors and Ergonomics Society Annual Meeting,
vol. 61, no. 1. SAGE Publications Sage CA: Los
Angeles, CA, 2017, pp. 247–251.
[22] D. W. Way, R. W. Powell, A. Chen, A. D. Steltzner,
A. M. San Martin, P. D. Burkhart, and G. F. Mendeck,
“Mars science laboratory: Entry, descent, and landing
system performance,” in Aerospace Conference, 2007
IEEE. IEEE, 2007, pp. 1–19.
[23] J. A. Grant, M. P. Golombek, J. P. Grotzinger, S. A.
Wilson, M. M. Watkins, A. R. Vasavada, J. L. Griffes,
and T. J. Parker, “The science process for selecting the
landing site for the 2011 mars science laboratory,” in
Planetary and Space Science, 2011.
[24] J. A. Grant, M. P. Golombek, S. A. Wilson, K. A. Farley,
K. H. Williford, and A. Chen, “The science process
for selecting the landing site for the 2020 Mars rover,”
Planetary and Space Science, 2018.
[25] A. Saltelli and S. Tarantola, “On the Relative Impor-
tance of Input Factors in Mathematical Models,” Jour-
nal of the American Statistical Association, 2002.
BIOGRAPHY
Samalis Santini received her B.S. in Me-
chanical Engineering from the Univer-
sity of Puerto Rico at Mayaguez in 2016
and her M.S. in Aerospace Engineering
at Cornell University in 2018. She is
currently a Ph.D. graduate student at
Texas A&M University in the Aerospace
Engineering department and is a part of
the Systems Engineering, Architecture,
and Knowledge (SEAK) Lab. Her inter-
ests are the application of statistical learning and knowledge
extraction techniques for architecture analysis of Entry, De-
scent, and Landing (EDL) Systems.
Daniel Selva is an Assistant
Professor of Aerospace Engineering at
Texas A&M University, where he directs
the Systems Engineering, Architecture,
and Knowledge (SEAK) Lab. His re-
search interests focus on the applica-
tion of knowledge engineering, global
optimization and machine learning tech-
niques to systems engineering and ar-
chitecture, with a strong focus on space
systems. Before doing his PhD at MIT, Daniel worked
for four years in Kourou (French Guiana) as an avionics
specialist within the Ariane 5 Launch team. Daniel has
a dual background in electrical engineering and aerospace
engineering, with degrees from MIT, Universitat Politecnica
de Catalunya in Barcelona, Spain, and Supaero in Toulouse,
France. He is a member of the AIAA Intelligent Systems
Technical Committee, and of the European Space Agency’s
Advisory Committee for Earth Observation.
David Way is an Aerospace Engineer in the Atmospheric Flight and Entry Systems Branch at the NASA Langley Research Center. His areas of expertise are flight mechanics and the modeling and simulation of planetary entry systems. Dr. Way has a Ph.D. and M.S. in Aerospace Engineering from the Georgia Institute of Technology and a B.S. in Aerospace Engineering from the United States Naval Academy, Annapolis.