Proceedings of IMECE2005
2005 ASME International Mechanical Engineering Congress and Exposition
November 5-11, 2005, Orlando, Florida USA
IMECE2005-81690
AN INFORMATION-EXCHANGE TOOL FOR CAPTURING AND COMMUNICATING DECISIONS
DURING EARLY-PHASE DESIGN AND CONCEPT EVALUATION
Irem Y. Tumer, Ph.D.
Ali Farhang Mehr, Ph.D.
Francesca Barrientos, Ph.D.
NASA Ames Research Center
Moffett Field, CA 94035
{itumer, amehr, fbarrientos}@email.arc.nasa.gov
David Ullman, Ph.D.
Robust Decisions, Inc.
ullman@robustdecisions.com
ABSTRACT
Capturing and communicating risk and uncertainty for
NASA’s low-volume, high-cost exploration missions has
become the subject of intensive research in the past few years.
As a result, a variety of quantitative and qualitative
methodologies were developed, some of which have been
adopted and implemented by various NASA centers in the form
of risk management tools, procedures, or guidelines. Most of
these methodologies, however, aim at the later stages of the
design process or during the operational phase of the mission
and therefore, are not applicable to the earlier stages of design.
In practice, however, uncertainties in the decisions made during
the early stages of design introduce a significant amount of risk
to the concepts that are being evaluated. In this paper, we aim
to capture and quantify uncertainty and risk due to the lack of
knowledge as well as those associated with potential system
failures. We present an information exchange tool (X-Change)
that enables various subsystem designers to capture, quantify,
and communicate the uncertainties due to their lack of
knowledge, as well as those due to failures, which may not be
readily available or easily quantifiable. A key piece of this
work is to incorporate risk and uncertainty due to lack of
knowledge during the early design phase and to combine it
with the potential failure modes. The challenges we face in
accomplishing this goal are: 1) lack of a unified ontology
defining risk, uncertainty and failure in order to enable their use
on common grounds; 2) difficulty in expressing and capturing
risk and uncertainty due to the designers’ lack of knowledge at
the early stages of design; 3) difficulty in accounting for
potential failure modes and their associated risks at the
functional design level, before a form or solution has been
determined. In order to address these challenges, this paper first
attempts to provide a definition for risk and uncertainty. Then,
we present the results of an ongoing effort to develop a risk-
based design tool for the concurrent mission design
environment at NASA. We propose a framework that enables
multiple subsystems to capture and communicate the relevant risk
and uncertainty in their decisions. The application of the
proposed framework is further elaborated using a satellite
design example.
1. PROLOGUE
Today’s most complex engineering systems are often
designed by multidisciplinary teams of experts in a concurrent
(and sometimes distributed) fashion. In the context of space
exploration missions, in particular, the efficiency of such
concurrent engineering teams can greatly impact the time and
costs associated with iterative design, evaluation, and risk
analysis. There are several real-time concurrent design teams at
the various NASA centers that produce conceptual designs of
space missions for the purpose of analyzing the feasibility of
the overall mission, estimating the associated costs, and
documenting system/subsystem design requirements (for a
detailed description of NASA’s design and development
processes, see for example: Chao et al 2004 and 2005). Despite
the indispensable role that these concurrent engineering teams
play in designing today’s space exploration missions, there are
still a few major challenges that are left untreated:
Developing a Sufficient Understanding of Risks and
Uncertainties: The risk elements associated with
decisions that are made at both system and subsystem
levels are not adequately captured and described due to the
lack of a standardized approach or unified risk ontology.
Most conventional risk analysis approaches are not suitable
for the highly interactive and rapidly evolving concurrent
design environments, where the models are vague,
decisions are distributed and otherwise difficult to capture
and probabilities are difficult to assign. Studies and design
reviews have pointed to the early design stages as one of
the best opportunities to catch potential failures and
anomalies (e.g. Mahadevan and Smith 2003, Tumer and
Stone 2005). Current state-of-the-art practices, however, do
not provide a rigorous means to effectively capture, collect, and
combine risk elements and the corresponding mitigating
factors from all subsystem experts and system-level
managers.
Exploring Large Design Spaces (Tradeoff Study): In
most concurrent engineering teams, the designers must
rapidly find a feasible conceptual design for a space
mission that satisfies the given requirements. Given the
limited time scale and the lack of a robust platform on
which system-level decision makers can combine multiple
alternatives from various subsystems to explore a full
1 Copyright © 2005 by ASME
range of design options, the final result is often a single
“point design”. Although this point design may satisfy the
mission requirements, it is generally not “optimal”. This is
partially due to the fact that high fidelity models exist
mostly at the subsystem level, and only one possible
solution to each individual sub-problem is communicated
to other subsystems or to the higher-level managers.
Building Team Consensus to Converge to the Most
Desirable Solution: Due to the numerous dependencies
that exist between the various subsystems in a spacecraft,
and the speed with which the engineers make design
decisions, the subsystem engineers are sometimes unaware
of the important design choices of others. In the absence
of a team consensus building strategy, very often the final
design is influenced by the decision of the very few most
influential team members. The only way to keep the
engineers informed about the design options under
consideration is by informing them about the decisions
related to them dynamically (i.e., live information feed)
and providing the system-level managers with a decision-
making tool that guides the design process in the direction
of overall consensus given inputs from all subsystem
experts.
Ongoing work at NASA Ames Research Center
(ARC) and Robust Decisions Inc. (RDI) aims to address these
challenges by means of an information exchange tool, called X-
Change, that enables various subsystem experts as well as
system-level managers to capture, quantify, and communicate
their design decisions and the associated risks and uncertainties
(whether easily-quantifiable or just based on the expert
opinion). This work is intended to provide a communication
platform that can be used by concurrent engineering teams to
devise a more efficient and robust team decision making
process, account for risk and uncertainty in multiple levels of
design fidelity, and work towards exploring the design space
for the most desirable solution. The potential improvements to
the performance of the concurrent engineering teams in the
early phases of designing space exploration missions, while
relatively inexpensive to achieve, can have an enormously
positive impact downstream on the risk and cost of the
implementation and operation phases.
The organization of the rest of this paper is as follows:
Sections 2 and 3 further elaborate the proposed methodology.
In Section 4, we present the results of a design case study
using the proposed framework. The case study involves the
design of a satellite, referred to as KatySat, by a team of
designers, and shows how X-Change can be used to provide a
platform for capturing and communicating decisions, risk
elements, and mitigating factors. Finally, Section 5 is
dedicated to concluding remarks.
2. RISK AND UNCERTAINTY IN EARLY DESIGN
Researchers have developed a wide variety of risk and
uncertainty identification methods over the past few decades
(for reviews of such methods, see for instance, Zang et al. 2002,
Backman 2000, Choi 2001, Du and Chen 2002, Smith and
Mahadevan 2003). In particular, failure analysis tools have
been widely used by NASA in evaluating the safety of
aerospace systems (examples can be seen in NASA’s risk
management guidelines). Analysis results identify how the
likelihood of failure might be reduced through design changes
(See Greenfield 2000 for a review of risk analysis techniques at
NASA). The bulk of these techniques, however, fall short in
the early-stage concurrent engineering environments where
team members are involved in making rapid design decisions in
a hierarchical and multi-subsystem fashion. As such, this
research aims to provide a unified platform for making risk-
informed design decisions in NASA’s concurrent engineering
teams as risk elements are identified by the experts and
communicated in a structured manner.
NASA guidelines describe the term risk in terms of the
likelihood and consequence of an incident that could prevent a
mission or mission system from meeting its objectives. In
slightly different words:
Definition: Risk is often defined as a triplet: a
scenario, the probability of that scenario, and the corresponding
consequence. Risk usually has a negative connotation but can
also be a measure of opportunity (i.e., the consequence is
good). Example: If the probability of a certain control system
shutdown due to thermal fatigue is 0.2% and the consequence is
Loss Of Crew (LOC), then the risk of such an event can be
described as a 3-tuple: (control system shutdown, 0.2%, LOC).
The term uncertainty, on the other hand, is more general
and is a characteristic of the stochastic process itself (rather
than its outcome, i.e., event):
Definition: Uncertainty is a characteristic of a stochastic
process that describes the dispersion of its outcome (i.e., event)
over a certain domain (e.g., over time, space, different failure
modes, quantity etc). Example: Thermal fatigue is a stochastic
process that may incur 2 different failure modes in the above-
mentioned control system: 1) complete shutdown (which results
in LOC), 2) partially functioning (which results in Loss Of
Mission, i.e., LOM, but does not create an LOC). Then the set
of possible events (failure modes) along with their probabilities
show the uncertainty associated with thermal fatigue at that
certain control system: {(complete shutdown, 0.2%, LOC),
(partially functioning, 0.5%, LOM)}. Note that in this
particular example, there are two discrete outcomes. The
outcome of a stochastic process may also be continuous, in
which case the associated uncertainty is described as a
continuous probability distribution. The sources of uncertainty
may vary and can be epistemic or aleatory (i.e., due to lack of
knowledge or to physical variations in the system, respectively).
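The two definitions above can be made concrete in a short sketch that encodes the examples as data (the `Risk` type and all variable names are ours, purely for illustration):

```python
from typing import NamedTuple

class Risk(NamedTuple):
    """A risk triplet: (scenario, probability, consequence)."""
    scenario: str
    probability: float
    consequence: str

# Risk of a single event: the 3-tuple from the example above.
shutdown = Risk("control system shutdown", 0.002, "LOC")

# Uncertainty of the thermal-fatigue process: the full set of
# possible outcomes (failure modes) with their probabilities.
thermal_fatigue = [
    Risk("complete shutdown", 0.002, "LOC"),
    Risk("partially functioning", 0.005, "LOM"),
]

# Probability that thermal fatigue causes some failure
# (valid because the failure modes are mutually exclusive).
p_any_failure = sum(r.probability for r in thermal_fatigue)
print(p_any_failure)  # ~0.007
```

A continuous outcome would replace the discrete list with a probability distribution over the outcome domain.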
In this work, we classify uncertainty and the associated
risks into three different types, and introduce ways to mitigate
them throughout the lifecycle of a space mission. Table 1
presents the three types of uncertainty and the corresponding
analysis and mitigation methods using the proposed X-Change tool:
Table 1. Types of Uncertainty and Mitigation Approach

Design Uncertainty
Source: The stochastic nature of the design process (human nature).
Analysis method in X-Change: A Bayesian method is implemented that fuses experts’ opinions to calculate the risk breakdown.
Mitigation method in X-Change: X-Change uses Bayesian Team Support to analyze alternative designs. It calculates satisfaction and risk metrics. It also employs an expert system to suggest areas of further exploration to reduce design uncertainty and risk.

System Uncertainty
Source: Potential failure modes in the system that either have been identified in the current design (i.e., a known functional failure in a physical device) or are yet unknown.
Analysis method in X-Change: From a knowledgebase of functional failures in similar systems, X-Change analyzes the expected risk premium of each failure mode, which can then be used by designers to mitigate those risks.
Mitigation method in X-Change: Resources (e.g., time, money) are re-distributed to minimize the total risk of the system (i.e., resources are allocated to reduce overall risk and uncertainty).

Variation
Source: Deviation from the design due to the implementation process (what comes out of the implementation phase may not be exactly what the designers expected originally).
Analysis method in X-Change: N/A.
Mitigation method in X-Change: This type of uncertainty pertains to a later stage of the mission lifecycle and is not handled by X-Change, which is a design tool.
Design uncertainty refers to a lack of knowledge about the
product or process. The frequentist methods standardly used to
model risk do not support design uncertainty, as modeling it
requires looking into the evolving future rather than relying on
past statistics. Bayesian methods can be used to model this
unknown future. Risk is not usually associated with design
uncertainty; however, until a product or process is in use
(design and test are complete), it is the dominating type of
uncertainty. System uncertainty refers to physical characteristics
of the system governed by the laws of physics (e.g., failure due
to fatigue). This type of uncertainty may be difficult to model
early in the design process; however, as the design process
progresses and high-fidelity models and simulations become
available, a better prediction of this class of risks is possible.
Finally, variation is the natural deviation from the expected
value and is the lower limit of uncertainty for a given product
or process. The only way to determine variation is to build the
product or run the process repeatedly and measure how it varies
over time or on some other basis. Variation of products and
processes can be brought under statistical control, and is
normally measured and managed using frequentist probability methods.
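As a toy illustration of this frequentist view of variation (the measurements below are invented), repeated runs yield a sample mean and standard deviation, and conventional ± 3 sigma control limits:

```python
import statistics

# Hypothetical repeated measurements of the same product
# (e.g., unit mass in kg) from successive build/run cycles.
masses = [0.98, 1.01, 0.99, 1.02, 1.00, 0.97, 1.03]

mean = statistics.mean(masses)
stdev = statistics.stdev(masses)  # sample standard deviation

# A process is "under statistical control" when its output stays
# within the control limits, conventionally mean +/- 3 sigma.
lower, upper = mean - 3 * stdev, mean + 3 * stdev
in_control = all(lower <= m <= upper for m in masses)
print(round(mean, 3), round(stdev, 3), in_control)
```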
Figure 1 shows the timeline of a space mission design
process and how various types of uncertainty may impact the
final design. Early in the design process, when modeling is
entirely based on low fidelity simulations, virtually all the
uncertainty is design uncertainty. As the configurations mature
and specific systems solidify, system uncertainty and variation
can be more accurately modeled.
Figure 1. The impact of various types of uncertainty throughout
the lifecycle of a space mission.
The proposed approach captures risk and uncertainty from
two sources: from engineers and technologists during their
design process, and from a knowledgebase of functional
failures. We integrate them to provide a comprehensive
understanding of the overall system risks, as described in the
following.
Risk and Uncertainty Analysis based on the Designer’s
Knowledge and Design Constraints: A Bayesian approach
has been developed to address design uncertainties based on
expert opinion as well as limited quantitative data that may be
available to the designer (Ullman 1997, Orr 2001). While most
design decisions in the current design efforts at NASA are
made as if information were deterministic, spacecraft designers
realize that it is not, especially early in the design process. In
fact, NASA experts have a number of ways of indirectly
referring to such uncertainties. For example, the subsystem
designers refer to a technology’s maturity by noting its
Technology Readiness Level (TRL). The TRL describes the
uncertainty in the successful realization of the proposed
technology as a numerical value between 1 and 9 (the lower the
TRL, the higher the uncertainty in any evaluation associated
with the system). The assignment of TRL values is not well documented, but is
generally understood. Uncertainty in human knowledge is
managed in the proposed approach for both parameter targets
and parameter values resulting from evaluations.
Bayesian Team Support, hereafter BTS, is a decision
support methodology for managing the mix of qualitative and
quantitative uncertainty and the evolving, conflicting
information that characterizes early design. For a single issue, its current
instantiation, referred to as Accord (For more information, see
www.robustdecisions.com), can fuse a combination of
simulation results and human opinions to give a window into
which concepts are most likely to succeed. It also provides an
estimation of design risks, and is equipped with an expert
system that determines where additional work will help ensure
that the best possible decisions are being made (Ullman 1997).
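Accord’s internal algorithms are not described here, but the flavor of fusing several uncertain estimates into one can be conveyed with a standard precision-weighted Gaussian update (an illustrative stand-in, not the actual BTS/Accord method):

```python
import math

def fuse_gaussian(estimates):
    """Fuse independent (mean, stdev) estimates by precision weighting.

    This is the textbook Bayesian update for Gaussian likelihoods with
    a flat prior; each source counts in proportion to 1/variance.
    """
    precisions = [1.0 / (s * s) for _, s in estimates]
    total = sum(precisions)
    mean = sum(m * p for (m, _), p in zip(estimates, precisions)) / total
    return mean, math.sqrt(1.0 / total)

# Two expert opinions and one simulation result for, say, a radio's
# power draw in watts (values invented):
opinions = [(2.0, 0.5), (2.4, 0.3), (2.2, 0.2)]
fused_mean, fused_sigma = fuse_gaussian(opinions)
# The fused estimate is tighter than any single source.
print(round(fused_mean, 2), round(fused_sigma, 2))
```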
Functional-Failure Based Risk and Uncertainty
Analysis Tool: A risk analysis approach is under development
that utilizes a knowledge base of previous failures to identify
potential failure modes and the corresponding risks and
mitigation factors. The approach, hereafter referred to as Risk
and Uncertainty Based Integrated Concurrent Design (or
RUBIC-Design), provides a rigorous basis for using functional
failure data to guide the design process throughout the design
cycle. RUBIC is based on the functional model of space
exploration systems (or any other mission-critical engineering
system for that matter) and has the capability of adjusting in
real-time as the overall system evolves throughout the design
process (for details of RUBIC methodology, see Mehr and
Tumer 2005).
3. THE OVERALL METHODOLOGY: X-CHANGE
X-Change is a system-of-systems (SoS) trade study
manager based on BTS and RUBIC that helps a team choose the
most risk-aware, robust, spirally supportive, and reusable
configuration early in the design process. The goal is for X-
Change to add little burden to the team while returning real-
time design process guidance. X-Change is based on the
combination of three methodologies: ICEMaker, BTS/Accord
(Ullman 1997), and RUBIC design (Mehr and Tumer 2005).
ICEMaker© is a network-based design and decision parameter
storage and transfer system used by NASA’s Team-X mission
design team in over 600 System-of-systems design studies. It is
built around Excel worksheets located within workbooks, one
for each sub-system. Parameter values determined in one sub-
system (workbook) can be shared with the system or other sub-
systems. ICEMaker provides a distributed work environment
that supports parameter trade studies. (The name “X-Change”
is intended to denote the “exchange” of information necessary
during trade studies, the desire to support extreme collaboration
teams, e.g. Team X, and to facilitate “change” during the trade
studies.) It is limited to the team
being linked to a single server and can only address a single
alternative at any time (the database is flat, a set of parameter
values on a worksheet). Further, it has no mechanism for
supporting uncertainty and risk, and cannot support the
decision-making process. X-Change, on the other hand, extends
the SoS support of ICEMaker and adds the decision and risk
support capabilities of Accord and RUBIC. As shown in
Figure 2, X-Change allows the team to model an SoS as a
recursive set of systems and sub-systems. For each system node
there can be many different alternatives that are being
considered. Each of these alternatives is characterized by
whatever uncertain values exist for the parameters used to
describe the system. These values are based on simulations of
varying fidelity and expert knowledge.
One activity is to choose among the alternatives at each
system node, based on the uncertain, incomplete, and evolving
information. X-Change can then help the experts responsible
for each system become aware of the risks associated with
their choice. This, in turn, helps make decisions about
configurations of alternatives for the SoS further up in the system
tree. Often these decisions require tradeoffs between systems
to find the best possible alternatives and their risks. In an ideal
world, all of this could be represented by analytical models of
the systems and their interactions, and optimization methods
could then be used to find the best configurations; in practice,
such complete models rarely exist early in design. The recursive
nature of X-Change allows the team to build the system of
systems as the ideas mature either bottom-up or top-down or
both. Once the SoS structure is built in X-Change and the
knowledge uncertainties are captured from all responsible
designers and system-level managers, RUBIC is used as an
auxiliary risk analysis tool to interpret the overall risk of the
system based on a database of historical failures. As shown in
Figure 3, the RUBIC-generated risk analysis result is then
treated as an additional ‘expert’ in X-Change and fused with
the opinion of other human experts.
In the following section, a case study is used to
demonstrate the application of X-Change in a concurrent
engineering environment.
4. CASE STUDY: DESIGN OF KATYSAT
To understand and then demonstrate how design decision-
making could be improved through using X-Change, we
collected data from an actual design team, analyzed the design
decisions encountered, and recast some of those decisions in X-
Change’s framework. For our study, we followed a project
team in a Stanford University graduate student spacecraft
design class. The class project provides us with a sufficiently
complex though still manageable design problem that reflects
the types of problems that would show up in any concurrent
engineering design problem. This project team is particularly
appropriate because the students learn a process that has many
similarities with the NASA design process, including the
engineering disciplines required, the design process followed,
and the requirements to build a system that actually flies. In this
section, we describe the design project, how we collected and
analyzed data, and how we would present the same (or similar)
design problem in X-Change.
4.1 Satellite design problem
In the Stanford class, project teams develop and build the
hardware and software for a self-conceived mission concept.
Some of the students are engineering graduate students, but
most are professional engineers from a local aerospace
company. For four months, we followed the progress of one of
the teams as they developed a satellite system called KatySat.
KatySat’s mission is to provide a satellite system that can be
operated by high school students at locations around the world.
The high school student “users” use the internet to
communicate with the satellite via a Stanford University
ground station, or more directly using ham band radios and
handheld antennas at their school. Communications include
sending commands to the satellite, receiving satellite telemetry,
and uploading and downloading multimedia files which
constitute the satellite’s virtual payload.
The nine-person design team has nine months to design
and build an entire mission system, including the satellite
hardware and software, the mission operations architecture and
user software applications. The satellite itself is based on a
Cubesat standard, a type of picosatellite, weighing less than one
kilogram and designed to fit in a cubic structure that is 10 cm
on a side (for a description of picosatellite standards, see Heidt
et al. 2000). Figure 4 shows a typical cubesat design from
another project. The structure houses an onboard computer, a
power distribution system, batteries, several radios, a passive
attitude determination system, health and environmental
sensors and a camera. Solar arrays mounted to the outside will
power the system. KatySat will communicate with ground
stations through two different links: high-bandwidth links to a
6 m satellite dish located at Stanford University, and low-
bandwidth links to hand-held Yagi antennas operated by high
school users at their schools. Not merely a paper design, this
satellite will launch on a Dnepr launch system and is expected
to operate for several years.

Figure 2: The proposed framework is hierarchical and recursive.
A Configuration (C) is made up of Alternatives (A) from the
sub-systems (e.g., C1 = A11 + A12), subject to customer/global,
local, and coupled constraints.

Figure 3: Fusing data in X-Change (RUBIC is treated as another
expert member of the team). Expert opinions from subsystem
experts are fused with functional-failure analysis data retrieved
from a knowledgebase.

Figure 4: Cubesat (CalPoly CP-1)
The KatySat team decomposed the system into multiple
subsystems, and one team member led the design for each
subsystem. The main satellite hardware subsystems are the
Power system, the Command and Data Handling (CDH)
system, the Payload Communications (Comm) system, and the
Systems Integrations (SI) subsystem. The SI subsystem is a
catchall of design and analysis tasks including structural design
and attitude determination, as well as orbit, thermal and
environmental analysis. The software-based subsystems are the
Ground Systems Architecture (which includes the mission
operations and payload user applications) and the System
Status and Health Analysis System tools.
4.2 Methodology and information captured
During the study we visited and observed class sessions
either weekly or biweekly. Our sources included informal notes
taken during observations, interviews with the students and
teacher, design documents and design review presentations,
project blogs and websites and selected emails. In this section,
we demonstrate the application of X-Change methodology
using this benchmark design problem.
Our objective when collecting data was to capture the
types of information that could be of benefit to the designers’
decision-making process. In particular, we looked for
subsystem design decision points in which the designers were
considering a set of competing alternatives. For each decision
point, we recorded the decision context, results, and how the
decisions would affect other subsystems. We divided this
information into a number of decision elements, as summarized
in Table 2.
Table 2. Decision elements recorded for each design step.

Design problem: A textual description of the design context under which the decision is made.
Alternatives: A list of the alternatives under consideration.
Criteria: A list of the parameters used as criteria to select the best alternative. The information we collected included estimated values and relative importance. For instance, if complexity is a criterion, then we also recorded the estimated complexity of each alternative on a numeric scale, and the importance that complexity had relative to other criteria.
Criteria target: The target values for the criteria.
Design parameters affected: A list of design parameters, e.g., radio power consumption rate, that would be affected by this decision.
Subsystem interactions: A list of the subsystems that would be affected by the decision outcome.
Outcome: The alternative that was finally selected.
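The decision elements of Table 2 map naturally onto a simple record type; the sketch below uses field names and example values of our own choosing:

```python
from dataclasses import dataclass

@dataclass
class DecisionPoint:
    """One recorded design decision, following the elements of Table 2."""
    design_problem: str
    alternatives: list            # options under consideration
    criteria: dict                # criterion -> (estimates per alternative, weight)
    criteria_targets: dict        # criterion -> target value
    design_parameters: list       # parameters affected by the decision
    subsystem_interactions: list  # subsystems affected by the outcome
    outcome: str = ""             # alternative finally selected, if decided

# Hypothetical record for a low rate radio (LRR) selection:
lrr_choice = DecisionPoint(
    design_problem="Select a low rate radio (LRR) for KatySat",
    alternatives=["Stensat", "SbaraSat"],
    criteria={"cost": ({"Stensat": 500, "SbaraSat": 450}, 0.4)},
    criteria_targets={"cost": 250},
    design_parameters=["radio power consumption rate"],
    subsystem_interactions=["Power", "CDH"],
)
```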
Because we observed the KatySat project from the early
conceptual design phase through the detailed design and testing
phase, we were able to capture a variety of decision points,
ranging from the choice of systems architecture, to the selection
of components.
Periodically, we reviewed the data and attempted to
understand how X-Change could be applied to KatySat’s
decision problems. As is to be expected, we were not always
able to capture all of the information associated with the
systems and subsystems designs because the project we studied
was ongoing, the process was informal, and not all decisions
had clear cut outcomes. Further, the engineers themselves could
not necessarily explain all of the rationales that went into their
decisions nor all of the design parameters that their decisions
influenced. In order to formalize the design problems so that
they could be handled by X-Change, we used our own expert
knowledge in engineering, the design process and decision-
making to infer the missing information.
4.3 KatySat problem presented in X-Change
In this section, we show how we recast one of the Payload
Communications system design problems as an X-Change
problem. There are many alternative configurations for the
Communication system and each configuration must include
the following components: a high rate radio uplink/downlink
(HRR), a low rate radio uplink/downlink (LRR) and a terminal
node controller (TNC). Further, each of these components has
several alternatives. Figure 5 shows the worksheet for one
LRR alternative, the “Stensat”. Parameters that describe
important selection criteria for, and the performance of, LRRs
are listed along with their units. For measurable parameters,
units are given in the traditional manner; for qualitative
parameters, they are either “Y/N” or “1-5”. For each parameter,
there are locations for evaluation results and goals.
Figure 5: Example screenshot for one alternative of the low rate
radio uplink/downlink (LRR)
In Figure 5, the “goals” are the target values for the
parameters that act as constraints on the system. For the goals,
two values can be noted, the specific target (the traditional goal
state) and an unacceptable value (which is optional). Although
best practices encourage defining a single target, reality shows
that sometimes these need to be compromised to actually find a
satisfactory solution. By defining two goal values up front, we
can model the uncertainty in what is actually acceptable as a
simple two-point linear utility curve. For example, any cost
less than $250 has full utility, any cost over $1000 has no
utility, and any value in between has a proportional utility. The
higher value includes the “reserve”.
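The two-point linear utility curve described above can be sketched as a small function (the name and signature are ours; thresholds from the cost example):

```python
def utility(value, target, unacceptable):
    """Two-point linear utility: full utility at or better than `target`,
    zero at or beyond `unacceptable`, and linear in between.  Works in
    either direction (lower-is-better, as with cost, or higher-is-better).
    """
    if target == unacceptable:
        raise ValueError("target and unacceptable must differ")
    # Normalize so 0 maps to the target (best) and 1 to unacceptable (worst).
    t = (value - target) / (unacceptable - target)
    return max(0.0, min(1.0, 1.0 - t))

# Cost example from the text: full utility below $250, none above $1000.
print(utility(250, 250, 1000))   # 1.0
print(utility(625, 250, 1000))   # 0.5
print(utility(1000, 250, 1000))  # 0.0
```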
Where engineers traditionally develop single evaluation
values, X-Change allows them to also enter their evaluation
uncertainty. Specifically, for Stensat, the cost is estimated at
$500, but this is early in the process and it may cost as much as
$600 or as little as $400. X-Change treats these as three points
on a beta distribution (mean and ± 3 sigma). This information
supports decision and risk management. For the qualitative
information, the input is simpler as shown in the example.
Qualitative information is entered as either Y/N or on a five
point Likert scale. Besides Stensat (Shown in Figure 5), there
are three other alternatives for the LRR. Each of these has a
similar worksheet, a copy of Stensat’s worksheet except for
unique estimation values. The LRR expert could choose the
best one, but the LRR interacts with the HRR and the TNC and
if the experts for these sub-systems also chose their “best” in
isolation, the Communication system may be much less than
optimum. In fact, with 4 LRR alternatives, 2 HRR alternatives
and 2 TNC alternatives, there are 16 (4x2x2) potential
configurations. However, only a fraction of these are
physically realizable and only a subset of these worth serious
consideration.
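The three-point (low, nominal, high) cost estimate above, read as a mean with the range spanning ± 3 sigma, can be converted to beta-distribution shape parameters by the method of moments. This is our reading of the description, not Accord’s actual implementation:

```python
def beta_from_three_points(low, nominal, high):
    """Convert a (low, nominal, high) estimate into Beta(a, b) shape
    parameters on the interval [low, high], treating `nominal` as the
    mean and (high - low) as a 6-sigma spread.  Assumes `nominal` is
    not too close to the endpoints, so the variance stays feasible.
    """
    mean = (nominal - low) / (high - low)  # mean rescaled to [0, 1]
    sigma = 1.0 / 6.0                      # (high - low)/6, rescaled to [0, 1]
    # Method of moments: mean = a/(a+b), var = mean*(1-mean)/(a+b+1).
    nu = mean * (1.0 - mean) / sigma**2 - 1.0
    return mean * nu, (1.0 - mean) * nu

# Stensat cost estimate: $500 nominal, between $400 and $600.
a, b = beta_from_three_points(400, 500, 600)
print(round(a, 2), round(b, 2))  # symmetric estimate -> a == b
```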
In X-Change, configurations are built by defining a new
worksheet and “adopting” a set of sub-systems (one LRR, one
HRR, and one TNC). The parameters of these component parts
are propagated onto the new configuration worksheet and can
be edited by the Communications manager. Any number of
configurations can be built in this manner. All of the
information put into X-Change is stored in a database housed
on a server. The Bayesian Team Support analytical engine then
helps the team manage tradeoffs and choose the best
configuration in light of the uncertainty and risk.
At any level in the system (e.g. LRR, Communication,
Total KatySat) the alternatives can be evaluated to find out how
well they satisfy the goals and measure the risk in selecting
them. This is accomplished by using the information on the
spreadsheets as input to the Bayesian Team Support analytical
engine. For example, three configurations developed for the
KatySat communications system are: Config 1 = LRR Sten +
TNC SW + HRR 1; Config 2 = LRR Sten + TNC HW + HRR
1; and Config 3 = LRR SbaraSat + TNC SW + HRR 2. For
each there are many parameters, both quantitative and
qualitative, that characterize the configuration. These enable an
estimation of their relative satisfaction and risk.
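As one simplified illustration of how a quantitative parameter could feed such an estimate (this is a stand-in sketch, not the actual Bayesian Team Support method), a Monte Carlo draw from a fitted beta cost model gives the probability that a configuration stays within a budget target; the Beta(4, 4) shape used here is an assumed fit to the symmetric $400/$500/$600 Stensat estimate.

```python
import random

def prob_within_budget(low, high, a, b, budget, n=100_000, seed=0):
    """Monte Carlo estimate of P(cost <= budget) for a cost modeled
    as low + (high - low) * Beta(a, b). A simplified stand-in for
    the tool's analytical engine, not its actual algorithm."""
    rng = random.Random(seed)  # seeded for repeatability
    hits = sum(
        low + (high - low) * rng.betavariate(a, b) <= budget
        for _ in range(n)
    )
    return hits / n

# Stensat cost as Beta(4, 4) on [$400, $600]: chance of staying under $550
p = prob_within_budget(400, 600, 4, 4, 550)
```

Repeating such a calculation for each parameter of each configuration would yield the kind of relative satisfaction and risk comparison the paper describes.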
With this design example, we have shown how a small but
typical design problem can take advantage of X-Change's
capabilities. Whereas most concurrent engineering teams only
come to a single design solution for a system design problem,
X-Change allows engineering teams to consider multiple
configurations and multiple levels in the subsystem hierarchy,
and to rapidly communicate the outcome of decisions. Such a
formal tool might be too “heavy-weight” for a small design
team such as KatySat, but in full-scale, risky NASA missions,
the formalization of design and a more complete understanding
of design uncertainty are necessary to produce more robust
designs.
5. EPILOGUE
In this paper, we developed a new framework for handling
risk and uncertainty in the earlier phases of the design process
when decisions are still being made and reversed at a rapid
pace. The proposed framework, referred to as X-Change, is
currently in the final stages of development, and when mature,
will be targeted at NASA's concurrent engineering
teams during the conceptual mission design phase. We first
provided an ontology for risk and uncertainty during various
stages of the design process. Using the KatySat satellite design
example, we showed a small-scale application of the proposed
approach in a concurrent engineering environment and
demonstrated its ability to capture and communicate risk and
uncertainty at all levels (sub-system, system, system-of-system,
etc.). The proposed decision-making tools will then use this
information to guide the design process and help the engineers
make risk-informed decisions.
6 Copyright © 2005 by ASME