Ryan M. Arlitt²
SUTD-MIT International Design Centre,
Singapore University of Technology and Design,
Singapore 487372
e-mails: arlitt.ryan@gmail.com;
rmarl@mek.dtu.dk
Douglas L. Van Bossuyt³
Department of Systems Engineering,
Naval Postgraduate School,
Monterey, CA 93940
e-mail: douglas.vanbossuyt@nps.edu
A Generative Human-in-the-Loop
Approach for Conceptual Design
Exploration Using Flow Failure
Frequency in Functional Models¹
A challenge systems engineers and designers face when applying system failure risk
assessment methods such as probabilistic risk assessment (PRA) during conceptual
design is their reliance on historical data and behavioral models. This paper presents a
framework for exploring a space of functional models using graph rewriting rules and a
qualitative failure simulation framework that presents information in an intuitive manner
for human-in-the-loop decision-making and human-guided design. An example is pre-
sented wherein a functional model of an electrical power system testbed is iteratively per-
turbed to generate alternatives. The alternative functional models suggest different
approaches to mitigating an emergent system failure vulnerability in the electrical power
system’s heat extraction capability. A preferred functional model configuration that has a
desirable failure flow distribution can then be identified. The method presented here helps
systems designers to better understand where failures propagate through systems and
guides modification of systems functional models to adjust the way in which systems fail
to have more desirable characteristics. [DOI: 10.1115/1.4042913]
Introduction
The design, manufacture, and deployment of complex systems
require extensive investment of personnel, resources, time, and
money to produce systems that meet requirements [1,2]. Schedule
and cost overruns are common on large systems such as aircraft,
spacecraft, power plants, ships, and other systems [3]. A significant percentage of schedule and cost overruns, and of reduced system capabilities relative to original requirements documents, can be traced back to architectural decisions made during the conceptual phase of system design [4]. Architectural decisions made with incorrect or missing information, or with high degrees of uncertainty in the underlying data, can lead to cost increases and schedule slips [5]. As a result, it is important that architectural decisions
are made with good, complete information to increase the likeli-
hood of systems being delivered on time, on budget, and meeting
requirements.
Of particular interest to this research is how potential system
failures are assessed and acted upon during the conceptual phase
of system design. Common techniques of identifying failure risks
and then mitigating them such as failure mode and effects analysis
[6] and probabilistic risk assessment (PRA) [7,8] can miss emer-
gent system behaviors and, while some information is provided to
designers to aid in decision-making, little guidance is given on
specific flow impacts due to failure events. Extensive work has
been done to understand failure paths from a component and/or
functional basis [9–16] but comparatively little effort has been
expended in looking at flows of material, energy, and data through
systems, and how their disruption or failure can impact overall
system failure.
Specific Contributions. This paper contributes a method to
identify functional models that have a desirable distribution of
flow failure events across a large space of failure scenarios. The
method identifies flows that are most often associated with fail-
ure events and automatically explores a variety of potential alter-
native functional models to identify models that have lower flow
failure concentrations. Visualizations of these alternatives are
presented to the user, allowing quick iteration of functional
architectures in the context of limited embodiment information.
This contribution arises from the combination of a generative
approach for building functional models and an evaluation
approach that qualitatively simulates the failure performance of
each functional model.
Related Work
This work contributes a concept exploration method grounded
in the historical behaviors and failures of similar systems. This
section describes relevant past work upon which this method is
built, in areas including conceptual design, risk and reliability
analysis, and computational support for these activities.
Conceptual Design. Within the conceptual phase of design,
there are several distinct steps: (1) ideation, (2) early system architecture studies, and (3) system modeling and trade studies [17]. During the last step of conceptual design, high-level and
black box models produced in the previous step are refined into
subsystem, functional, and component models [18]. A variety of
modeling techniques and methods are commonly used to help
²Present address: Department of Mechanical Engineering, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark.
³Corresponding author.
¹An earlier version of this article was presented at the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Quebec City, Quebec, Canada, Aug. 26–29, 2018, as Paper No. DETC2018-85490.
Contributed by the Computers and Information Division of ASME for publication
in the JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING. Manuscript
received August 26, 2018; final manuscript received February 14, 2019; published
online March 18, 2019. Assoc. Editor: Jitesh H. Panchal.
This work is in part a work of the U.S. Government. ASME disclaims all interest
in the U.S. Government’s contributions.
Copyright © 2019 by ASME
make informed decisions based on trade studies such as functional
models; risk, reliability, failure, availability, and robustness mod-
els; and other related modeling and assessment methods
[6,19–25]. These design decisions directly impact later subsystem
and component design, and if made incorrectly due to a lack of
information or a misunderstanding of the fundamental nature of
the system’s design, significant rework and redesign costs can be
incurred [26,27]. Timely information on which to base design
decisions is critical for the delivery of an on-time and on-budget
system that performs as intended [20,25].
Functional and Flow Modeling. A number of modeling para-
digms exist to model systems during conceptual design [28,29].
Of particular interest to this research, functional and flow methods
of modeling systems during the conceptual phase of design can be
used to help free engineers from component considerations and allow more creativity in finding new system design solutions
[18]. While there are many different functional and flow taxono-
mies and grammars [30–60], this research uses the functional
basis for engineering design taxonomy [18] (herein referred to as
FB) to represent functions and flows within systems. The FB tax-
onomy abstracts functions and flows from the physical compo-
nents and transported material, energy, or data that they represent.
Of particular value is the potential for simulating abstract models
constructed using FB, which is possible so long as that model has
(1) topological consistency and (2) conservation of material and
energy [61].
The functional basis has several attributes that make it attrac-
tive in this context. First, its abstract mixture of human-
interpretable and physics-based language makes it suitable for
both simulation and feedback. Second, FB has large and growing
popularity in the design methodology community, as evidenced
by over 1200 citations of the original FB article [18]. This is
important because (1) there is an existing body of work upon
which to build and (2) a critical mass of adoption is useful when model libraries are involved. However, many other func-
tional and flow taxonomies and grammars may also be used.
Engineers and designers working on different projects and differ-
ent systems within different organizations may find benefits and
drawbacks to specific functional and flow taxonomies and gram-
mars. Reviewing potential benefits and drawbacks of functional
and flow taxonomies and grammars is beyond the scope of this
article.
Conceptual to Component Design. Grammar rules have been
developed to aid designers and automated design tools in identify-
ing conceptual design configurations that are likely to be realiz-
able in physical component design [6264]. Helms and Shea [65]
prescribe a general approach for synthesis of product architectures
using the function behavior structure framework [66]. This model
supports synthesis of component architectures from a functional
model, and makes explicit the need for simulation and evaluation
to close the synthesis loop. Similarly, Kurtoglu and Campbell
developed grammar rules to convert functional models into
component-level configuration flow graphs [62]. More specific to
the domain of functional architectures, Sridharan and Campbell
[63] generated 69 grammar rules from 30 products located in the
design repository [67] to create a framework for generating func-
tional models.
It should be noted that there is significant heterogeneity of mod-
eling languages in which grammars are implemented. For this
research, the selection of the FB modeling taxonomy was inten-
tional. Not only is FB a functional description with high general-
ity, but there exist several computational tools for evaluating FB models, as required to close the computational design synthesis loop. The recent development of several simulation
approaches to evaluate failures in functional models [10,11,68]
enables a new generative design loop for examining reliability of
functional architectures.
Decision Support Tools. Evaluation methods and decision
support tools have been developed to aid systems designers to
make conceptual architectural decisions. These methods and
tools can be broadly categorized as: simulation-function,
simulation-component, expert knowledge and experience, and
historical function/component. A high-level review of tools
useful for failure analysis and related analysis techniques that
fall within the four categories listed previously is provided
below.
Simulation-Function. Within the simulation-function cate-
gory, the function failure identification and propagation (FFIP)
method and related flow state logic method identify potential
failure flow pathways through a functional model [9,10]. In the
context of this work, failure flow is defined as a flow that either
is unexpectedly present or a flow that is unexpectedly absent.
The inherent behavior in functional models (IBFM) framework extends FFIP with a pseudo time-step and the ability to generate multiple functional models to drive toward a solution that balances the cost and risk of a system [16,68,69]. A num-
ber of other risk and failure analysis tools have been developed
from FFIP including the uncoupled failure flow state reasoner
[11,70], a method of building prognostic systems in response to
failure modeling [12], and other related methods and tools
[13,14,71–73]. Several tools for ontology-driven metamodeling
and early conceptual design down-selection were produced as
part of the Defense Advanced Research Projects Agency (DARPA) adaptive vehicle make project [74–76]. While these
methods are useful for identifying and understanding failure
sources within a system, they generally lack the ability to iden-
tify specific flow paths that are more often implicated in potential
system failure events.
Simulation-Component. Several simulation-component meth-
ods exist including the reliability block diagram method [77]
widely used in industry and a method developed by O’Halloran
et al. that simulates component performance at varying levels of model fidelity [78]. While these types of
methods are useful for understanding reliability of a system and
O’Halloran’s method is useful for simulating expected system
performance, both rely upon historical data. This limits the ability
of this class of method to identify emergent system behaviors.
Further, little guidance is provided by the results of these methods
to identify specific flows within the system that are at higher risk
of failure.
Expert Knowledge. Expert knowledge and experience play a
large role in several methods that are important to industry. Fail-
ure mode and effects analysis [6] and the related failure modes,
effects, and criticality analysis [79] use expert knowledge and sys-
tem experience to identify and understand potential failure scenar-
ios within a proposed system. Expert elicitation is often used in
producing fever charts and other graphical representations of risk
within a system [80]. Expert knowledge and experience methods
in general do not adequately capture potential emergent system
behaviors—especially complex failure events.
Historical Function/Component. Several methods have
examined the link between historical performance of functions
and components, and their expected behavior in new systems. The
function failure design method [81] provides a matrix-based
approach to linking a function to potential component solution
failure modes. The risk in early design method [82] connects his-
torical risk information to ongoing design efforts and provides a
fever chart view for ease of understanding by novice risk analysts.
While these methods do well at identifying historical failure infor-
mation on a functional level, they do not adequately uncover
emergent system behaviors.
Risk and Failure Analysis. Many other methods of failure and
risk analysis exist that can help system designers to make risk and
failure-informed architectural decisions during conceptual design.
PRA combines fault tree analysis and event tree analysis [7,8]
with an analysis of potential initiating events that can lead to
failure [83]. The nuclear industry heavily uses PRA to identify
potential emergent system behaviors and ensure safety of nuclear
power plants [84]. A popular method of identifying potential fail-
ures uses Markov chains that are built to model state transitions in
a system where probabilities of state transitions are known or can
be assumed. The Markov chains are then randomly walked using
Monte Carlo sampling to determine the probability of being in
each state [85–88]. The Markov chain Monte Carlo sampling
approach is especially applicable in the PRA context (e.g.,
Ref. [89]) because of its relative efficiency of approximating
Bayesian posteriors. The method presented in this paper differs in
that the failure simulation is deterministic for a large set of differ-
ent state spaces. Repetition of this simulation on a single functional model occurs only by sampling from different combinations of
initiating failure events.
Relevant Simulation Details. Given that the method presented
in this paper is intended to facilitate exploration over a population
of graphs, some heuristics are necessary to combat combinatorial
explosion. Subsampling a representative space achieves this goal,
but requires a method to calculate graph similarity prior to evalua-
tion. Graph similarity algorithms can be classified as edit distance,
feature extraction, and iterative [90]. Feature extraction is selected
here due to simplicity of implementation, speed of evaluation, and
existing evidence for a correlation between graph-level features
(e.g., diameter and node degree) and system-level reliability (e.g.,
Refs. [91] and [92]). Additionally, the bag-of-functions feature
approach has been successfully used to measure similarity
between functional models [93,94].
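As a minimal sketch of the bag-of-functions approach (assuming each model is reduced to a list of FB function labels; the labels below are illustrative):

from collections import Counter

def bag_of_functions(function_labels):
    """Bag-of-functions feature vector: counts of each FB function type."""
    return Counter(function_labels)

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-functions vectors."""
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    norm_a = sum(v * v for v in a.values()) ** 0.5
    norm_b = sum(v * v for v in b.values()) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Two hypothetical models that share most of their functions
model_a = bag_of_functions(["import", "actuate", "convert", "export"])
model_b = bag_of_functions(["import", "actuate", "actuate", "export"])
print(cosine_similarity(model_a, model_b))  # higher for more similar models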
In the area of software debugging with model checking, one
common strategy is to validate an abstraction of values, states,
and transitions [95]. This type of model checking is in many ways
analogous to the approach presented in this paper. While both exe-
cute abstractions of the system to search for issues, the method
presented in this paper combines a formalism for abstracting and
simulating complex systems with a means to search the design
space.
In summary, the conceptual phase of the systems engineering
design process provides systems designers with an opportunity to
make significant architectural decisions that can drastically impact
the outcome of the design process and the performance of the sys-
tem. A variety of tools and methods are available to help support
engineers in making informed decisions during the conceptual
phase. Many such tools and methods rely on functional modeling
techniques and a number of methods exist to analyze failure
within this context. However, none of the existing methods sur-
veyed is able to directly assess failures from a flow perspective
over a space of related functional models and use that information
to help make architectural decisions.
Methodology
The method presented below is specifically intended for use
during the conceptual phase of design when architectural deci-
sions are being made and the design has not been finalized. The
method’s inputs include a single functional model from the user, a
library of IBFM [15,68,69] simulation components, and (option-
ally) a specification of each IBFM state’s probability to serve as
an initiating failure event. The method’s output is a visualization
of several alternative functional models and the vulnerability of
each flow therein to failures. Figure 1 graphically depicts the
method. The first three steps are preparatory steps to develop a
functional model of the system, develop the IBFM simulation,
and specify probabilities of failures. The next five steps are the
core of the methodology where new functional models are
automatically generated, validated, simulated, evaluated, and then
the process is iterated to create additional new child populations
of functional models. The final step in the methodology is used to
select the most desirable functional model to proceed forward
with in the design process.
Develop Functional Model. The first step is for the designer to
create a functional model for the system of interest. This model
takes the form of a directed graph where nodes are typed accord-
ing to the functions they represent and edges are typed according
to flows the flows they represent. This model will be used as a
seed to begin the process of analyzing failure flows.
Develop Inherent Behavior in Functional Models Simulation. Given a seed model, an IBFM simulation is prepared [15].
This simulation must capture the designer’s abstract knowledge
about the system. This includes the following:
(1) Functions, including the operational modes and mode tran-
sition conditions applicable to each.
(2) Flows.
(3) Modes and the flow behaviors associated with each.
(4) Conditions and the flow state behavior associated with triggering them.

Fig. 1 The method presented here includes nine distinct steps, as shown in this graphic
Given these elements, IBFM enables qualitative simulation of
the functional model. More details about IBFM can be found in
Ref. [15].
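The sketch below shows one hypothetical way to organize these four elements as data structures; it is illustrative only and does not reproduce the actual IBFM API:

from dataclasses import dataclass, field

@dataclass
class Mode:
    """An operational or failed mode and its qualitative flow behaviors."""
    name: str                                            # e.g., "nominal", "failed_open"
    flow_behaviors: dict = field(default_factory=dict)   # flow type -> effect

@dataclass
class Condition:
    """A flow state that triggers a mode transition when observed."""
    trigger: str                                         # e.g., "electrical_energy is high"
    next_mode: str

@dataclass
class Function:
    """A function with its modes and mode-transition conditions."""
    name: str
    modes: list
    conditions: list

# A switch function that fails open under a high-current flow state
switch = Function(
    name="actuate_EE",
    modes=[Mode("nominal", {"electrical_energy": "pass"}),
           Mode("failed_open", {"electrical_energy": "zero"})],
    conditions=[Condition("electrical_energy is high", "failed_open")],
)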
Specify Probabilities. The method presented in this paper can
be performed with either internal initiating events caused by failed
modes of functions within the system or by external events that
occur outside of the system boundary and propagate into the sys-
tem as failure flows. The case study below uses internal initiating
events as a demonstration.
For internal initiating events, each failed mode of each function
is treated as equally likely to occur as the default approach.
However, if a probability of occurrence is known for an internal
initiating event, then that probability is used instead. With exter-
nal initiating events, the authors recommend only using probabilities that are grounded in realistic data. When not using
probabilities specific to a function’s failed state, the frequency of
occurrence of failure flows associated with each flow can be ascer-
tained on a normalized basis. With specific probabilities available,
these frequencies can be weighted according to their expected
likelihood.
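A minimal sketch of this weighting scheme, assuming per-flow lists of the initiating events under which each flow failed (the flow and event names are placeholders; with no probabilities supplied, all initiating events are weighted equally):

def flow_failure_frequencies(flow_outcomes, event_probabilities=None):
    """Weighted, normalized frequency of failure for each flow.

    flow_outcomes maps flow -> list of initiating events under which
    that flow failed during simulation.
    """
    freq = {}
    for flow, events in flow_outcomes.items():
        if event_probabilities is None:
            freq[flow] = float(len(events))          # uniform weighting
        else:
            freq[flow] = sum(event_probabilities[e] for e in events)
    total = sum(freq.values()) or 1.0
    return {flow: f / total for flow, f in freq.items()}

# Example: two flows observed failing under hypothetical initiating events
print(flow_failure_frequencies({
    "electrical_energy_1": ["switch_failed_open", "inverter_failed"],
    "mechanical_energy_1": ["fan_seized"],
}))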
Automatically Generate Similar Functional Models. Using the designer's functional model as a seed, the method automatically generates locally similar functional models according to a limited set of
graph grammar rules (e.g., Table 1). These grammars perturb the
model by removing functions and by re-inserting functions that
are already present—new functionality is not added. The result is
a means to generate different functional architectures while pre-
serving the gist of the design intent. These grammars must be
capable of both adding and removing elements, and must conform
to topological consistency and conservation rules for FB.
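As an example, a naive sketch of the "add series" rule from Table 1, operating on the networkx-style seed model shown earlier:

import random
import networkx as nx

def add_series(model: nx.DiGraph) -> nx.DiGraph:
    """Insert a copy of a randomly chosen function in series,
    connected by that function's own flow type (Table 1, 'Add series')."""
    child = model.copy()
    node = random.choice(list(child.nodes))
    successors = list(child.successors(node))
    if not successors:
        return child                      # nothing downstream to reroute
    succ = successors[0]
    flow = child.edges[node, succ]["flow"]
    copy_name = f"{node}_copy"
    child.add_node(copy_name, **child.nodes[node])  # same function type
    child.remove_edge(node, succ)
    child.add_edge(node, copy_name, flow=flow)
    child.add_edge(copy_name, succ, flow=flow)
    return child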
Validate Automatically Generated Functional Models. For a
functional model to be simulatable, two main requirements must
be met: (1) conservation of mass and energy, and (2) each func-
tion’s inputs and outputs must be consistent with established
semantics [61]. This can be done at generation time through care-
ful construction of grammars, or naively by iteratively discarding
noncompliant models and then generating replacements. Active
model checking requires software that captures the two
requirements—like that developed in Ref. [61].
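A naive sketch of the discard-and-replace strategy is shown below; the semantics table is a placeholder, and a full implementation would also check conservation of material and energy as in Ref. [61]:

# Placeholder semantics: allowed input/output flow types per function verb
FB_SEMANTICS = {
    "supply":  {"in": set(),                 "out": {"electrical_energy"}},
    "actuate": {"in": {"electrical_energy"}, "out": {"electrical_energy"}},
    "convert": {"in": {"electrical_energy"}, "out": {"mechanical_energy"}},
}

def is_valid(model) -> bool:
    """Discard models whose functions carry flow types outside the
    established semantics for their verbs."""
    for node, data in model.nodes(data=True):
        rule = FB_SEMANTICS.get(data.get("function"))
        if rule is None:
            return False
        ins = {model.edges[u, v]["flow"] for u, v in model.in_edges(node)}
        outs = {model.edges[u, v]["flow"] for u, v in model.out_edges(node)}
        if not ins <= rule["in"] or not outs <= rule["out"]:
            return False
    return True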
Run Simulation on Each Functional Model. Next, each
model in the population is simulated using IBFM. By default, an
IBFM experiment runs simulations using every possible failure
state as an initiating event. Scenarios are then run for all paired
combinations of simultaneous initiating events, and the number of
simultaneous events increases until a prescribed cutoff. The
failure rate of each flow in the model is captured as described in
Algorithm 1.
Depending on the available computing power, this simulation
can be repeated with valid combinations of multiple initiating
events. While here it is recommended to characterize each model
according to its most vulnerable edge max(F), other performance
measures can be used (e.g., the mean and variance of the edge
failure frequency distribution).
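In code, the scoring loop of Algorithm 1 might be sketched as follows, assuming a simulate(model, scenario) helper that returns the set of flow edges failed at steady state (this is a sketch, not the actual IBFM interface):

from collections import defaultdict

def score_population(population, scenarios, event_probability, simulate):
    """Algorithm 1 sketch: characterize each model by max(F), the
    weighted failure count of its most vulnerable flow edge."""
    scores = {}
    for model_id, model in population.items():
        failure_counts = defaultdict(float)   # F: flow edge -> failure count
        for scenario in scenarios:
            for edge in simulate(model, scenario):
                # Weight by the probability of the initiating event(s)
                failure_counts[edge] += event_probability.get(scenario, 1.0)
        scores[model_id] = max(failure_counts.values(), default=0.0)
    return scores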
Iterate Best Performing Models. Iteration consists of two
steps: (1) selecting a parent population and (2) generating a child
population.
A diverse parent population of models is sampled from this local space using roulette wheel selection (with replacement) and a performance measure that linearly combines resiliency R (defined as the ability of the system to continue to function in spite of failure events occurring) and uniqueness U (as proposed in Eq. (1)). A model's resiliency R is normalized to the maximum resiliency in the population, R_population_max. A model's uniqueness U can be quantified by applying a clustering algorithm such as density-based spatial clustering of applications with noise [96] and then taking the inverse of the number of total models in that model's cluster. A full pairwise distance matrix between models is needed to support this clustering and can be generated from the graph feature representation using cosine distance. A weighting factor k between 0 and 1 captures preference for resiliency versus uniqueness:

p_{\mathrm{selection}} = k \, \frac{R}{R_{\mathrm{population\_max}}} + (1 - k)\, U \qquad (1)
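A sketch of this selection probability, assuming scikit-learn's DBSCAN over precomputed cosine distances between bag-of-functions feature vectors (the eps radius is an illustrative choice):

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import DBSCAN

def selection_probabilities(R, features, k=0.5, eps=0.3):
    """Eq. (1): p = k * R / R_population_max + (1 - k) * U,
    with R an array of resiliency scores per model."""
    distances = squareform(pdist(features, metric="cosine"))
    labels = DBSCAN(eps=eps, min_samples=1,
                    metric="precomputed").fit(distances).labels_
    # Uniqueness U: inverse of the size of each model's cluster
    U = 1.0 / np.bincount(labels)[labels]
    p = k * (R / R.max()) + (1 - k) * U
    return p / p.sum()   # normalized for roulette wheel sampling

# Roulette wheel selection (with replacement) of the parent population:
# parents = np.random.choice(len(R), size=len(R),
#                            p=selection_probabilities(R, features))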
Next, a child population is generated by applying one randomly
selected grammar rule to each parent in a randomly selected loca-
tion. If a branching factor greater than 1 is applied, the process
closely resembles breadth first tree search. If so, pruning the child
Table 1 Naive generative grammar language

Rule: Add parallel path. Recognize: any two edges on the graph with a valid connecting path. Apply: add a parallel copy of the shortest path between those edges.

Rule: Add parallel subgraph. Recognize: any two edges on the graph with a valid connecting path. Apply: perform "Add Parallel Path" for all paths in between those edges; propagate the copy forward and backward to satisfy conservation of mass and energy.

Rule: Add series. Recognize: any function. Apply: insert a copy of the function in series, connected by the function's own flow type.

Rule: Remove node. Recognize: any function. Apply: remove that function and connected flows; repeat on nodes that fail a validation check until the model is valid or empty.
Algorithm 1 Functional model population simulation process

1: for each model M in the population do
2:   Initialize a zero vector of failure counts F to capture the failure frequencies of all flow edges in the model
3:   Generate a list of scenarios S containing initiating events and their corresponding nodes
4:   for each specified scenario S do
5:     Simulate M under conditions of S until the model reaches steady-state
6:     for each failed edge in the resulting M do
7:       Increment its total failure count in F, normalized by the probability of the initiating event
8:     end for
9:   end for
10:  Take max(F) to describe this model's resiliency
11: end for
population back to the initial population size after simulation miti-
gates combinatorial explosion of IBFM simulations. This process
is visualized in Fig. 2.
Stop Iteration After Performance Metrics Have Been Met.
The steps of generating models, simulating their performance, and
iterating are repeated until stopping criteria are met.
Two parameters capture the stopping criteria: the first dictates an acceptable level of uniqueness U specified by the user. The second dictates a performance threshold (in this case model resiliency R is quantified by the model's most vulnerable edge). When there exists a set of N models (where N is user-specified) in the most recent generation where all N models exceed the performance threshold and the uniqueness threshold, the process stops.
Given that the population size is held constant, it is feasible to
quantify the uniqueness of each model via clustering on the full
pairwise comparison matrix using vector space similarity meas-
ures (e.g., cosine similarity) and the child’s lineage. An alternative
approach for large populations of constant size halts the search
when the explained variance ratio of the principal component
analysis of the data set’s feature representation dips below a given
threshold.
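A sketch of this alternative stopping rule using scikit-learn's PCA (the component count and threshold are illustrative assumptions):

import numpy as np
from sklearn.decomposition import PCA

def search_has_converged(features, n_components=2, threshold=0.6):
    """Halt when the leading principal components of the population's
    feature matrix explain less than `threshold` of the variance."""
    pca = PCA(n_components=n_components).fit(np.asarray(features, dtype=float))
    return float(pca.explained_variance_ratio_.sum()) < threshold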
Assess Final Population of Functional Models. After the
stopping criteria are met, a subset of models is selected from the
full history of generated models. These models are selected to
possess (1) high or low resiliency as desired and (2) high unique-
ness with respect to each other. Because the relationship between
a functional model and its performance in simulation is ambigu-
ous, it is valuable to allow the user to decide between a variety of
local optima. The purpose of using uniqueness to draw a final pop-
ulation of models is to provide samples from different localities in
the final results, as these are more likely to solve the problem in
meaningfully different ways. This enables a user to choose the
best architecture strategy to fit the design context.
When visualizing the resulting models, the rates at which flows
failed are indicated by both thickness and color of the edges. The
functionality to show both good and bad examples is motivated by
conceptual design exploration tools like morphological evaluation
machine and interaction conceptualizer [97], which provides crea-
tive stimulus by showing both highly common and highly uncom-
mon component configurations to match a given functional
model. Given this stimulus, the designer can assess which topol-
ogy to pursue and iterate upon, or draw inspiration to make tweaks
to the functional model.
Any number of methods can be used for determining unique-
ness U, though all but the most naive will rely on some means of
clustering the final population. This may include straightforward
clustering (e.g., k-means), projection of the bag-of-features repre-
sentation into lower dimensions (e.g., principle component analy-
sis), or sampling from far-apart sections of the search tree
according to each model’s lineage.
Case Study
This section contains an illustrative case study based on a real-
world system to demonstrate the workings of the method pre-
sented previously. It should be noted that the example, while
based on a real, physically embodied system that has significant
heritage and pedigree as a research platform and is relevant to real
hardware flown on the space shuttle and future American crewed
spacecraft, has been intentionally fictionalized. In specific, the
functional model has been simplified and the failure results, while
representative, are not exhaustive. No claim to accuracy is made.
The results of the case study are illustrative of the method’s
capabilities but cannot be taken as evidence of how to design the
specific system presented below without further expansion, refine-
ment, and verification of the analysis. The authors explicitly state
that no real-world design decisions should be made using the
information presented here without doing an appropriate, com-
plete, and sufficiently detailed analysis. The case study presented
here is for demonstration purposes only.
The following case study demonstrates the mechanism of the
method on a simplified functional model of the advanced diagnostics and prognostics testbed electrical power system [98], which was designed to be analogous to power systems found on
the space shuttle and future crewed American spacecraft. Various
model descriptions of this system have been used in prior work to
demonstrate failure simulation in conceptual design for FFIP [9]
and IBFM [15]. In general, the model consists of a battery, an
inverter, and three loads—a fan, a pump, and an indicator light.
The model also contains a switch and several breakers. The func-
tionality of this system—which is used as a seed model—is cap-
tured in Fig. 3. The remainder of this section will address the
question, “in what ways might we redesign the functional archi-
tecture of this system to improve system reliability?”
For this example, the IBFM simulation is specified as in
Ref. [15], and failure mode probabilities are assumed to be
equal—analogous to a noninformative prior.
After specifying the seed model to define the local search space,
alternatives are iteratively generated. To facilitate this example, a
simple set of grammar rules is shown in Table 1. A much more
comprehensive and data-driven graph rewriting language for func-
tional models of electromechanical products was presented in
Ref. [63]. Figure 3 shows an application of the rule "add parallel
subgraph” between two randomly selected edges, indicated by the
dashed lines. The backbone of the inserted subgraph is shown via
the same dashed lines. Additional nodes and edges are added to
this new subgraph until the resulting model adheres to conserva-
tion of mass and energy. These additional components are indi-
cated with long dashed lines.
This process is repeated to generate a population of randomly
perturbed models in the local design space. Next, each model in
the population is simulated using IBFM, and a score is calculated
for the performance of each model. Snippets of two failure heat
maps for two generated concepts are shown in Figs. 4 and 5.
These snippets capture the flows with the highest failure rate in
each model. While the model in Fig. 4 would be characterized by its highest flow failure rate of 50, the model in Fig. 5 would be quantified according to its (comparatively better) worst-case flow failure rate of 35.

Fig. 2 Visualization of roulette wheel sampling with branching factor of 1. Generated models expand outward into the search space toward local regions that are potentially interesting (as opposed to optimal). Higher fitness is represented as light, and lower fitness as dark. When the search concludes, results are selected for presentation to the user with respect to performance and global uniqueness.

It should be noted that while this case study
uses low failure rate, medium failure rate, and high failure rate as
generic terms, a real-world analysis performed using the method
would set these terms to numeric values that are appropriate to the
specific system being analyzed and the customer.
Next, candidates from the current population are selected for iter-
ation according to performance and uniqueness, as illustrated in
Fig. 2. While Fig. 4 has poor performance, it may still have a high
probability of selection if it is extremely different from the rest of
the current population. After selection, the next generation is itera-
tively resampled and created until the stopping criteria are met.
Ultimately, a series of varied heat maps as shown in Figs. 4 and 5 are presented to the user. Based on the model in Fig. 4, a user may realize that they need to pursue alternative functions for cooling the inverter function, while the model in Fig. 5 may persuade the user to investigate adding parallel cooling functionality.
Discussion
The method presented in this paper contains several benefits for
practitioners as well as a few open questions on the philosophy of
failure events. This section discusses the benefits and open ques-
tions of the method.
A significant benefit of the method is the ability for systems
engineers to identify functional models that conform to desired
flow failure concentration levels. The systems engineer can drive
model iteration toward either a highly concentrated flow failure
paradigm or a distributed flow failure paradigm. While the case
study mentioned previously demonstrates evolving a model
toward a solution that distributes failure flow concentrations
across the model by adding in redundancy, specific system design
considerations may warrant concentrating failed flows into a few
specific flows. Concentrating failure flow into a few flows may be
beneficial, for instance, if systems engineers are including sacrifi-
cial subsystems [72]. In other situations, it may be beneficial to
spread out failure flows across several redundant subsystems [99].
No other method that the authors are aware of provides practi-
tioners with the ability to easily understand what flow paths fail-
ures preferentially follow as the model changes. As compared to
standard IBFM, this generative method provides insights into how
the distribution of emergent failures changes with subtle shifts in
functional architecture. Additionally, most other function-and-
flow-based methods of failure and risk analysis used during the
conceptual phase of system design are focused on failure of func-
tions. Examining the flows rather than the functions can provide
new insights into which flows are the most likely to be implicated
in failure events. This in turn can lead to systematic design efforts
to mitigate those specific failure flows.
A benefit of the heat mapping of failure flow concentrations is
that emergent failure flow behaviors that otherwise would be
missed can be examined by systems engineers. This may provide
new insights into emergent system behavior that otherwise would
not be available. Emergent system behavior is a significant con-
cern in complex systems and has been implicated in several past
noteworthy failures [100–102].
It should be noted that this is a stochastic design space search
method with a loose definition of optimality. Because the goal of
this method is to facilitate human-in-the-loop exploration of sys-
tem concepts, Pareto optimality (as a function of performance and
uniqueness) is useful only as an approximation. Uniqueness in
particular depends on contextual factors including the designer’s
preferences and the other models in the population. Further,
designers should be aware of the limitations of Arrow's theorem
with respect to multivariable optimization, especially with
human-guided preferences [103].
Changing the underlying probabilities of the functions failing
results in changed failure flow path likelihoods.

Fig. 3 Functional model of electrical power system

This is in line with how "cut set" probabilities in PRA change when the basic
event failure probabilities are modified. It may be useful for prac-
titioners to perform sensitivity studies by varying probabilities in
the models to determine if specific failure flow paths have consis-
tently high failure rates. In such a scenario, the failure flow paths
with consistently high failure rates likely should be mitigated to
reduce the failure rate to a more acceptable level.
Predictive methods based on behavior models are sensitive to
modeling assumptions, while methods based on historical data are
sensitive to the particular characteristics and operating conditions
in which historical data is collected. While many components,
subsystems, and systems are similar to those previously con-
structed with respect to idealized behaviors and failure probabil-
ity, true novelty may cause such assumptions to be invalid.
However, PRA and other risk and failure methods are largely
underpinned on the concept that historical failure information is a
valid data source [104]. Another potential issue is that historical
data will only include information on what has happened. This
may cause specific failures that have not been observed but that
could occur to be missed in the analysis [105]. The authors advise
practitioners to carefully examine if a truly novel system, subsys-
tem, or component is being included in the models and if so, addi-
tional work to determine realistic behavior modes and failure
probabilities is warranted. Further, if sufficient historical data are
not available to assuage the practitioner that all likely failures
have been observed, additional work in identifying potential fail-
ures may be necessary.
While this article defines failure flow as a flow that either is
unexpectedly present or a flow that is unexpectedly absent [10],
there are a variety of other related definitions of failure flow and
the concept of a failure moving along flow paths through the func-
tional model of a system. For instance, failure flow can be defined
in the context of a failure moving between components or func-
tions [106].

Fig. 4 A snippet heat map of a model with poor performance. The fan module fails in many scenarios, indicated as a high failure rate in the flows related to cooling the inverter. In some cases, the failure propagates to the flows related to the inverter, which increases the failure rate of those flows.

Failure flow can also be defined as there being too
high or too low of a flow [12], as a transient non-nominal condi-
tion in a flow that causes a steady-state failure in a function
[14,16], as the reversal of a flow [107], or as a failure that jumps
between functions without following a nominal flow path [11]. A
more expansive definition of failure flow may be useful in expand-
ing the capabilities of the method presented in this article. How-
ever, including more types of failure flows may significantly
increase the complexity of the simulations and preparatory work
which may lessen the usefulness of the method as an ideation tool
during conceptual system design. Identifying how to strike a mid-
dle ground between a restrictive definition and an expansive defi-
nition of failure flow may be a fruitful area of future research.
Validating the results of the method presented in this article is
an important step that must be performed by a human. The
method has been designed with the expectation that a human is
included in the loop of iterating upon and evaluating new func-
tional models, in order to validate that those models are reasona-
ble in the context of the system being analyzed. While it may be
possible to fully automate the method with a sufficiently robust
model library and extensive graph grammars, achieving a suffi-
ciently high level of accuracy to support full automation may be
overly burdensome on the practitioner. This method is meant to
be used in conceptual system design when rapid analysis and
design iterations are desirable. Other potential methods of
partially validating the underlying method exist, although such validations have already been conducted in the literature. For
instance, the simulation may be validated but the simulation
approach (IBFM) is a separate work that has already been pub-
lished [15,68]. Validation of whether alternative functional mod-
els are useful in ideation is already widely accepted in the
literature as well [108]. While it would be possible to validate
whether alternative functional models generated from a library of
behavior simulations are useful for ideation, such an approach
requires a significant and robust model library that does not cur-
rently exist. Creation of a robust model library would likely only
be useful for a specific class of systems and would not guarantee
validity of the method for other systems. For these reasons, the
authors recommend that a human remain in the loop to provide a
“sanity check” on ideated functional models, to perform selection
of the best models to be further iterated, and to determine when to
terminate the iteration loop.
One open area of research on the method presented previously
is what happens in the case where two models are simulated
where one has no redundancies and the other has many parallel
redundancies. IBFM currently does not heavily penalize the cost
of adding new nodes. It may be desirable to adjust the penalty
function parameters for adding redundancies to a system model to
assist in the trade-off between the costs associated with adding
redundancy and the benefits of added redundancy to mitigate
potential failures. However, systems engineers must consider if
parallel flow redundancy provides true benefit in stopping a failure
flow before the flow leads to system failure, or if redundant flows
merely provide alternative pathways to system failure as in the
case of a drop in electrical voltage propagating through redundant
power feeds in a data center. In the data center’s case, had the
energy flows been truly independent and redundant, a failure flow
caused by a voltage drop on one of the power lines coming into
the facility likely would not have impacted the other redundant
power lines and electrical distribution systems in the facility.
An area of future work is to combine the concept of “cut sets”
derived from PRA and used in some FFIP-based methods with the
vulnerability of each type of flow, redundant subsystems, and
comparing different models with global metrics (e.g., ratio of
failed flows per model). Further refining the IBFM’s method of
optimization within the context of the method presented in this
paper is expected to be a useful area of further research.
Another potential area of future work centers on how probabil-
ities are calculated and assigned to the functional model. While
this research assigns informative priors (i.e., probability distribu-
tions that are grounded in empirical data of past failures on the
same or similar components or functions), it may be useful to look
at noninformative priors (i.e., probability distributions that are
uniform for all functions or components and have no historical
knowledge of component or function performance).
While many PRA methods are by definition concerned with
both the likelihood and consequence of failures, the approach in this paper addresses only likelihood.

Fig. 5 A snippet heat map of a model with medium performance. In this case, grammar rules have added an additional subgraph for exporting material, which led to a reduced rate of failure in the associated flows.

Because of the high level of
abstraction of functional models, and the necessity of using con-
textual information to assess the consequences of a failure, evalu-
ating failure severity is purposely left to the user. The challenge
of capturing context and failure consequences is deferred to future
work.
Conclusion
The framework presented in this paper represents a way to gen-
eratively explore a space of functional models, assess their vulner-
ability to failure, and present a designer with a variety of
alternative options. The approach is human-in-the-loop; the
designer must interpret the results according to the specific con-
text of the problem. Given a library of IBFM models and a graph
rewriting language for perturbing functional models, this approach
enables a designer to make quick risk-of-failure-informed decisions about functional architectures. These decisions are founded not only on experience or historical data, but on (1) qualitative
simulation of potential failure propagation and (2) a set of
solutions automatically generated to mitigate those failures. This
allows systems designers to make large system architectural decisions very early in the conceptual design process, where making decisions and significantly changing the design is relatively inexpensive in both cost and schedule time.
Acknowledgment
This research is partially supported by the Naval Postgraduate
School, the Singapore University of Technology and Design, the
Technical University of Denmark, and Oregon State University.
The authors have previously been or are currently employed by
the aforementioned institutions during the development of this
article.
References
[1] Walden, D. D., Roedler, G. J., Forsberg, K., Hamelin, R. D., and Shortell, T.
M., 2015, Systems Engineering Handbook: A Guide for System Life Cycle
Processes and Activities, Wiley, Hoboken, NJ.
[2] Ullman, D. G., 2015, The Mechanical Design Process, McGraw-Hill Science/
Engineering/Math, New York.
[3] Browning, T. R., and Eppinger, S. D., 2002, “Modeling Impacts of Process
Architecture on Cost and Schedule Risk in Product Development,” IEEE
Trans. Eng. Manage.,49(4), pp. 428–442.
[4] Browning, T. R., 1998, “Sources of Schedule Risk in Complex System Devel-
opment,” Eighth Annual International Symposium of INCOSE, Vancouver,
BC, Canada, July 26–30, pp. 129–142.
[5] Wang, J. X., and Roush, M. L., 2000, What Every Engineer Should Know
About Risk Engineering and Management, CRC Press, New York.
[6] Stamatis, D. H., 2003, Failure Mode and Effect Analysis: FMEA From Theory to Execution, ASQ Quality Press, Milwaukee, WI.
[7] Ericson, C., 2005, "Event Tree Analysis," Hazard Analysis Techniques for
System Safety, Wiley, Hoboken, NJ, pp. 223–234.
[8] Ericson, C. A., 2015, "Fault Tree Analysis," Hazard Analysis Techniques for System Safety, Wiley, Hoboken, NJ, pp. 183–221.
[9] Kurtoglu, T., Tumer, I. Y., and Jensen, D. C., 2010, “A Functional Failure
Reasoning Methodology for Evaluation of Conceptual System Architectures,”
Res. Eng. Des.,21(4), pp. 209–234.
[10] Jensen, D., Tumer, I. Y., and Kurtoglu, T., 2009, “Flow State Logic (FSL) for
Analysis of Failure Propagation in Early Design,” ASME Paper No.
DETC2009-87064.
[11] O’Halloran, B. M., Papakonstantinou, N., and Van Bossuyt, D. L., 2015,
“Modeling of Function Failure Propagation Across Uncoupled Systems,” Reli-
ability and Maintainability Symposium (RAMS), Palm Harbor, FL, Jan.
26–29, pp. 1–6.
[12] L’her, G., Van Bossuyt, D. L., and O’Halloran, B. M., 2017, “Prognostic Sys-
tems Representation in a Function-Based Bayesian Model During Engineering
Design,” Int. J. Prognostics Health Manage.,8(2), p. 23.
[13] O’Halloran, B. M., Papakonstantinou, N., and Van Bossuyt, D. L., 2016,
“Cable Routing Modeling in Early System Design to Prevent Cable Failure
Propagation Events,” Reliability and Maintainability Symposium (RAMS),
Tucson, AZ, Jan. 25–28, pp. 1–6.
[14] Dempere, J., Papakonstantinou, N., O’Halloran, B., and Van Bossuyt, D. L.,
2017, “Risk Modeling of Variable Probability External Initiating Events,”
Reliability and Maintainability Symposium (RAMS), Orlando, FL, Jan. 23–26,
pp. 1–9.
[15] McIntire, M. G., Keshavarzi, E., Tumer, I. Y., and Hoyle, C., 2016,
“Functional Models With Inherent Behavior: Towards a Framework for Safety
Analysis Early in the Design of Complex Systems,” ASME Paper No.
IMECE2016-67040.
[16] Dempere, J., Papakonstantinou, N., O’Halloran, B., and Van Bossuyt, D. L.,
2018, “Risk Modeling of Variable Probability External Initiating Events in a
Functional Modeling Paradigm,” J. Reliab., Maintainability, Supportability in
Syst. Eng.
[17] Otto, K., and Wood, K., 2001, Product Design: Techniques in Reverse Engi-
neering and New Product Design, Prentice Hall, Upper Saddle River, NJ.
[18] Stone, R. B., and Wood, K. L., 2000, “Development of a Functional Basis for
Design,” ASME J. Mech. Des.,122(4), pp. 359–370.
[19] Sage, A. P., and Rouse, W. B., 2009, Handbook of Systems Engineering and
Management, Wiley, Hoboken, NJ.
[20] Kapurch, S. J., 2010, NASA Systems Engineering Handbook, Diane Publish-
ing, Hanover, MD.
[21] Cornford, S. L., Feather, M. S., and Hicks, K. A., 2001, “DDP—A Tool for
Life-Cycle Risk Management,” Aerospace Conference, Big Sky, MT, Mar.
10–17, pp. 1–441.
[22] Stamatelatos, M., Dezfuli, H., Apostolakis, G., Everline, C., Guarro, S.,
Mathias, D., Mosleh, A., Paulos, T., Riha, D., Smith, C., Vesely, W., and
Youngblood, R., 2011, “Probabilistic Risk Assessment Procedures Guide for
NASA Managers and Practitioners,” National Aeronautics and Space Admin-
istration, Hanover, MD, Report No. NASA/SP-2011-3421.
[23] Otto, K. N., and Antonsson, E. K., 1991, “Trade-Off Strategies in Engineering
Design,” Res. Eng. Des.,3(2), pp. 87–103.
[24] Estefan, J. A., 2007, “Survey of Model-Based Systems Engineering (MBSE)
Methodologies,” INCOSE MBSE Focus Group,25(8), pp. 1–12.
[25] Wertz, J. R., Everett, D. F., and Puschell, J. J., 2011, Space Mission Engineer-
ing: The New SMAD, Microcosm Press, Hawthorn, CA.
[26] Sen, P., and Yang, J.-B., 2012, Multiple Criteria Decision Support in Engi-
neering Design, Springer Science and Business Media, London.
[27] Blanchard, B. S., and Fabrycky, W. J., 1998, Systems Engineering and Analy-
sis, Prentice Hall, Upper Saddle River, NJ.
[28] Haskins, C., Forsberg, K., Krueger, M., Walden, D., and Hamelin, D., 2006,
Systems Engineering Handbook, INCOSE, Hoboken, NJ.
[29] Friedenthal, S., Moore, A., and Steiner, R., 2014, A Practical Guide to SysML:
The Systems Modeling Language, Morgan Kaufmann, Waltham, MA.
[30] Erden, M. S., Komoto, H., van Beek, T. J., D’Amelio, V., Echavarria, E., and
Tomiyama, T., 2008, “A Review of Function Modeling: Approaches and
Applications,” AI Edam,22(2), pp. 147–169.
[31] Houkes, W., and Vermaas, P. E., 2010, Technical Functions: On the Use and
Design of Artefacts, Vol. 1, Springer Science and Business Media, Berlin.
[32] Chakrabarti, A., Shea, K., Stone, R., Cagan, J., Campbell, M., Hernandez, N.
V., and Wood, K. L., 2011, “Computer-Based Design Synthesis Research: An
Overview,” ASME J. Comput. Inf. Sci. Eng.,11(2), p. 021003.
[33] Umeda, Y., Takeda, H., Tomiyama, T., and Yoshikawa, H., 1990, “Function,
Behaviour, and Structure,” Applications of Artificial Intelligence in Engineer-
ing, J. S. Gero, ed., Springer, Berlin, pp. 177–193.
[34] Umeda, Y., Ishii, M., Yoshioka, M., Shimomura, Y., and Tomiyama, T., 1996,
“Supporting Conceptual Design Based on the Function-Behavior-State Mod-
eler,” AI Edam,10(4), pp. 275–288.
[35] Umeda, Y., Tomiyama, T., and Yoshikawa, H., 1995, “FBS Modeling: Model-
ing Scheme of Function for Conceptual Design,” Ninth International Work-
shop on Qualitative Reasoning, Amsterdam, The Netherlands, pp. 271–278.
[36] Umeda, Y., and Tomiyama, T., 1997, "Functional Reasoning in Design,"
IEEE Expert,12(2), pp. 42–48.
[37] Tomiyama, T., Umeda, Y., and Yoshikawa, H., 1993, “A CAD for Functional
Design,” CIRP Ann. Manuf. Technol.,42(1), pp. 143–146.
[38] Yoshioka, M., Umeda, Y., Takeda, H., Shimomura, Y., Nomaguchi, Y., and
Tomiyama, T., 2004, “Physical Concept Ontology for the Knowledge Inten-
sive Engineering Framework,” Adv. Eng. Inf.,18(2), pp. 95–113.
[39] Shimomura, Y., Yoshioka, M., Takeda, H., Umeda, Y., and Tomiyama, T.,
1998, “Representation of Design Object Based on the Functional Evolution
Process Model,” ASME J. Mech. Des.,120(2), pp. 221–229.
[40] Kitamura, Y., Kashiwase, M., Fuse, M., and Mizoguchi, R., 2004,
“Deployment of an Ontological Framework of Functional Design Knowl-
edge,” Adv. Eng. Inf.,18(2), pp. 115–127.
[41] Goel, A. K., Rugaber, S., and Vattam, S., 2009, “Structure, Behavior, and
Function of Complex Systems: The Structure, Behavior, and Function Model-
ing Language,” AI Edam,23(1), pp. 23–35.
[42] Goel, A. K., and Bhatta, S. R., 2004, “Use of Design Patterns in Analogy-
Based Design,” Adv. Eng. Inf.,18(2), pp. 85–94.
[43] Bhatta, S., Goel, A., and Prabhakar, S., 1994, “Innovation in Analogical
Design: A Model-Based Approach,” Artificial Intelligence in Design’94,
Springer, Berlin, pp. 57–74.
[44] Yaner, P. W., and Goel, A. K., 2006, “From Form to Function: From
SBF to DSSBF,” Design Computing and Cognition’06, Springer, Dordrecht,
The Netherlands, pp. 423–441.
[45] Bracewell, R. H., and Sharpe, J., 1996, "Functional Descriptions Used in Com-
puter Support for Qualitative Scheme Generation–‘Schemebuilder’,” AI
Edam,10(4), pp. 333–345.
[46] Welch, R. V., and Dixon, J. R., 1992, "Representing Function, Behavior and
Structure During Conceptual Design,” Fourth International Conference on
Design Theory and Methodology, Scottsdale, AZ, Sept. 13–16, pp. 11–18.
[47] Welch, R. V., and Dixon, J. R., 1994, "Guiding Conceptual Design Through
Behavioral Reasoning,” Res. Eng. Des.,6(3), pp. 169–188.
[48] Deng, Y.-M., Britton, G., and Tor, S. B., 2000, “Constraint-Based Functional
Design Verification for Conceptual Design,” Comput. Aided Des.,32(14), pp.
889–899.
[49] Deng, Y.-M., 2002, “Function and Behavior Representation in Conceptual
Mechanical Design,” AI Edam,16(5), pp. 343–362.
[50] Chakrabarti, A., and Bligh, T. P., 2001, “A Scheme for Functional Reasoning
in Conceptual Design,” Des. Stud.,22(6), pp. 493–517.
[51] Chakrabarti, A., Sarkar, P., Leelavathamma, B., and Nataraju, B., 2005, “A
Functional Representation for Aiding Biomimetic and Artificial Inspiration of
New Ideas,” AI Edam,19(2), pp. 113–132.
[52] Van Wie, M., Bryant, C. R., Bohm, M. R., McAdams, D. A., and Stone, R. B.,
2005, “A Model of Function-Based Representations,” AI Edam,19(2), pp.
89–111.
[53] Gero, J. S., 1990, “Design Prototypes: A Knowledge Representation Schema
for Design,” AI Mag.,11(4), p. 26.
[54] Gero, J. S., and Kannengiesser, U., 2004, “The Situated Function–
Behaviour–Structure Framework,” Des. Stud.,25(4), pp. 373–391.
[55] Dorst, K., and Vermaas, P. E., 2005, “John Gero’s Function-Behaviour-Structure
Model of Designing: A Critical Analysis,” Res. Eng. Des.,16(1–2), pp. 17–26.
[56] Snooke, N., and Price, C., 1998, “Hierarchical Functional Reasoning,” Knowl.
Based Syst.,11(5–6), pp. 301–309.
[57] Chandrasekaran, B., and Josephson, J. R., 2000, “Function in Device Repre-
sentation,” Eng. Comput.,16(3–4), pp. 162–177.
[58] Chandrasekaran, B., 2005, “Representing Function: Relating Functional Rep-
resentation and Functional Modeling Research Streams,” AI Edam,19(2), pp.
65–74.
[59] Keuneke, A. M., 1991, “Device Representation-the Significance of Functional
Knowledge,” IEEE Expert,6(2), pp. 22–25.
[60] Keuneke, A., and Allemang, D., 1989, “Exploring the No-Function-in-Struc-
ture Principle,” J. Exp. Theor. Artif. Intell.,1(1), pp. 79–89.
[61] Sen, C., Summers, J. D., and Mocko, G. M., 2011, “A Protocol to Formalise
Function Verbs to Support Conservation-Based Model Checking,” J. Eng.
Des.,22(11–12), pp. 765–788.
[62] Kurtoglu, T., and Campbell, M. I., 2009, “Automated Synthesis of Electrome-
chanical Design Configurations From Empirical Analysis of Function to Form
Mapping,” J. Eng. Des.,20(1), pp. 83–104.
[63] Sridharan, P., and Campbell, M. I., 2005, “A Study on the Grammatical Con-
struction of Function Structures,” AI Edam,19(3), pp. 139–160.
[64] Campbell, M. I., 2009, “A Graph Grammar Methodology for Generative Sys-
tems,” University of Texas, Austin, TX, Technical Report No. 2009-001.
[65] Helms, B., and Shea, K., 2012, “Computational Synthesis of Product Architec-
tures Based on Object-Oriented Graph Grammars,” ASME J. Mech. Des.,
134(2), p. 021008.
[66] Qian, L., and Gero, J. S., 1996, “Function–Behavior–Structure Paths and Their
Role in Analogy-Based Design,” AI Edam,10(4), pp. 289–312.
[67] Bohm, M. R., Stone, R. B., and Szykman, S., 2005, “Enhancing Virtual Prod-
uct Representations for Advanced Design Repository Systems,” ASME J.
Comput. Inf. Sci. Eng.,5(4), pp. 360–372.
[68] McIntire, M. G., 2016, “From Functional Modeling to Optimization: Risk and
Safety in the Design Process for Large-Scale Systems,” Ph.D. thesis, Oregon
State University, Corvallis, OR.
[69] Keshavarzi, E., McIntire, M., Goebel, K., Tumer, I. Y., and Hoyle, C., 2017,
“Resilient System Design Using Cost-Risk Analysis With Functional Models,”
ASME Paper No. DETC2017-67952.
[70] Slater, M. R., and Van Bossuyt, D. L., 2015, “Toward a Dedicated Failure
Flow Arrestor Function Methodology,” ASME Paper No. DETC2015-46270.
[71] Short, A. R., and Van Bossuyt, D. L., 2015, “Active Mission Success Estima-
tion Through PHM-Informed Probabilistic Modelling,” Annual Conference of
the Prognostics and Health Management Society, Coronado, CA, Oct. 18–24,
Publication Control No. 051.
[72] Short, A. R., Lai, A. D., and Van Bossuyt, D. L., 2018, “Conceptual Design of
Sacrificial Sub-Systems: Failure Flow Decision Functions,” Res. Eng. Des.,
29(1), pp. 23–38.
[73] Arlitt, R., Van Bossuyt, D. L., Stone, R. B., and Tumer, I. Y., 2017, “The
Function-Based Design for Sustainability Method,” ASME J. Mech. Des.,
139(4), p. 041102.
[74] Lynch, K., Ramsey, R., Ball, G., Schmit, M., and Collins, K., 2016,
“Ontology-Driven Metamodel Validation in Cyber-Physical Systems,” Infor-
mation Technology: New Generations, Springer, Cham, Switzerland, pp.
1255–1258.
[75] Sztipanovits, J., Bapty, T., Neema, S., Howard, L., and Jackson, E., 2014,
“OpenMETA: A Model- and Component-Based Design Tool Chain for Cyber-
Physical Systems,” Joint European Conferences on Theory and Practice of
Software, Grenoble, France, Apr. 5–13, pp. 235–248.
[76] Simko, G., Lindecker, D., Levendovszky, T., Neema, S., and Sztipanovits, J., 2013,
“Specification of Cyber-Physical Components With Formal Semantics–Integration
and Composition,” International Conference on Model Driven Engineering Lan-
guages and Systems, Miami, FL, Sept. 29–Oct. 4, pp. 471–487.
[77] Henley, E. J., and Kumamoto, H., 1981, Reliability Engineering and Risk
Assessment, Vol. 568, Prentice Hall, Englewood Cliffs, NJ.
[78] O’Halloran, B. M., Haley, B., Jensen, D. C., Arlitt, R., Tumer, I. Y., and
Stone, R. B., 2014, “The Early Implementation of Failure Modes Into Existing
Component Model Libraries,” Res. Eng. Des.,25(3), pp. 203–221.
[79] Department of Defense, 1949, “Procedures for Performing a Failure Mode,
Effects and Criticality Analysis,” Department of Defense, Washington, DC,
Standard.
[80] Tumer, I., Barrientos, F., and Mehr, A. F., 2005, “Towards Risk Based Design
(RBD) of Space Exploration Missions: A Review of RBD Practice and
Research Trends at NASA,” ASME Paper No. DETC2005-85100.
[81] Stone, R. B., Tumer, I. Y., and Van Wie, M., 2005, “The Function-Failure
Design Method,” ASME J. Mech. Des.,127(3), pp. 397–407.
[82] Lough, K. G., Stone, R., and Tumer, I. Y., 2009, “The Risk in Early Design
Method,” J. Eng. Des.,20(2), pp. 155–173.
[83] IAEA, 1993, “Defining Initiating Events for Purpose of Probabilistic Safety
Assessment,” International Atomic Energy Agency, Vienna, Austria, Report
No. IAEA-TECDOC-719.
[84] Zamanali, J., 1998, “Probabilistic-Risk-Assessment Applications in the
Nuclear-Power Industry,” IEEE Trans. Reliab.,47(3), pp. SP361–SP364.
[85] Gilks, W. R., Richardson, S., and Spiegelhalter, D., 1995, Markov Chain
Monte Carlo in Practice, CRC Press, London.
[86] Gilks, W. R., 2005, “Markov Chain Monte Carlo,” Encyclopedia of Biostatis-
tics, Wiley, Chichester, UK.
[87] Brooks, S., Gelman, A., Jones, G., and Meng, X.-L., 2011, Handbook of Mar-
kov Chain Monte Carlo, CRC Press, Boca Raton, FL.
[88] David, P., Idasiak, V., and Kratz, F., 2010, “Reliability Study of Complex Physical
Systems Using SysML,” Reliab. Eng. Syst. Saf.,95(4), pp. 431–450.
[89] Beck, J. L., and Au, S.-K., 2002, “Bayesian Updating of Structural Models
and Reliability Using Markov Chain Monte Carlo Simulation,” J. Eng. Mech.,
128(4), pp. 380–391.
[90] Koutra, D., Parikh, A., Ramdas, A., and Xiang, J., 2011, “Algorithms for
Graph Similarity and Subgraph Matching,” Ecology Inference Conference,
Carnegie Mellon University, Pittsburgh, PA, Technical Report.
[91] Mehrpouyan, H., Haley, B., Dong, A., Tumer, I. Y., and Hoyle, C., 2015,
“Resiliency Analysis for Complex Engineered System Design,” AI Edam,
29(1), pp. 93–108.
[92] Haley, B. M., Dong, A., and Tumer, I. Y., 2016, “A Comparison of Network-
Based Metrics of Behavioral Degradation in Complex Engineered Systems,”
ASME J. Mech. Des.,138(12), p. 121405.
[93] Fu, K., Chan, J., Cagan, J., Kotovsky, K., Schunn, C., and Wood, K., 2013,
“The Meaning of ‘Near’ and ‘Far’: The Impact of Structuring Design Data-
bases and the Effect of Distance of Analogy on Design Output,” ASME J.
Mech. Des.,135(2), p. 021007.
[94] Poppa, K., Arlitt, R., and Stone, R., 2013, “An Approach to Automated Con-
cept Generation Through Latent Semantic Indexing,” IIE Annual Conference,
Institute of Industrial and Systems Engineers (IISE), San Juan, Puerto Rico,
May 18–22, p. 151.
[95] Clarke, E. M., Grumberg, O., and Peled, D., 1999, Model Checking, MIT
Press, Cambridge, MA.
[96] Ester, M., Kriegel, H.-P., Sander, J., and Xu, X., 1996, “A Density-Based
Algorithm for Discovering Clusters in Large Spatial Databases With Noise,”
Second International Conference on Knowledge Discovery and Data Mining,
Portland, OR, Aug. 2–4, pp. 226–231.
[97] Arnold, C. R. B., Stone, R. B., and McAdams, D. A., 2008, “MEMIC: An Inter-
active Morphological Matrix Tool for Automated Concept Generation,” IIE
Annual Conference, Institute of Industrial and Systems Engineers (IISE), Sin-
gapore, Dec. 8–11, p. 1196.
[98] Poll, S., Patterson-Hine, A., Camisa, J., Garcia, D., Hall, D., Lee, C.,
Mengshoel, O. J., Neukom, C., Nishikawa, D., Ossenfort, J., Sweet, A., Yentus,
S., Roychoudhury, I., Daigle, M., Biswas, G., and Koutsoukos, X., 2007,
“Advanced Diagnostics and Prognostics Testbed,” 18th International Workshop
on Principles of Diagnosis (DX-07), Nashville, TN, May 29–31, pp. 178–185.
[99] Keller, W., and Modarres, M., 2005, “A Historical Overview of Probabilistic
Risk Assessment Development and Its Use in the Nuclear Power Industry: A
Tribute to the Late Professor Norman Carl Rasmussen,” Reliab. Eng. Syst.
Saf.,89(3), pp. 271–285.
[100] Bly, M., 2011, Deepwater Horizon Accident Investigation Report, Diane Pub-
lishing, Washington, DC.
[101] Ramp, I. J., and Van Bossuyt, D. L., 2014, “Toward an Automated
Model-Based Geometric Method of Representing Function Failure Propagation
Across Uncoupled Systems,” ASME Paper No. IMECE2014-36514.
[102] Dekker, S., Cilliers, P., and Hofmeyr, J.-H., 2011, “The Complexity of Failure:
Implications of Complexity Theory for Safety Investigations,” Saf. Sci.,49(6),
pp. 939–945.
[103] Scott, M. J., and Antonsson, E. K., 1999, “Arrow’s Theorem and Engineering
Design Decision Making,” Res. Eng. Des.,11(4), pp. 218–228.
[104] Kelly, D., and Smith, C., 2011, Bayesian Inference for Probabilistic Risk
Assessment: A Practitioner’s Guidebook, Springer Science and Business
Media, London.
[105] Van Bossuyt, D. L., O’Halloran, B. M., and Arlitt, R. M., 2018, “Irrational
System Behavior in a System of Systems,” IEEE 13th Annual Conference on
System of Systems Engineering (SoSE), Paris, France, June 19–22, pp.
343–349.
[106] Grunske, L., Kaiser, B., and Papadopoulos, Y., 2005, “Model-Driven Safety
Evaluation With State-Event-Based Component Failure Annotations,” Interna-
tional Symposium on Component-Based Software Engineering, St Louis, MO,
May 14–15, pp. 33–48.
[107] O’Halloran, B. M., Papakonstantinou, N., Giammarco, K., and Van Bossuyt,
D. L., 2017, “A Graph Theory Approach to Functional Failure Propagation in
Early Complex Cyber-Physical Systems (CCPSs),” INCOSE International
Symposium, Adelaide, Australia, July 15–20, pp. 1734–1748.
[108] Shah, J. J., Smith, S. M., and Vargas-Hernandez, N., 2003, “Metrics for Meas-
uring Ideation Effectiveness,” Des. Stud.,24(2), pp. 111–134.