Ryan M. Arlitt
SUTD-MIT International Design Centre,
Singapore University of Technology and Design,
Douglas L. Van Bossuyt
Department of Systems Engineering,
Naval Postgraduate School,
Monterey, CA 93940
A Generative Human-in-the-Loop
Approach for Conceptual Design
Exploration Using Flow Failure
Frequency in Functional Models
A challenge systems engineers and designers face when applying system failure risk
assessment methods such as probabilistic risk assessment (PRA) during conceptual
design is their reliance on historical data and behavioral models. This paper presents a
framework for exploring a space of functional models using graph rewriting rules and a
qualitative failure simulation framework that presents information in an intuitive manner
for human-in-the-loop decision-making and human-guided design. An example is pre-
sented wherein a functional model of an electrical power system testbed is iteratively per-
turbed to generate alternatives. The alternative functional models suggest different
approaches to mitigating an emergent system failure vulnerability in the electrical power
system’s heat extraction capability. A preferred functional model conﬁguration that has a
desirable failure ﬂow distribution can then be identiﬁed. The method presented here helps
systems designers to better understand where failures propagate through systems and
guides modiﬁcation of systems functional models to adjust the way in which systems fail
to have more desirable characteristics. [DOI: 10.1115/1.4042913]
The design, manufacture, and deployment of complex systems
require extensive investment of personnel, resources, time, and
money to produce systems that meet requirements [1,2]. Schedule
and cost overruns are common on large systems such as aircraft,
spacecraft, power plants, ships, and other systems. A significant
percentage of schedule and cost overruns, and of reduced system
capabilities as compared to original requirement documents,
can be traced back to architectural decisions made during the
conceptual phase of system design. Architectural decisions that are
made with incorrect or missing information, or that are made
with high degrees of uncertainty in the data, can lead to incorrect
decisions that in turn cause cost increases and schedule slips. As a
result, it is important that architectural decisions are made with
good, complete information to increase the likelihood of systems
being delivered on time, on budget, and meeting requirements.
Of particular interest to this research is how potential system
failures are assessed and acted upon during the conceptual phase
of system design. Common techniques of identifying failure risks
and then mitigating them, such as failure mode and effects analysis
and probabilistic risk assessment (PRA) [7,8], can miss emergent
system behaviors and, while some information is provided to
designers to aid in decision-making, little guidance is given on
speciﬁc ﬂow impacts due to failure events. Extensive work has
been done to understand failure paths from a component and/or
functional basis [9–16] but comparatively little effort has been
expended in looking at ﬂows of material, energy, and data through
systems, and how their disruption or failure can impact overall system performance.
Specific Contributions. This paper contributes a method to
identify functional models that have a desirable distribution of
ﬂow failure events across a large space of failure scenarios. The
method identiﬁes ﬂows that are most often associated with fail-
ure events and automatically explores a variety of potential alter-
native functional models to identify models that have lower ﬂow
failure concentrations. Visualizations of these alternatives are
presented to the user, allowing quick iteration of functional
architectures in the context of limited embodiment information.
This contribution arises from the combination of a generative
approach for building functional models and an evaluation
approach that qualitatively simulates the failure performance of
each functional model.
This work contributes a concept exploration method grounded
in the historical behaviors and failures of similar systems. This
section describes relevant past work upon which this method is
built, in areas including conceptual design, risk and reliability
analysis, and computational support for these activities.
Conceptual Design. Within the conceptual phase of design,
there are several distinct steps including (1) ideation, (2) early
system architecture studies, and (3) system modeling and trade
studies. During the last step of conceptual design, high-level and
black box models produced in the previous step are refined into
subsystem, functional, and component models. A variety of
modeling techniques and methods are commonly used to help
Present address: Department of Mechanical Engineering, Technical University
of Denmark, Lyngby DK-2800 Kgs, Denmark.
An earlier version of this article was presented at the ASME International
Design Engineering Technical Conferences and Computers and Information in
Engineering Conference, Quebec City, Quebec, Canada, Aug. 26–29, 2018, as Paper
Contributed by the Computers and Information Division of ASME for publication
in the JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING. Manuscript
received August 26, 2018; ﬁnal manuscript received February 14, 2019; published
online March 18, 2019. Assoc. Editor: Jitesh H. Panchal.
This work is in part a work of the U.S. Government. ASME disclaims all interest
in the U.S. Government’s contributions.
Journal of Computing and Information Science in Engineering SEPTEMBER 2019, Vol. 19 / 031001-1
C2019 by ASME
make informed decisions based on trade studies such as functional
models; risk, reliability, failure, availability, and robustness mod-
els; and other related modeling and assessment methods
[6,19–25]. These design decisions directly impact later subsystem
and component design, and if made incorrectly due to a lack of
information or a misunderstanding of the fundamental nature of
the system’s design, signiﬁcant rework and redesign costs can be
incurred [26,27]. Timely information on which to base design
decisions is critical for the delivery of an on-time and on-budget
system that performs as intended [20,25].
Functional and Flow Modeling. A number of modeling para-
digms exist to model systems during conceptual design [28,29].
Of particular interest to this research, functional and ﬂow methods
of modeling systems during the conceptual phase of design can be
used to help free engineers from component considerations and
allow more creativity in finding new system design solutions.
While there are many different functional and flow taxonomies
and grammars [30–60], this research uses the functional
basis for engineering design taxonomy (herein referred to as
FB) to represent functions and ﬂows within systems. The FB tax-
onomy abstracts functions and ﬂows from the physical compo-
nents and transported material, energy, or data that they represent.
Of particular value is the potential for simulating abstract models
constructed using FB, which is possible so long as that model has
(1) topological consistency and (2) conservation of material and energy.
The functional basis has several attributes that make it attrac-
tive in this context. First, its abstract mixture of human-
interpretable and physics-based language makes it suitable for
both simulation and feedback. Second, FB has large and growing
popularity in the design methodology community, as evidenced
by over 1200 citations of the original FB article. This is
important because (1) there is an existing body of work upon
which to build and (2) a critical mass of adoption is useful
when model libraries are involved. However, many other functional
and flow taxonomies and grammars may also be used.
Engineers and designers working on different projects and differ-
ent systems within different organizations may ﬁnd beneﬁts and
drawbacks to speciﬁc functional and ﬂow taxonomies and gram-
mars. Reviewing potential beneﬁts and drawbacks of functional
and flow taxonomies and grammars is beyond the scope of this paper.
Conceptual to Component Design. Grammar rules have been
developed to aid designers and automated design tools in identify-
ing conceptual design conﬁgurations that are likely to be realiz-
able in physical component design [62–64]. Helms and Shea
prescribe a general approach for synthesis of product architectures
using the function behavior structure framework. This model
supports synthesis of component architectures from a functional
model, and makes explicit the need for simulation and evaluation
to close the synthesis loop. Similarly, Kurtoglu and Campbell
developed grammar rules to convert functional models into
component-level configuration flow graphs. More specific to
the domain of functional architectures, Sridharan and Campbell
generated 69 grammar rules from 30 products located in the
design repository to create a framework for generating functional models.
It should be noted that there is signiﬁcant heterogeneity of mod-
eling languages in which grammars are implemented. For this
research, the selection of the FB modeling taxonomy was intentional.
Not only is FB a functional description with high generality,
but several computational tools exist for evaluating FB
models, which is required to close the computational design
synthesis loop. The recent development of several simulation
approaches for evaluating failures in functional models [10,11,68]
enables a new generative design loop for examining the reliability
of functional models.
Decision Support Tools. Evaluation methods and decision
support tools have been developed to aid systems designers to
make conceptual architectural decisions. These methods and
tools can be broadly categorized as: simulation-function,
simulation-component, expert knowledge and experience, and
historical function/component. A high-level review of tools
useful for failure analysis and related analysis techniques that
fall within the four categories listed previously is provided below.
Simulation-Function. Within the simulation-function cate-
gory, the function failure identiﬁcation and propagation (FFIP)
method and related ﬂow state logic method identify potential
failure ﬂow pathways through a functional model [9,10]. In the
context of this work, failure ﬂow is deﬁned as a ﬂow that either
is unexpectedly present or a ﬂow that is unexpectedly absent.
The inherent behavioral in functional models (IBFM) framework
extends FFIP to include the ability to generate multiple func-
tional models to drive toward a solution that can balance the cost
and risk of a system, and a pseudo time-step [16,68,69]. A num-
ber of other risk and failure analysis tools have been developed
from FFIP including the uncoupled failure ﬂow state reasoner
[11,70], a method of building prognostic systems in response to
failure modeling, and other related methods and tools
[13,14,71–73]. Several tools for ontology-driven metamodeling
and early conceptual design down-selection were produced as
part of the Defense Advanced Research Program Agency
(DARPA) adaptive vehicle make project [74–76]. While these
methods are useful for identifying and understanding failure
sources within a system, they generally lack the ability to iden-
tify speciﬁc ﬂow paths that are more often implicated in potential
system failure events.
Simulation-Component. Several simulation-component meth-
ods exist, including the reliability block diagram method
widely used in industry and a method developed by O'Halloran
et al. that simulates component performance at varying levels of
fidelity based on the fidelity of available models. While these types of
methods are useful for understanding reliability of a system and
O’Halloran’s method is useful for simulating expected system
performance, both rely upon historical data. This limits the ability
of this class of method to identify emergent system behaviors.
Further, little guidance is provided by the results of these methods
to identify specific flows within the system that are at higher risk of failure.
Expert Knowledge. Expert knowledge and experience play a
large role in several methods that are important to industry. Failure
mode and effects analysis and the related failure modes,
effects, and criticality analysis use expert knowledge and
system experience to identify and understand potential failure
scenarios within a proposed system. Expert elicitation is often used in
producing fever charts and other graphical representations of risk
within a system. Expert knowledge and experience methods
in general do not adequately capture potential emergent system
behaviors—especially complex failure events.
Historical Function/Component. Several methods have
examined the link between historical performance of functions
and components, and their expected behavior in new systems. The
function failure design method provides a matrix-based
approach to linking a function to potential component solution
failure modes. The risk in early design method connects
historical risk information to ongoing design efforts and provides a
fever chart view for ease of understanding by novice risk analysts.
While these methods do well at identifying historical failure infor-
mation on a functional level, they do not adequately uncover
emergent system behaviors.
Risk and Failure Analysis. Many other methods of failure and
risk analysis exist that can help system designers to make risk and
failure-informed architectural decisions during conceptual design.
PRA combines fault tree analysis and event tree analysis [7,8]
with an analysis of potential initiating events that can lead to
failure. The nuclear industry heavily uses PRA to identify
potential emergent system behaviors and ensure the safety of nuclear
power plants. A popular method of identifying potential
failures uses Markov chains built to model state transitions in
a system where probabilities of state transitions are known or can
be assumed. The Markov chains are then randomly walked using
Monte Carlo sampling to determine the probability of being in
each state [85–88]. The Markov chain Monte Carlo sampling
approach is especially applicable in the PRA context (e.g.,
Ref. ) because of its relative efﬁciency of approximating
Bayesian posteriors. The method presented in this paper differs in
that the failure simulation is deterministic for a large set of differ-
ent state spaces. Repetition of this simulation on a single functional
model occurs only by sampling from different combinations of
initiating failure events.
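The Markov-chain random walk described above can be illustrated with a short sketch. The three states and transition probabilities below are invented for demonstration and are not drawn from the cited works; the walk estimates the long-run fraction of time the system spends in each state.

```python
import random

# Illustrative three-state failure chain (probabilities are invented):
# 0 = nominal, 1 = degraded, 2 = failed (with repair back to nominal).
TRANSITIONS = {
    0: [(0, 0.90), (1, 0.08), (2, 0.02)],
    1: [(0, 0.30), (1, 0.60), (2, 0.10)],
    2: [(0, 0.50), (2, 0.50)],
}

def step(state, rng):
    """Sample the next state from the current state's distribution."""
    r, cum = rng.random(), 0.0
    for nxt, p in TRANSITIONS[state]:
        cum += p
        if r <= cum:
            return nxt
    return state  # guard against floating-point round-off

def occupancy(n_steps=10_000, seed=0):
    """Estimate the fraction of time spent in each state by one walk."""
    rng = random.Random(seed)
    counts = [0, 0, 0]
    state = 0
    for _ in range(n_steps):
        state = step(state, rng)
        counts[state] += 1
    return [c / n_steps for c in counts]

probs = occupancy()
```

Under these transition probabilities the stationary distribution is roughly (0.78, 0.16, 0.06), so a long walk spends most of its time in the nominal state; PRA applications layer initiating-event analysis on top of exactly this kind of estimate.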
Relevant Simulation Details. Given that the method presented
in this paper is intended to facilitate exploration over a population
of graphs, some heuristics are necessary to combat combinatorial
explosion. Subsampling a representative space achieves this goal,
but requires a method to calculate graph similarity prior to evalua-
tion. Graph similarity algorithms can be classiﬁed as edit distance,
feature extraction, and iterative. Feature extraction is selected
here due to simplicity of implementation, speed of evaluation, and
existing evidence for a correlation between graph-level features
(e.g., diameter and node degree) and system-level reliability (e.g.,
Refs.  and ). Additionally, the bag-of-functions feature
approach has been successfully used to measure similarity
between functional models [93,94].
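The bag-of-functions feature comparison can be sketched as follows: each model is reduced to a multiset of its function labels, and two models are compared with cosine similarity (cosine distance is simply one minus this value). The function labels are illustrative FB-style names, not taken from the case study.

```python
from collections import Counter
from math import sqrt

def bag_of_functions(model_functions):
    """Multiset of function labels, ignoring graph topology."""
    return Counter(model_functions)

def cosine_similarity(a, b):
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

seed = bag_of_functions(
    ["store_EE", "convert_EE_to_ME", "export_ME", "protect_EE"])
# a perturbed child with one duplicated conversion function
child = bag_of_functions(
    ["store_EE", "convert_EE_to_ME", "convert_EE_to_ME",
     "export_ME", "protect_EE"])

sim = cosine_similarity(seed, child)
```

Because the feature vectors ignore topology, this measure is cheap enough to compute for a full pairwise distance matrix over the population, which is what the clustering step below requires.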
In the area of software debugging with model checking, one
common strategy is to validate an abstraction of values, states,
and transitions. This type of model checking is in many ways
analogous to the approach presented in this paper. While both exe-
cute abstractions of the system to search for issues, the method
presented in this paper combines a formalism for abstracting and
simulating complex systems with a means to search the design space.
In summary, the conceptual phase of the systems engineering
design process provides systems designers with an opportunity to
make signiﬁcant architectural decisions that can drastically impact
the outcome of the design process and the performance of the sys-
tem. A variety of tools and methods are available to help support
engineers in making informed decisions during the conceptual
phase. Many such tools and methods rely on functional modeling
techniques and a number of methods exist to analyze failure
within this context. However, none of the existing methods sur-
veyed is able to directly assess failures from a ﬂow perspective
over a space of related functional models and use that information
to help make architectural decisions.
The method presented below is speciﬁcally intended for use
during the conceptual phase of design when architectural deci-
sions are being made and the design has not been ﬁnalized. The
method’s inputs include a single functional model from the user, a
library of IBFM [15,68,69] simulation components, and (option-
ally) a speciﬁcation of each IBFM state’s probability to serve as
an initiating failure event. The method’s output is a visualization
of several alternative functional models and the vulnerability of
each flow therein to failures. Figure 1 graphically depicts the
method. The ﬁrst three steps are preparatory steps to develop a
functional model of the system, develop the IBFM simulation,
and specify probabilities of failures. The next ﬁve steps are the
core of the methodology where new functional models are
automatically generated, validated, simulated, evaluated, and then
the process is iterated to create additional new child populations
of functional models. The ﬁnal step in the methodology is used to
select the most desirable functional model to proceed forward
with in the design process.
Develop Functional Model. The ﬁrst step is for the designer to
create a functional model for the system of interest. This model
takes the form of a directed graph where nodes are typed accord-
ing to the functions they represent and edges are typed according
to the flows they represent. This model will be used as a
seed to begin the process of analyzing failure ﬂows.
Develop Inherent Behavioral in Functional Models Simula-
tion. Given a seed model, an IBFM simulation is prepared.
This simulation must capture the designer’s abstract knowledge
about the system. This includes the following:
(1) Functions, including the operational modes and mode tran-
sition conditions applicable to each.
(3) Modes and the flow behaviors associated with each.
Fig. 1 The method presented here includes nine distinct
steps, as shown in this graphic
(4) Conditions and the flow state behavior associated with triggering each.
Given these elements, IBFM enables qualitative simulation of
the functional model. More details about IBFM can be found in Refs. [15,68,69].
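The elements listed above can be captured in a minimal data structure. The sketch below is hypothetical and is not the IBFM framework's actual API; the class and attribute names (`Mode`, `Condition`, `Function`, `flow_effects`) are invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical container for the abstract knowledge an IBFM-style
# simulation needs: functions, their modes, each mode's flow
# behaviors, and the conditions that trigger mode transitions.
@dataclass
class Mode:
    name: str
    # effect on each attached flow while in this mode, e.g. "zero"
    flow_effects: dict = field(default_factory=dict)

@dataclass
class Condition:
    name: str
    trigger: str        # flow state that fires the condition
    target_mode: str    # mode entered when the condition fires

@dataclass
class Function:
    name: str
    modes: dict = field(default_factory=dict)
    conditions: list = field(default_factory=list)
    current: str = "nominal"

# An illustrative inverter-like function with one failed mode.
invert = Function("convert_EE_to_EE")
invert.modes["nominal"] = Mode("nominal", {"EE_out": "nominal"})
invert.modes["failed_off"] = Mode("failed_off", {"EE_out": "zero"})
invert.conditions.append(
    Condition("loses supply", trigger="EE_in is zero",
              target_mode="failed_off"))
```

A simulator would repeatedly evaluate each function's conditions against the current flow states and apply the resulting mode's flow effects until the model reaches steady-state.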
Specify Probabilities. The method presented in this paper can
be performed with either internal initiating events caused by failed
modes of functions within the system or by external events that
occur outside of the system boundary and propagate into the sys-
tem as failure ﬂows. The case study below uses internal initiating
events as a demonstration.
For internal initiating events, each failed mode of each function
is treated as equally likely to occur as the default approach.
However, if a probability of occurrence is known for an internal
initiating event, then that probability is used instead. With exter-
nal initiating events, the authors recommend using only probabilities
that are grounded in reality. When not using
probabilities speciﬁc to a function’s failed state, the frequency of
occurrence of failure ﬂows associated with each ﬂow can be ascer-
tained on a normalized basis. With speciﬁc probabilities available,
these frequencies can be weighted according to their expected probabilities of occurrence.
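The probability weighting described above can be sketched as follows. The scenarios, probabilities, and flow names below are invented for illustration.

```python
# Sketch: weight each flow's failure count by the probability of the
# initiating event that caused it, then normalize so frequencies are
# comparable across flows. All numbers here are illustrative.
# scenario -> (probability, set of flows that failed in that scenario)
scenarios = {
    "pump_fails_off": (0.25, {"EE_to_pump", "ME_from_pump"}),
    "inverter_fails": (0.50, {"EE_to_pump", "EE_to_fan", "EE_to_light"}),
    "breaker_trips":  (0.25, {"EE_to_fan"}),
}

weighted = {}
for prob, failed_flows in scenarios.values():
    for flow in failed_flows:
        # accumulate probability mass instead of a raw failure count
        weighted[flow] = weighted.get(flow, 0.0) + prob

total = sum(weighted.values())
normalized = {flow: w / total for flow, w in weighted.items()}
```

With equal probabilities on every initiating event, this reduces to the normalized frequency-of-occurrence described in the text.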
Automatically Generate Similar Functional Models. Using
the designer's functional model as a seed, the method automatically generates
locally similar functional models according to a limited set of
graph grammar rules (e.g., Table 1). These grammars perturb the
model by removing functions and by re-inserting functions that
are already present—new functionality is not added. The result is
a means to generate different functional architectures while pre-
serving the gist of the design intent. These grammars must be
capable of both adding and removing elements, and must conform
to topological consistency and conservation rules for FB.
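A minimal, stdlib-only sketch of one such rewrite, the "add series" rule from Table 1, is shown below. The graph encoding (edge tuples of source, destination, and flow type) and the function names are illustrative, not the implementation used in the paper.

```python
import random

# Naive "add series" rewrite: pick one of a function's output flows
# and route it through an inserted duplicate of that function,
# connected by the function's own flow type.
def add_series(edges, function, rng):
    """Return a new edge list with `function` duplicated in series."""
    out_edges = [e for e in edges if e[0] == function]
    if not out_edges:
        return edges
    src, dst, flow = rng.choice(out_edges)
    new_node = f"{function}_copy"
    # remove the chosen edge (duplicates, if any, also go in this sketch)
    new_edges = [e for e in edges if e != (src, dst, flow)]
    # route the flow through the inserted copy with the same flow type
    new_edges.append((src, new_node, flow))
    new_edges.append((new_node, dst, flow))
    return new_edges

seed_model = [
    ("store_EE", "convert_EE", "EE"),
    ("convert_EE", "export_ME", "EE"),
]
child = add_series(seed_model, "convert_EE", random.Random(0))
```

Because the duplicate carries the same input and output flow type, this particular rule preserves topological consistency by construction; the other rules in Table 1 need the validation step described next.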
Validate Automatically Generated Functional Models. For a
functional model to be simulatable, two main requirements must
be met: (1) conservation of mass and energy, and (2) each func-
tion’s inputs and outputs must be consistent with established
semantics. This can be done at generation time through careful
construction of grammars, or naively by iteratively discarding
noncompliant models and then generating replacements. Active
model checking requires software that captures the two
requirements—like that developed in Ref. .
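The naive discard-and-replace strategy can be sketched as follows. The allowed input/output flow signatures below are invented stand-ins for the established FB semantics; a real checker would be far more complete.

```python
# Naive validation sketch: reject models whose functions carry flow
# types outside an allowed signature table, or that let flow enter a
# function but never leave it. Signatures here are illustrative.
ALLOWED = {
    "store_EE":   ({"EE"}, {"EE"}),
    "convert_EE": ({"EE"}, {"EE", "ME"}),
    "export_ME":  ({"ME"}, set()),       # sink: no outputs required
}

def is_valid(edges):
    ins, outs = {}, {}
    for src, dst, flow in edges:
        outs.setdefault(src, set()).add(flow)
        ins.setdefault(dst, set()).add(flow)
    for fn, (in_ok, out_ok) in ALLOWED.items():
        if fn in ins and not ins[fn] <= in_ok:
            return False                 # unexpected input flow type
        if fn in outs and not outs[fn] <= out_ok:
            return False                 # unexpected output flow type
        if out_ok and fn in ins and fn not in outs:
            return False                 # flow enters but never leaves
    return True

good = [("store_EE", "convert_EE", "EE"), ("convert_EE", "export_ME", "ME")]
bad = [("store_EE", "convert_EE", "ME")]  # a battery cannot store ME
```

Models failing the check would be discarded and regenerated until the population is filled with simulatable candidates.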
Run Simulation on Each Functional Model. Next, each
model in the population is simulated using IBFM. By default, an
IBFM experiment runs simulations using every possible failure
state as an initiating event. Scenarios are then run for all paired
combinations of simultaneous initiating events, and the number of
simultaneous events increases until a prescribed cutoff. The
failure rate of each flow in the model is captured as described in Algorithm 1.
Depending on the available computing power, this simulation
can be repeated with valid combinations of multiple initiating
events. While it is recommended here to characterize each model
according to its most vulnerable edge, max(F), other performance
measures can be used (e.g., the mean and variance of the edge
failure frequency distribution).
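The candidate performance measures named above are straightforward to compute from a model's edge failure-frequency vector F; the counts below are illustrative.

```python
from statistics import mean, pvariance

# One model's failure counts per flow edge (illustrative numbers).
F = [50, 12, 7, 3, 3, 0, 0]

worst_edge = max(F)        # the measure used in this paper
avg = mean(F)              # alternative: mean edge failure frequency
spread = pvariance(F)      # alternative: variance of the distribution
```

A low max(F) favors models that spread failures thinly across edges, whereas mean and variance together can distinguish a uniformly mediocre model from one with a single concentrated weakness.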
Iterate Best Performing Models. Iteration consists of two
steps: (1) selecting a parent population and (2) generating a child population.
A diverse parent population of models is sampled from this local
space using roulette wheel selection (with replacement) and a performance
measure that linearly combines resiliency R (defined as
the ability of the system to continue to function in spite of failure
events occurring) and uniqueness U, as proposed in Eq. (1). A
model's resiliency R is normalized to the maximum resiliency in
the population, R_max. A model's uniqueness U can be quantified
by applying a clustering algorithm such as density-based spatial
clustering of applications with noise and then taking the
inverse of the number of total models in that model's cluster. A full
pairwise distance matrix between models is needed to support this
clustering and can be generated from the graph feature representation
using cosine distance. A weighting factor k between 0 and 1
captures preference for resiliency versus uniqueness:

fitness = k (R / R_max) + (1 - k) U    (1)
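Parent selection can be sketched with a linear fitness of the form k(R/R_max) + (1 - k)U; this exact form is an assumption consistent with the surrounding text. The population, resiliency scores, and uniqueness values (one over cluster size, as would come from a clustering algorithm such as DBSCAN) are invented.

```python
import random

# Roulette-wheel parent selection (with replacement) over a small
# illustrative population. Higher R means more resilient here.
def fitness(R, U, R_max, k=0.7):
    """Assumed linear blend of normalized resiliency and uniqueness."""
    return k * (R / R_max) + (1 - k) * U

def roulette_select(population, n_parents, k=0.7, seed=0):
    rng = random.Random(seed)
    R_max = max(R for _, R, _ in population)
    weights = [fitness(R, U, R_max, k) for _, R, U in population]
    names = [name for name, _, _ in population]
    # sampling probability proportional to fitness, with replacement
    return rng.choices(names, weights=weights, k=n_parents)

# (model name, resiliency R, uniqueness U = 1 / cluster size)
population = [("A", 0.9, 1 / 4), ("B", 0.5, 1.0),
              ("C", 0.8, 1 / 4), ("D", 0.2, 1 / 2)]
parents = roulette_select(population, n_parents=4)
```

Note how model "B", though middling in resiliency, draws substantial selection weight from being the sole member of its cluster, which is exactly the diversity-preserving behavior the method wants.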
Next, a child population is generated by applying one randomly
selected grammar rule to each parent in a randomly selected loca-
tion. If a branching factor greater than 1 is applied, the process
closely resembles breadth ﬁrst tree search. If so, pruning the child
Table 1 Naive generative grammar language

Add parallel path
  Recognize: any two edges on the graph with a valid connecting path.
  Apply: add a parallel copy of the shortest path between those edges.

Add parallel subgraph
  Recognize: any two edges on the graph with a valid connecting path.
  Apply: perform "Add Parallel Path" for all paths between those edges; propagate the copy forward and backward to satisfy conservation of mass and energy.

Add series
  Recognize: any function.
  Apply: insert a copy of the function in series, connected by the function's own flow type.

Remove node
  Recognize: any function.
  Apply: remove that function and its connected flows; repeat on nodes that fail a validation check until the model is valid.
Algorithm 1 Functional model population simulation process

1: for each model M in the population do
2:   Initialize a zero vector of failure counts F to capture the failure frequencies of all flow edges in the model
3:   Generate a list of scenarios S containing initiating events and their probabilities
4:   for each specified scenario S do
5:     Simulate M under conditions of S until the model reaches steady-state
6:     for each failed edge in the resulting M do
7:       Increment its total failure count in F, normalized by the probability of the initiating event
8:     end for
9:   end for
10:  Take max(F) to describe this model's resiliency
11: end for
population back to the initial population size after simulation miti-
gates combinatorial explosion of IBFM simulations. This process
is visualized in Fig. 2.
Stop Iteration After Performance Metrics Have Been Met.
The steps of generating models, simulating their performance, and
iterating are repeated until stopping criteria are met.
Two parameters capture the stopping criteria: the first dictates
an acceptable level of uniqueness U specified by the user. The
second dictates a performance threshold (in this case, model resiliency
R is quantified by the model's most vulnerable edge). When
there exists a set of N models (where N is user-specified) in the
most recent generation where all N models exceed the performance
threshold and the uniqueness threshold, the process stops.
Given that the population size is held constant, it is feasible to
quantify the uniqueness of each model via clustering on the full
pairwise comparison matrix using vector space similarity meas-
ures (e.g., cosine similarity) and the child’s lineage. An alternative
approach for large populations of constant size halts the search
when the explained variance ratio of the principal component
analysis of the data set's feature representation dips below a given threshold.
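The PCA-based stopping test can be sketched without external libraries by power-iterating the covariance matrix of the population's feature vectors: while the top component's explained-variance ratio stays high, variation remains concentrated along one direction and the search continues. The feature data below are synthetic.

```python
import random

def covariance(X):
    """Population covariance matrix of row-vector samples X."""
    n, d = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[0.0] * d for _ in range(d)]
    for row in X:
        for i in range(d):
            for j in range(d):
                C[i][j] += (row[i] - mu[i]) * (row[j] - mu[j]) / n
    return C

def top_explained_ratio(X, iters=200):
    """Explained-variance ratio of the first principal component."""
    C = covariance(X)
    d = len(C)
    v = [1.0] * d
    for _ in range(iters):               # power iteration
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    top = sum(v[i] * sum(C[i][j] * v[j] for j in range(d))
              for i in range(d))         # Rayleigh quotient
    trace = sum(C[i][i] for i in range(d))
    return top / trace

# A population whose features lie nearly on a line: PC1 explains
# almost all variance, so the search should keep going.
line = [[t, 2 * t + 0.01 * random.Random(t).random()] for t in range(10)]
ratio = top_explained_ratio(line)
should_stop = ratio < 0.5                # threshold is illustrative
```

In practice a library routine would replace the hand-rolled power iteration; the point is only that the stopping signal reduces to one scalar per generation.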
Assess Final Population of Functional Models. After the
stopping criteria are met, a subset of models is selected from the
full history of generated models. These models are selected to
possess (1) high or low resiliency as desired and (2) high unique-
ness with respect to each other. Because the relationship between
a functional model and its performance in simulation is ambigu-
ous, it is valuable to allow the user to decide between a variety of
local optima. The purpose of using uniqueness to draw a ﬁnal pop-
ulation of models is to provide samples from different localities in
the ﬁnal results, as these are more likely to solve the problem in
meaningfully different ways. This enables a user to choose the
best architecture strategy to ﬁt the design context.
When visualizing the resulting models, the rates at which ﬂows
failed are indicated by both thickness and color of the edges. The
functionality to show both good and bad examples is motivated by
conceptual design exploration tools like morphological evaluation
machine and interaction conceptualizer, which provide creative
stimulus by showing both highly common and highly uncommon
component configurations to match a given functional
model. Given this stimulus, the designer can assess which topol-
ogy to pursue and iterate upon, or draw inspiration to make tweaks
to the functional model.
Any number of methods can be used for determining unique-
ness U, though all but the most naive will rely on some means of
clustering the ﬁnal population. This may include straightforward
clustering (e.g., k-means), projection of the bag-of-features
representation into lower dimensions (e.g., principal component
analysis), or sampling from far-apart sections of the search tree
according to each model’s lineage.
This section contains an illustrative case study based on a real-
world system to demonstrate the workings of the method pre-
sented previously. It should be noted that the example, while
based on a real, physically embodied system that has signiﬁcant
heritage and pedigree as a research platform and is relevant to real
hardware ﬂown on the space shuttle and future American crewed
spacecraft, has been intentionally fictionalized. Specifically, the
functional model has been simpliﬁed and the failure results, while
representative, are not exhaustive. No claim to accuracy is made.
The results of the case study are illustrative of the method’s
capabilities but cannot be taken as evidence of how to design the
speciﬁc system presented below without further expansion, reﬁne-
ment, and veriﬁcation of the analysis. The authors explicitly state
that no real-world design decisions should be made using the
information presented here without doing an appropriate, com-
plete, and sufﬁciently detailed analysis. The case study presented
here is for demonstration purposes only.
The following case study demonstrates the mechanism of the
method on a simplified functional model of the advanced diagnostics
and prognostics testbed electrical power system, which
was designed to be analogous to power systems found on
the space shuttle and future crewed American spacecraft. Various
model descriptions of this system have been used in prior work to
demonstrate failure simulation in conceptual design for FFIP
and IBFM. In general, the model consists of a battery, an
inverter, and three loads—a fan, a pump, and an indicator light.
The model also contains a switch and several breakers. The func-
tionality of this system—which is used as a seed model—is cap-
tured in Fig. 3. The remainder of this section will address the
question, “in what ways might we redesign the functional archi-
tecture of this system to improve system reliability?”
For this example, the IBFM simulation is speciﬁed as in
Ref. , and failure mode probabilities are assumed to be
equal—analogous to a noninformative prior.
After specifying the seed model to deﬁne the local search space,
alternatives are iteratively generated. To facilitate this example, a
simple set of grammar rules is shown in Table 1. A much more
comprehensive and data-driven graph rewriting language for func-
tional models of electromechanical products was presented in
Ref. . Figure 3 shows an application of the rule “add parallel
subgraph” between two randomly selected edges, indicated by the
dashed lines. The backbone of the inserted subgraph is shown via
the same dashed lines. Additional nodes and edges are added to
this new subgraph until the resulting model adheres to conserva-
tion of mass and energy. These additional components are indi-
cated with long dashed lines.
This process is repeated to generate a population of randomly
perturbed models in the local design space. Next, each model in
the population is simulated using IBFM, and a score is calculated
for the performance of each model. Snippets of two failure heat
maps for two generated concepts are shown in Figs. 4 and 5.
These snippets capture the ﬂows with the highest failure rate in
each model. While the model in Fig. 4 would be characterized by
its highest flow failure rate of 50, the model in Fig. 5 would be
quantified according to its (comparatively better) worst-case flow
failure rate of 35. It should be noted that while this case study
uses low failure rate, medium failure rate, and high failure rate as
generic terms, a real-world analysis performed using the method
would set these terms to numeric values that are appropriate to the
specific system being analyzed and the customer.

Fig. 2 Visualization of roulette wheel sampling with branching
factor of 1. Generated models expand outward into the search
space toward local regions that are potentially interesting (as
opposed to optimal). Higher fitness is represented as light, and
lower fitness as dark. When the search concludes, results are
selected for presentation to the user with respect to performance
and global uniqueness.
Next, candidates from the current population are selected for iter-
ation according to performance and uniqueness, as illustrated in
Fig. 2. While the model in Fig. 4 has poor performance, it may still have a high
probability of selection if it is extremely different from the rest of
the current population. After selection, the next generation is itera-
tively resampled and created until the stopping criteria are met.
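The selection step can be sketched as roulette wheel sampling weighted by the product of fitness and uniqueness; the fitness and distance functions below are placeholder assumptions rather than the framework's actual graph-similarity measure:

```python
import random

def uniqueness(i, population, distance):
    """Mean pairwise distance from model i to the rest of the current
    population; a stand-in for the graph-similarity measure implied
    by 'global uniqueness' in Fig. 2."""
    others = [m for j, m in enumerate(population) if j != i]
    return sum(distance(population[i], m) for m in others) / len(others)

def roulette_select(population, fitness, distance, k, rng=None):
    """Sample k candidates with probability proportional to
    fitness * uniqueness, so a poorly performing but highly unusual
    model still has a real chance of selection."""
    rng = rng or random.Random(0)
    weights = [fitness(m) * uniqueness(i, population, distance)
               for i, m in enumerate(population)]
    return rng.choices(population, weights=weights, k=k)
```

Because the weight is a product, a model with mediocre fitness but a large mean distance from its peers can outweigh a strong but redundant one.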
Ultimately, a series of varied heat maps as shown in Figs. 4
and 5 is presented to the user. Based on the model in Fig. 4, a
user may realize that they need to pursue alternative functions for
cooling the inverter, while the model in Fig. 5 may persuade
the user to investigate adding parallel cooling functionality.
The method presented in this paper contains several beneﬁts for
practitioners as well as a few open questions on the philosophy of
failure events. This section discusses the beneﬁts and open ques-
tions of the method.
A signiﬁcant beneﬁt of the method is the ability for systems
engineers to identify functional models that conform to desired
ﬂow failure concentration levels. The systems engineer can drive
model iteration toward either a highly concentrated ﬂow failure
paradigm or a distributed ﬂow failure paradigm. While the case
study mentioned previously demonstrates evolving a model
toward a solution that distributes failure ﬂow concentrations
across the model by adding in redundancy, speciﬁc system design
considerations may warrant concentrating failed ﬂows into a few
speciﬁc ﬂows. Concentrating failure ﬂow into a few ﬂows may be
beneficial, for instance, if systems engineers are including sacrificial
subsystems. In other situations, it may be beneficial to
spread out failure flows across several redundant subsystems.
No other method that the authors are aware of provides practi-
tioners with the ability to easily understand what ﬂow paths fail-
ures preferentially follow as the model changes. As compared to
standard IBFM, this generative method provides insights into how
the distribution of emergent failures changes with subtle shifts in
functional architecture. Additionally, most other function-and-
ﬂow-based methods of failure and risk analysis used during the
conceptual phase of system design are focused on failure of func-
tions. Examining the ﬂows rather than the functions can provide
new insights into which ﬂows are the most likely to be implicated
in failure events. This in turn can lead to systematic design efforts
to mitigate those speciﬁc failure ﬂows.
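A flow-centric tally of this kind can be sketched as follows, under the simplifying assumption that each simulated scenario is reduced to a set of failed-flow identifiers:

```python
from collections import Counter

def flow_failure_heat(scenarios):
    """Tally how often each flow is implicated across simulated failure
    scenarios. `scenarios` is an iterable of sets of failed-flow
    identifiers, a simplification of batch simulation output; the
    result is the data behind heat maps like Figs. 4 and 5."""
    heat = Counter()
    for failed_flows in scenarios:
        heat.update(failed_flows)
    return heat
```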
A beneﬁt of the heat mapping of failure ﬂow concentrations is
that emergent failure ﬂow behaviors that otherwise would be
missed can be examined by systems engineers. This may provide
new insights into emergent system behavior that otherwise would
not be available. Emergent system behavior is a signiﬁcant con-
cern in complex systems and has been implicated in several past
noteworthy failures [100–102].
It should be noted that this is a stochastic design space search
method with a loose deﬁnition of optimality. Because the goal of
this method is to facilitate human-in-the-loop exploration of sys-
tem concepts, Pareto optimality (as a function of performance and
uniqueness) is useful only as an approximation. Uniqueness in
particular depends on contextual factors including the designer’s
preferences and the other models in the population. Further,
designers should be aware of the limitations of Arrow's theorem
with respect to multivariable optimization, especially with
human-guided preferences.
Fig. 3 Functional model of electrical power system

031001-6 / Vol. 19, SEPTEMBER 2019 Transactions of the ASME

Changing the underlying probabilities of the functions failing
results in changed failure flow path likelihoods. This is in line
with how "cut set" probabilities in PRA change when the basic
event failure probabilities are modified. It may be useful for practitioners
to perform sensitivity studies by varying probabilities in
the models to determine if specific failure flow paths have consistently
high failure rates. In such a scenario, the failure flow paths
with consistently high failure rates likely should be mitigated to
reduce the failure rate to a more acceptable level.
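One way such a sensitivity study might be sketched, with the simulator, base probabilities, and threshold all illustrative stand-ins for a batch of qualitative simulation runs:

```python
import random

def sensitivity_study(simulate, base_probs, n_trials=50, spread=0.5,
                      threshold=30, rng=None):
    """Perturb each function's failure probability by up to +/- spread
    (multiplicatively), rerun the simulation, and return the flows
    whose failure rate exceeds `threshold` in every trial.
    `simulate` maps {function: probability} to {flow: failure rate};
    the threshold is illustrative, not prescribed by the method."""
    rng = rng or random.Random(0)
    consistently_high = None
    for _ in range(n_trials):
        probs = {fn: p * rng.uniform(1.0 - spread, 1.0 + spread)
                 for fn, p in base_probs.items()}
        high = {flow for flow, rate in simulate(probs).items()
                if rate >= threshold}
        consistently_high = (high if consistently_high is None
                             else consistently_high & high)
    return consistently_high  # candidate flows for mitigation
```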
Predictive methods based on behavior models are sensitive to
modeling assumptions, while methods based on historical data are
sensitive to the particular characteristics and operating conditions
in which historical data is collected. While many components,
subsystems, and systems are similar to those previously con-
structed with respect to idealized behaviors and failure probabil-
ity, true novelty may cause such assumptions to be invalid.
However, PRA and other risk and failure methods are largely
underpinned by the concept that historical failure information is a
valid data source. Another potential issue is that historical
data will only include information on what has happened. This
may cause specific failures that have not been observed but that
could occur to be missed in the analysis. The authors advise
practitioners to carefully examine whether a truly novel system, subsystem,
or component is being included in the models and, if so, additional
work to determine realistic behavior modes and failure
probabilities is warranted. Further, if sufficient historical data are
not available to assuage the practitioner that all likely failures
have been observed, additional work in identifying potential fail-
ures may be necessary.
While this article defines failure flow as a flow that is either
unexpectedly present or unexpectedly absent,
there are a variety of other related definitions of failure flow and
of the concept of a failure moving along flow paths through the functional
model of a system. For instance, failure flow can be defined
in the context of a failure moving between components or functions.
Failure flow can also be defined as there being too
high or too low of a flow, as a transient non-nominal condition
in a flow that causes a steady-state failure in a function
[14,16], as the reversal of a flow, or as a failure that jumps
between functions without following a nominal flow path.

Fig. 4 A snippet heat map of a model with poor performance. The fan module fails in
many scenarios, indicated as a high failure rate in the flows related to cooling the
inverter. In some cases, the failure propagates to the flows related to the inverter,
which increases the failure rate of those flows.

A more expansive definition of failure flow may be useful in expanding
the capabilities of the method presented in this article. However,
including more types of failure flows may significantly
increase the complexity of the simulations and preparatory work,
which may lessen the usefulness of the method as an ideation tool
during conceptual system design. Identifying how to strike a middle
ground between a restrictive definition and an expansive definition
of failure flow may be a fruitful area of future research.
Validating the results of the method presented in this article is
an important step that must be performed by a human. The
method has been designed with the expectation that a human is
included in the loop of iterating upon and evaluating new func-
tional models, in order to validate that those models are reasona-
ble in the context of the system being analyzed. While it may be
possible to fully automate the method with a sufﬁciently robust
model library and extensive graph grammars, achieving a sufﬁ-
ciently high level of accuracy to support full automation may be
overly burdensome on the practitioner. This method is meant to
be used in conceptual system design when rapid analysis and
design iterations are desirable. Other potential methods of
partially validating the underlying method exist, although such
validations have already been conducted in the literature. For
instance, the simulation itself could be validated, but the simulation
approach (IBFM) is separate work that has already been published
[15,68]. The usefulness of alternative functional models in
ideation is likewise already widely accepted in the
literature. While it would be possible to validate
whether alternative functional models generated from a library of
behavior simulations are useful for ideation, such an approach
requires a signiﬁcant and robust model library that does not cur-
rently exist. Creation of a robust model library would likely only
be useful for a speciﬁc class of systems and would not guarantee
validity of the method for other systems. For these reasons, the
authors recommend that a human remain in the loop to provide a
“sanity check” on ideated functional models, to perform selection
of the best models to be further iterated, and to determine when to
terminate the iteration loop.
One open area of research on the method is what happens
when two models are compared where one has no redundancy
and the other has many parallel
redundancies. IBFM currently does not heavily penalize the cost
of adding new nodes. It may be desirable to adjust the penalty
function parameters for adding redundancies to a system model to
assist in the trade-off between the costs associated with adding
redundancy and the beneﬁts of added redundancy to mitigate
potential failures. However, systems engineers must consider if
parallel ﬂow redundancy provides true beneﬁt in stopping a failure
ﬂow before the ﬂow leads to system failure, or if redundant ﬂows
merely provide alternative pathways to system failure as in the
case of a drop in electrical voltage propagating through redundant
power feeds in a data center. In the data center’s case, had the
energy ﬂows been truly independent and redundant, a failure ﬂow
caused by a voltage drop on one of the power lines coming into
the facility likely would not have impacted the other redundant
power lines and electrical distribution systems in the facility.
An area of future work is to combine the concept of "cut sets,"
derived from PRA and used in some FFIP-based methods, with the
vulnerability of each type of flow and of redundant subsystems, and
to compare different models using global metrics (e.g., the ratio of
failed flows per model). Further refining IBFM's method of
optimization within the context of the method presented in this
paper is expected to be a useful area of further research.
Another potential area of future work centers on how probabil-
ities are calculated and assigned to the functional model. While
this research assigns informative priors (i.e., probability distribu-
tions that are grounded in empirical data of past failures on the
same or similar components or functions), it may be useful to look
at noninformative priors (i.e., probability distributions that are
uniform for all functions or components and have no historical
knowledge of component or function performance).
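For an illustrative contrast between the two kinds of priors, the standard conjugate Beta-binomial update shows how, with sparse data, the choice of prior dominates the posterior failure probability; the counts below are invented for illustration:

```python
from fractions import Fraction

def beta_posterior_mean(alpha, beta, failures, demands):
    """Posterior mean failure probability under a Beta(alpha, beta)
    prior after observing `failures` in `demands` trials (the standard
    conjugate update used in Bayesian PRA)."""
    return Fraction(alpha + failures, alpha + beta + demands)

# An informative prior built from (illustrative) historical counts,
# versus a noninformative uniform Beta(1, 1) prior, updated with the
# same sparse observation of 2 failures in 10 demands.
informative = beta_posterior_mean(5, 95, failures=2, demands=10)
noninformative = beta_posterior_mean(1, 1, failures=2, demands=10)
```

With only ten demands observed, the informative posterior mean stays near the historical rate while the uniform prior yields a much higher estimate, which is the trade-off the comparison above would probe.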
Fig. 5 A snippet heat map of a model with medium performance. In this case, grammar rules have added an additional subgraph
for exporting material, which led to a reduced rate of failure in the associated flows.

While many PRA methods are by definition concerned with
both the likelihood and consequence of failures, the approach in
this paper addresses only likelihood. Because of the high level of
abstraction of functional models, and the necessity of using contextual
information to assess the consequences of a failure, evaluating
failure severity is purposely left to the user. The challenge
of capturing context and failure consequences is deferred to future work.
The framework presented in this paper represents a way to gen-
eratively explore a space of functional models, assess their vulner-
ability to failure, and present a designer with a variety of
alternative options. The approach is human-in-the-loop; the
designer must interpret the results according to the speciﬁc con-
text of the problem. Given a library of IBFM models and a graph
rewriting language for perturbing functional models, this approach
enables a designer to make quick risk-of-failure-informed decisions
about functional architectures. These decisions are founded
not only on experience or historical data, but on (1) qualitative
simulation of potential failure propagation and (2) a set of
solutions automatically generated to mitigate those failures. This
allows systems designers to make large system architectural decisions
very early in the conceptual design process, where making
decisions and significantly changing the design is relatively
inexpensive in both cost and schedule time.
This research is partially supported by the Naval Postgraduate
School, the Singapore University of Technology and Design, the
Technical University of Denmark, and Oregon State University.
The authors have previously been or are currently employed by
the aforementioned institutions during the development of this research.
Walden, D. D., Roedler, G. J., Forsberg, K., Hamelin, R. D., and Shortell, T.
M., 2015, Systems Engineering Handbook: A Guide for System Life Cycle
Processes and Activities, Wiley, Hoboken, NJ.
Ullman, D. G., 2015, The Mechanical Design Process, McGraw-Hill Science/
Engineering/Math, New York.
 Browning, T. R., and Eppinger, S. D., 2002, “Modeling Impacts of Process
Architecture on Cost and Schedule Risk in Product Development,” IEEE
Trans. Eng. Manage.,49(4), pp. 428–442.
 Browning, T. R., 1998, “Sources of Schedule Risk in Complex System Devel-
opment,” Eighth Annual International Symposium of INCOSE, Vancouver,
BC, Canada, July 26–30, pp. 129–142.
 Wang, J. X., and Roush, M. L., 2000, What Every Engineer Should Know
About Risk Engineering and Management, CRC Press, New York.
Stamatis, D. H., 2003, Failure Mode and Effect Analysis: FMEA From Theory
to Execution, ASQ Quality Press, Milwaukee, WI.
Ericson, C., 2005, "Event Tree Analysis," Hazard Analysis Techniques for
System Safety, Wiley, Hoboken, NJ, pp. 223–234.
 Ericson, C. A., 2015, “Fault Tree Analysis,” Hazard Analysis Techniques Sys-
tem Safety, Wiley, Hoboken, NJ, pp. 183–221.
 Kurtoglu, T., Tumer, I. Y., and Jensen, D. C., 2010, “A Functional Failure
Reasoning Methodology for Evaluation of Conceptual System Architectures,”
Res. Eng. Des.,21(4), pp. 209–234.
 Jensen, D., Tumer, I. Y., and Kurtoglu, T., 2009, “Flow State Logic (FSL) for
Analysis of Failure Propagation in Early Design,” ASME Paper No.
 O’Halloran, B. M., Papakonstantinou, N., and Van Bossuyt, D. L., 2015,
“Modeling of Function Failure Propagation Across Uncoupled Systems,” Reli-
ability and Maintainability Symposium (RAMS), Palm Harbor, FL, Jan.
26–29, pp. 1–6.
 L’her, G., Van Bossuyt, D. L., and O’Halloran, B. M., 2017, “Prognostic Sys-
tems Representation in a Function-Based Bayesian Model During Engineering
Design,” Int. J. Prognostics Health Manage.,8(2), p. 23.
 O’Halloran, B. M., Papakonstantinou, N., and Van Bossuyt, D. L., 2016,
“Cable Routing Modeling in Early System Design to Prevent Cable Failure
Propagation Events,” Reliability and Maintainability Symposium (RAMS),
Tucson, AZ, Jan. 25–28, pp. 1–6.
 Dempere, J., Papakonstantinou, N., O’Halloran, B., and Van Bossuyt, D. L.,
2017, “Risk Modeling of Variable Probability External Initiating Events,”
Reliability and Maintainability Symposium (RAMS), Orlando, FL, Feb. 23–26,
 McIntire, M. G., Keshavarzi, E., Tumer, I. Y., and Hoyle, C., 2016,
“Functional Models With Inherent Behavior: Towards a Framework for Safety
Analysis Early in the Design of Complex Systems,” ASME Paper No.
 Dempere, J., Papakonstantinou, N., O’Halloran, B., and Van Bossuyt, D. L.,
2018, “Risk Modeling of Variable Probability External Initiating Events in a
Functional Modeling Paradigm,” J. Reliab., Maintainability, Supportability in
Otto, K., and Wood, K., 2001, Product Design: Techniques in Reverse Engi-
neering and New Product Design, Prentice Hall, Upper Saddle River, NJ.
 Stone, R. B., and Wood, K. L., 2000, “Development of a Functional Basis for
Design,” ASME J. Mech. Des.,122(4), pp. 359–370.
Sage, A. P., and Rouse, W. B., 2009, Handbook of Systems Engineering and
Management, Wiley, Hoboken, NJ.
Kapurch, S. J., 2010, NASA Systems Engineering Handbook, Diane Publish-
ing, Hanover, MD.
 Cornford, S. L., Feather, M. S., and Hicks, K. A., 2001, “DDP—A Tool for
Life-Cycle Risk Management,” Aerospace Conference, Big Sky, MT, Mar.
10–17, pp. 1–441.
 Stamatelatos, M., Dezfuli, H., Apostolakis, G., Everline, C., Guarro, S.,
Mathias, D., Mosleh, A., Paulos, T., Riha, D., Smith, C., Vesely, W., and
Youngblood, R., 2011, “Probabilistic Risk Assessment Procedures Guide for
NASA Managers and Practitioners,” National Aeronautics and Space Admin-
istration, Hanover, MD, Report No. NASA/SP-2011-3421.
 Otto, K. N., and Antonsson, E. K., 1991, “Trade-Off Strategies in Engineering
Design,” Res. Eng. Des.,3(2), pp. 87–103.
 Estefan, J. A., 2007, “Survey of Model-Based Systems Engineering (MBSE)
Methodologies,” INCOSE MBSE Focus Group,25(8), pp. 1–12.
 Wertz, J. R., Everett, D. F., and Puschell, J. J., 2011, Space Mission Engineer-
ing: The New SMAD, Microcosm Press, Hawthorn, CA.
 Sen, P., and Yang, J.-B., 2012, Multiple Criteria Decision Support in Engi-
neering Design, Springer Science and Business Media, London.
 Blanchard, B. S., and Fabrycky, W. J., 1998, Systems Engineering and Analy-
sis, Prentice Hall, Upper Saddle River, NJ.
 Haskins, C., Forsberg, K., Krueger, M., Walden, D., and Hamelin, D., 2006,
Systems Engineering Handbook, INCOSE, Hoboken, NJ.
 Friedenthal, S., Moore, A., and Steiner, R., 2014, A Practical Guide to SysML:
The Systems Modeling Language, Morgan Kaufmann, Waltham, MA.
 Erden, M. S., Komoto, H., van Beek, T. J., D’Amelio, V., Echavarria, E., and
Tomiyama, T., 2008, “A Review of Function Modeling: Approaches and
Applications,” AI Edam,22(2), pp. 147–169.
 Houkes, W., and Vermaas, P. E., 2010, Technical Functions: On the Use and
Design of Artefacts, Vol. 1, Springer Science and Business Media, Berlin.
 Chakrabarti, A., Shea, K., Stone, R., Cagan, J., Campbell, M., Hernandez, N.
V., and Wood, K. L., 2011, “Computer-Based Design Synthesis Research: An
Overview,” ASME J. Comput. Inf. Sci. Eng.,11(2), p. 021003.
 Umeda, Y., Takeda, H., Tomiyama, T., and Yoshikawa, H., 1990, “Function,
Behaviour, and Structure,” Applications of Artiﬁcial Intelligence in Engineer-
ing, J. S. Gero, ed., Springer, Berlin, pp. 177–193.
 Umeda, Y., Ishii, M., Yoshioka, M., Shimomura, Y., and Tomiyama, T., 1996,
“Supporting Conceptual Design Based on the Function-Behavior-State Mod-
eler,” AI Edam,10(4), pp. 275–288.
 Umeda, Y., Tomiyama, T., and Yoshikawa, H., 1995, “FBS Modeling: Model-
ing Scheme of Function for Conceptual Design,” Ninth International Work-
shop on Qualitative Reasoning, Amsterdam, The Netherlands, pp. 271–278.
Umeda, Y., and Tomiyama, T., 1997, "Functional Reasoning in Design,"
IEEE Expert,12(2), pp. 42–48.
 Tomiyama, T., Umeda, Y., and Yoshikawa, H., 1993, “A CAD for Functional
Design,” CIRP Ann. Manuf. Technol.,42(1), pp. 143–146.
Yoshioka, M., Umeda, Y., Takeda, H., Shimomura, Y., Nomaguchi, Y., and
Tomiyama, T., 2004, “Physical Concept Ontology for the Knowledge Inten-
sive Engineering Framework,” Adv. Eng. Inf.,18(2), pp. 95–113.
 Shimomura, Y., Yoshioka, M., Takeda, H., Umeda, Y., and Tomiyama, T.,
1998, “Representation of Design Object Based on the Functional Evolution
Process Model,” ASME J. Mech. Des.,120(2), pp. 221–229.
 Kitamura, Y., Kashiwase, M., Fuse, M., and Mizoguchi, R., 2004,
“Deployment of an Ontological Framework of Functional Design Knowl-
edge,” Adv. Eng. Inf.,18(2), pp. 115–127.
 Goel, A. K., Rugaber, S., and Vattam, S., 2009, “Structure, Behavior, and
Function of Complex Systems: The Structure, Behavior, and Function Model-
ing Language,” AI Edam,23(1), pp. 23–35.
 Goel, A. K., and Bhatta, S. R., 2004, “Use of Design Patterns in Analogy-
Based Design,” Adv. Eng. Inf.,18(2), pp. 85–94.
 Bhatta, S., Goel, A., and Prabhakar, S., 1994, “Innovation in Analogical
Design: A Model-Based Approach,” Artiﬁcial Intelligence in Design’94,
Springer, Berlin, pp. 57–74.
 Yaner, P. W., and Goel, A. K., 2006, “From Form to Function: From
SBF to DSSBF,” Design Computing and Cognition’06, Springer, Dordrecht,
The Netherlands, pp. 423–441.
Bracewell, R. H., and Sharpe, J., 1996, "Functional Descriptions Used in Com-
puter Support for Qualitative Scheme Generation–‘Schemebuilder’,” AI
Edam,10(4), pp. 333–345.
Welch, R. V., and Dixon, J. R., 1992, "Representing Function, Behavior and
Structure During Conceptual Design,” Fourth International Conference on
Design Theory and Methodology, Scottsdale, AZ, Sept. 13–16, pp. 11–18.
Welch, R. V., and Dixon, J. R., 1994, "Guiding Conceptual Design Through
Behavioral Reasoning,” Res. Eng. Des.,6(3), pp. 169–188.
 Deng, Y.-M., Britton, G., and Tor, S. B., 2000, “Constraint-Based Functional
Design Veriﬁcation for Conceptual Design,” Comput. Aided Des.,32(14), pp.
 Deng, Y.-M., 2002, “Function and Behavior Representation in Conceptual
Mechanical Design,” AI Edam,16(5), pp. 343–362.
 Chakrabarti, A., and Bligh, T. P., 2001, “A Scheme for Functional Reasoning
in Conceptual Design,” Des. Stud.,22(6), pp. 493–517.
 Chakrabarti, A., Sarkar, P., Leelavathamma, B., and Nataraju, B., 2005, “A
Functional Representation for Aiding Biomimetic and Artiﬁcial Inspiration of
New Ideas,” AI Edam,19(2), pp. 113–132.
 Van Wie, M., Bryant, C. R., Bohm, M. R., McAdams, D. A., and Stone, R. B.,
2005, “A Model of Function-Based Representations,” AI Edam,19(2), pp.
 Gero, J. S., 1990, “Design Prototypes: A Knowledge Representation Schema
for Design,” AI Mag.,11(4), p. 26.
 Gero, J. S., and Kannengiesser, U., 2004, “The Situated Function–
Behaviour–Structure Framework,” Des. Stud.,25(4), pp. 373–391.
 Dorst, K., and Vermaas, P. E., 2005, “John Gero’s Function-Behaviour-Structure
Model of Designing: A Critical Analysis,” Res. Eng. Des.,16(1–2), pp. 17–26.
 Snooke, N., and Price, C., 1998, “Hierarchical Functional Reasoning,” Knowl.
Based Syst.,11(5–6), pp. 301–309.
 Chandrasekaran, B., and Josephson, J. R., 2000, “Function in Device Repre-
sentation,” Eng. Comput.,16(3–4), pp. 162–177.
Chandrasekaran, B., 2005, "Representing Function: Relating Functional Rep-
resentation and Functional Modeling Research Streams,” AI Edam,19(2), pp.
Keuneke, A. M., 1991, "Device Representation: The Significance of Functional
Knowledge,” IEEE Expert,6(2), pp. 22–25.
 Keuneke, A., and Allemang, D., 1989, “Exploring the No-Function-in-Struc-
ture Principle,” J. Exp. Theor. Artif. Intell.,1(1), pp. 79–89.
 Sen, C., Summers, J. D., and Mocko, G. M., 2011, “A Protoc ol to Formalise
Function Verbs to Support Conservation-Based Model Checking,” J. Eng.
Des.,22(11–12), pp. 765–788.
Kurtoglu, T., and Campbell, M. I., 2009, "Automated Synthesis of Electrome-
chanical Design Conﬁgurations From Empirical Analysis of Function to Form
Mapping,” J. Eng. Des.,20(1), pp. 83–104.
 Sridharan, P., and Campbell, M. I., 2005, “A Study on the Grammatical Con-
struction of Function Structures,” AI Edam,19(3), pp. 139–160.
 Campbell, M. I., 2009, “A Graph Grammar Methodology for Generative Sys-
tems,” University of Texas, Austin, TX, Technical Report No. 2009-001.
 Helms, B., and Shea, K., 2012, “Computational Synthesis of Product Architec-
tures Based on Object-Oriented Graph Grammars,” ASME J. Mech. Des.,
134(2), p. 021008.
 Qian, L., and Gero, J. S., 1996, “Function–Behavior–Structure Paths and Their
Role in Analogy-Based Design,” AI Edam,10(4), pp. 289–312.
 Bohm, M. R., Stone, R. B., and Szykman, S., 2005, “Enhancing Virtual Prod-
uct Representations for Advanced Design Repository Systems,” ASME J.
Comput. Inf. Sci. Eng.,5(4), pp. 360–372.
 McIntire, M. G., 2016, “From Functional Modeling to Optimization: Risk and
Safety in the Design Process for Large-Scale Systems,” Ph.D. thesis, Oregon
State University, Corvallis, OR.
 Keshavarzi, E., McIntire, M., Goebel, K., Tumer, I. Y., and Hoyle, C., 2017,
“Resilient System Design Using Cost-Risk Analysis With Functional Models,”
ASME Paper No. DETC2017-67952.
 Slater, M. R., and Van Bossuyt, D. L., 2015, “Toward a Dedicated Failure
Flow Arrestor Function Methodology,” ASME Paper No. DETC2015-46270.
 Short, A. R., and Van Bossuyt, D. L., 2015, “Active Mission Success Estima-
tion Through Phm-Informed Probabilistic Modelling,” Annual Conference of
the Prognostics and Health Management Society, Coronado, CA, Oct. 18–24,
Publication Control No. 051.
 Short, A.-R., Lai, A. D., and Van Bossuyt, D. L., 2018, “Conceptual Design of
Sacriﬁcial Sub-Systems: Failure Flow Decision Functions,” Res. Eng. Des.,
29(1), pp. 23–38.
 Arlitt, R., Van Bossuyt, D. L., Stone, R. B., and Tumer, I. Y., 2017, “The
Function-Based Design for Sustainability Method,” ASME J. Mech. Des.,
139(4), p. 041102.
 Lynch, K., Ramsey, R., Ball, G., Schmit, M., and Collins, K., 2016,
“Ontology-Driven Metamodel Validation in Cyber-Physical Systems,” Infor-
mation Technology: New Generations, Springer, Cham, Switzerland, pp.
 Sztipanovits, J., Bapty, T., Neema, S., Howard, L., and Jackson, E., 2014,
“Openmeta: A Model-and Component-Based Design Tool Chain for Cyber-
Physical Systems,” Joint European Conferences on Theory and Practice of
Software, Grenoble, France, Apr. 5–13, pp. 235–248.
 Simko, G., Lindecker, D., Levendovszky, T., Neema, S., and Sztipanovits, J., 2013,
“Speciﬁcation of Cyber-Physical Components With Formal Semantics–Integration
and Composition,” International Conference on Model Driven Engineering Lan-
guages and Systems, Miami, FL, Sept. 29–Oct. 4, pp. 471–487.
 Henley, E. J., and Kumamoto, H., 1981, Reliability Engineering and Risk
Assessment, Vol. 568, Prentice Hall, Englewood Cliffs, NJ.
 O’Halloran, B. M., Haley, B., Jensen, D. C., Arlitt, R., Tumer, I. Y., and
Stone, R. B., 2014, “The Early Implementation of Failure Modes Into Existing
Component Model Libraries,” Res. Eng. Des.,25(3), pp. 203–221.
 Department of Defense, 1949, “Procedures for Performing a Failure Mode,
Effects and Criticality Analysis,” Department of Defense, Washington, DC,
 Tumer, I., Barrientos, F., and Mehr, A. F., 2005, “Towards Risk Based Design
(RBD) of Space Exploration Missions: A Review of RBD Practice and
Research Trends at NASA,” ASME Paper No. DETC2005-85100.
 Stone, R. B., Tumer, I. Y., and Van Wie, M., 2005, “The Function-Failure
Design Method,” ASME J. Mech. Des.,127(3), pp. 397–407.
 Lough, K. G., Stone, R., and Tumer, I. Y., 2009, “The Risk in Early Design
Method,” J. Eng. Des.,20(2), pp. 155–173.
 IAEA, 1993, “Deﬁning Initiating Events for Purpose of Probabilistic Safety
Assessment,” International Atomic Energy Agency, Vienna, Austria, Report
 Zamanali, J., 1998, “Probabilistic-Risk-Assessment Applications in the
Nuclear-Power Industry,” IEEE Trans. Reliab.,47(3), pp. SP361–SP364.
Gilks, W. R., Richardson, S., and Spiegelhalter, D., 1995, Markov Chain
Monte Carlo in Practice, CRC Press, London.
 Gilks, W. R., 2005, “Markov Chain Monte Carlo,” Encyclopedia of Biostatis-
tics, Wiley, Chichester, UK.
 Brooks, S., Gelman, A., Jones, G., and Meng, X.-L., 2011, Handbook of Mar-
kov Chain Monte Carlo, CRC Press, Boca Raton, FL.
 David, P., Idasiak, V., and Kratz, F., 2010, “Reliability Study of Complex Physical
Systems Using SysML," Reliab. Eng. Syst. Saf., 95(4), pp. 431–450.
 Beck, J. L., and Au, S.-K., 2002, “Bayesian Updating of Structural Models
and Reliability Using Markov Chain Monte Carlo Simulation,” J. Eng. Mech.,
128(4), pp. 380–391.
 Koutra, D., Parikh, A., Ramdas, A., and Xiang, J., 2011, “Algorithms for
Graph Similarity and Subgraph Matching,” Ecology Inference Conference,
Carnegie-Mellon-University, Pittsburgh, PA, Technical Report.
 Mehrpouyan, H., Haley, B., Dong, A., Tumer, I. Y., and Hoyle, C., 2015,
“Resiliency Analysis for Complex Engineered System Design,” AI Edam,
29(1), pp. 93–108.
 Haley, B. M., Dong, A., and Tumer, I. Y., 2016, “A Comparison of Network-
Based Metrics of Behavioral Degradation in Complex Engineered Systems,”
ASME J. Mech. Des.,138(12), p. 121405.
 Fu, K., Chan, J., Cagan, J., Kotovsky, K., Schunn, C., and Wood, K., 2013,
“The Meaning of ‘Near’ and ‘Far’: The Impact of Structuring Design Data-
bases and the Effect of Distance of Analogy on Design Output,” ASME J.
Mech. Des.,135(2), p. 021007.
 Poppa, K., Arlitt, R., and Stone, R., 2013, “An Approach to Automated Con-
cept Generation Through Latent Semantic Indexing,” IIE Annual Conference,
Institute of Industrial and Systems Engineers (IISE), San Juan, Puerto Rico,
May 18–22, p. 151.
 Clarke, E. M., Grumberg, O., and Peled, D., 1999, Model Checking, MIT
Press, Cambridge, MA.
 Ester, M., Kriegel, H.-P., Sander, J., and Xu, X., 1996, “A Density-Based
Algorithm for Discovering Clusters in Large Spatial Databases With Noise,”
Second International Conference on Knowledge Discovery and Data Mining,
Portland, OR, Aug. 2–4, pp. 226–231.
 Arnold, C. R. B., Stone, R. B., and McAdams, D. A., 2008, “Memic: An Inter-
active Morphological Matrix Tool for Automated Concept Generation,” IIE
Annual Conference, Institute of Industrial and Systems Engineers (IISE), Sin-
gapore, Dec. 8–11, p. 1196.
 Poll, S., Patterson-Hine, A., Camisa, J., Garcia, D., Hall, D., Lee, C.,
Mengshoel, O. J., Neukom, C., Nishikawa, D., Ossenfort, J., Sweet, A., Yentus,
S., Roychoudhury, I., Daigle, M., Biswas, G., and Koutsoukos, X., 2007,
“Advanced Diagnostics and Prognostics Testbed,” 18th International Workshop
on Principles of Diagnosis (DX-07), Nashville, TN, May 29–31, pp. 178–185.
 Keller, W., and Modarres, M., 2005, “A Historical Overview of Probabilistic
Risk Assessment Development and Its Use in the Nuclear Power Industry: A
Tribute to the Late Professor Norman Carl Rasmussen,” Reliab. Eng. Syst.
Saf.,89(3), pp. 271–285.
 Bly, M., 2011, Deepwater Horizon Accident Investigation Report, Diane Pub-
lishing, Washington, DC.
 Ramp, I. J., and Van Bossuyt, D. L., 2014, “Toward an Automated
Model-Based Geometric Method of Representing Function Failure Propagation
Across Uncoupled Systems,” ASME Paper No. IMECE2014-36514.
 Dekker, S., Cilliers, P., and Hofmeyr, J.-H., 2011, “The Complexity of Failure:
Implications of Complexity Theory for Safety Investigations,” Saf. Sci.,49(6),
Scott, M. J., and Antonsson, E. K., 1999, "Arrow's Theorem and Engineering
Design Decision Making,” Res. Eng. Des.,11(4), pp. 218–228.
 Kelly, D., and Smith, C., 2011, Bayesian Inference for Probabilistic Risk
Assessment: A Practitioner’s Guidebook, Springer Science and Business
 Van Bossuyt, D. L., O’Halloran, B. M., and Arlitt, R. M., 2018, “Irrational
System Behavior in a System of Systems,” IEEE 13th Annual Conference on
System of Systems Engineering (SoSE), Paris, France, June 19–22, pp.
Grunske, L., Kaiser, B., and Papadopoulos, Y., 2005, "Model-Driven Safety
Evaluation With State-Event-Based Component Failure Annotations,” Interna-
tional Symposium on Component-Based Software Engineering, St Louis, MO,
May 14–15, pp. 33–48.
O’Halloran, B. M., Papakonstantinou, N., Giammarco, K., and Van Bossuyt,
D. L., 2017, “A Graph Theory Approach to Functional Failure Propagation in
Early Complex Cyber-Physical Systems (CCPSs),” INCOSE International
Symposium, Adelaide, Australia, July 15–20, pp. 1734–1748.
Shah, J. J., Smith, S. M., and Vargas-Hernandez, N., 2003, "Metrics for Meas-
uring Ideation Effectiveness,” Des. Stud.,24(2), pp. 111–134.