Article · PDF available

Conceptual design of sacrificial sub-systems: failure flow decision functions

Abstract and Figures

This paper presents a method to conceptually model sacrificing non-critical sub-systems, or components, in a failure scenario to protect critical system functionality through a functional failure modeling technique. Understanding the potential benefits and drawbacks of choosing how a failure is directed in a system away from critical sub-systems and toward sub-systems that can be sacrificed to maintain core functionality can help system designers to design systems that are more likely to complete primary mission objectives despite failure events. Functional modeling techniques are often used during the early stage of conceptual design for complex systems to provide a better understanding of system architecture. A family of methods exists that focuses on the modeling of failure initiation and propagation within a functional model of a system. Modeling failure flow provides an opportunity to understand system failure propagation and inform system design iteration for improved survivability and robustness. Currently, the ability to model failure flow decision-making is missing from the family of function failure and flow methodologies. The failure flow decision function (FFDF) methodology presented in this paper enables system designers to model failure flow decision-making problems where functions and flows that are critical to system operation are protected through the sacrifice of less critical functions and flow exports. The sacrifice of less critical system functions and flows allows for mission critical functionality to be preserved, leading to a higher rate of mission objective completion. An example of FFDF application in a physical design is a non-critical peripheral piece of electrical hardware being sacrificed during an electrical surge condition to protect critical electronics necessary for the core functionality of the system. In this paper, a case study of the FFDF method is presented based on a Sojourner class Mars Exploration Rover (MER) platform.
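The core idea of the FFDF method, routing an incoming failure flow toward a sacrificial, low-criticality subsystem instead of letting it reach critical functions, can be illustrated with a minimal sketch. The subsystem names, criticality scores, and data layout below are illustrative assumptions for a rover-like system, not taken from the paper.

```python
# Hypothetical sketch of a failure flow decision function (FFDF):
# at a decision point, direct the failure flow to the least-critical
# subsystem that has been designated as sacrificial.

def route_failure_flow(subsystems):
    """Return the sacrificial subsystem with the lowest criticality,
    or None if no subsystem can safely absorb the failure flow."""
    sacrificial = [s for s in subsystems if s["sacrificial"]]
    if not sacrificial:
        return None  # failure flow cannot be diverted
    return min(sacrificial, key=lambda s: s["criticality"])

# Illustrative rover subsystems (names and scores are invented)
rover = [
    {"name": "flight computer", "criticality": 10, "sacrificial": False},
    {"name": "surge suppressor", "criticality": 2, "sacrificial": True},
    {"name": "aux heater", "criticality": 4, "sacrificial": True},
]
target = route_failure_flow(rover)  # the surge suppressor absorbs the surge
```

This mirrors the paper's electrical-surge example: the non-critical peripheral hardware with the lowest criticality is sacrificed so the flight-critical electronics keep operating.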
... Approaches have subsequently been presented to use graph grammars to change the structure of the model, and/or use a cost-risk analysis scoring function to compare between design alternatives [143] [142]. Additionally, an approach has been presented for designing the operational decision-making in the model to determine when to, for example, route degraded flows to sacrificial subsystems [256]. While these approaches show many of the design changes that can be made within a functional model, and can be used to compare between design alternatives, they do not use this knowledge to formally optimize a design problem. ...
... model that make them difficult to optimize together. Consider the space of possible variables to explore shown in Table 5.1, which compiles newly-identified model variables with variables identified in previous function-based fault modelling and optimization approaches (see: [143,256,108,188]). ...
... Ability to represent differences in the behaviors of functions. Conditional logic [256]: predicting the design cost of the flexibility required to allow different decisions to be made. ...
Thesis
It is desirable for complex engineered systems to perform missions efficiently and economically, even when these missions' complex, variable, long-term operational profiles make it likely for hazards to arise. It is thus important to design these systems to be resilient so that they will actively prevent and recover from hazards when they occur. To most effectively design a system to be resilient, the resilience of each design alternative should be quantified and valued so that it can be incorporated in the decision-making process. However, considering resilience in early design is challenging because resilience is a dynamic and stochastic property characterizing how the system performs over time in a set of unlikely-but-salient hazardous scenarios. Quantifying these properties thus requires a model to simulate the system's dynamic behavior and performance over the set of hazardous scenarios. Thus, to be able to incorporate resilience in the design process, there is a need to develop a framework which implements and integrates these models with design exploration and decision-making. This dissertation fulfills this need by defining resilience to enable fault simulations to be incorporated in decision-making, devising and implementing a modelling framework for early assessment of system resilience attributes, and exploring optimization architectures to efficiently structure the design exploration of resilience variables. Additionally, this dissertation provides a validity testing framework to determine when the resilient design process has been effective given the uncertainties present in the design problem. When these parts are used together, they comprise an overall framework that can be used to consider and incorporate system resilience in the early design process.
... In functional reliability analysis, Kurtoglu et al. [8] performed failure reasoning on functional failures to avoid the problem of functional failure. To reduce the impact of failure propagation on the system, Short et al. [9] proposed a function failure design method that sacrificed non-critical subsystems to maintain core functions, which could help designers complete primary mission objectives despite failure events. Later, based on the functional dependency network analysis model [10], Guariniello et al. [11] used system operational dependency analysis (SODA) to consider the internal state of the design system and better accounted for stochasticity to improve the efficiency of functional failure analysis. ...
... The algebraic operations on fuzzy values are shown in Equation (14); for example, the linguistic rating Very Strong (VS) maps to the triangular fuzzy number (7, 9, 9). The judgment matrix between sub-requirements is constructed using the AHP method [19] and its eigenvalues are obtained. Provided the consistency index CR is satisfied, the weight value of each sub-requirement is calculated by Equation (15). ...
Article
Full-text available
Reliability is a major performance index in the conceptual design decision process for electromechanical products. Because the function is the purpose of product design, a function failure readily puts the scheme design at risk. However, existing reliability analysis models focus on the failure analysis of functions but ignore the quantitative risk assessment of conceptual schemes when function failures occur. In addition, because design information is subjective and fuzzy, it is difficult to introduce a risk index into the early design stage for comprehensive decision-making. To fill this gap, this paper proposes a conceptual scheme decision model for mechatronic products driven by the risk of function failure propagation. Firstly, the function structure model is used to construct the function fault propagation model, so as to obtain the influence degree of each subfunction failure. Secondly, the principle solution weight is calculated when the function failure propagates, and the influence degree of the failure mode is integrated to obtain the severity of the failure mode on the product system. Thirdly, the risk value of a failure mode is calculated by multiplying its severity and failure probability, and the risk value of the scheme is obtained based on the influence relationships between failure modes. Finally, the VIKOR (Višekriterijumska Optimizacija i Kompromisno Rešenje) method is used to select the optimal conceptual scheme, and the cutting-speed regulating device scheme of a shearer is taken as an example to verify the effectiveness and feasibility of the proposed decision model.
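The risk computation described in the abstract, each failure mode's risk as the product of its severity and failure probability, aggregated into a scheme-level risk, can be sketched briefly. The severity and probability values below are invented for illustration, and the simple sum stands in for the paper's fuller treatment of influence relationships between failure modes.

```python
# Minimal sketch: risk of a failure mode = severity x failure probability;
# scheme risk aggregates over the failure modes of the chosen scheme.
# All numbers are illustrative assumptions.

failure_modes = [
    {"name": "drive motor stall", "severity": 8.0, "probability": 0.02},
    {"name": "sensor drift", "severity": 5.0, "probability": 0.10},
    {"name": "gearbox fracture", "severity": 9.5, "probability": 0.01},
]

mode_risks = {fm["name"]: fm["severity"] * fm["probability"] for fm in failure_modes}
scheme_risk = sum(mode_risks.values())  # simple aggregation over modes
```

A VIKOR-style decision step would then rank alternative schemes using this risk value alongside the other design criteria.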
... The FFIP family of methods produces cut-sets similar to those developed by PRA and handles truncation of analysis in a similar manner. 59 Nan ...
... Recent research on external initiating events for autonomous robotic systems has indicated that unique emergent system behaviors not predicted by other research methods can be caused by several external initiating events simultaneously occurring and interacting with one another inside of an SoI. 59,89 We suggest that all possible irrationality initiator-dependent combinations be assessed, as counted in Equation (1). Note that the formula intentionally subtracts 1 to acknowledge that the baseline case of no irrationality initiators being present in the SoI is assumed to have been previously assessed. ...
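The counting argument in the snippet above, all combinations of irrationality initiators minus the baseline case with none present, corresponds to the 2^n − 1 non-empty subsets of n initiators. A short sketch (the initiator names are invented for illustration):

```python
from itertools import combinations

def initiator_combinations(initiators):
    """Enumerate every non-empty combination of irrationality initiators.
    Excluding the empty set mirrors the '- 1' in the count: the baseline
    case with no initiators present is assumed already assessed."""
    combos = []
    for k in range(1, len(initiators) + 1):
        combos.extend(combinations(initiators, k))
    return combos

# Three hypothetical initiators give 2**3 - 1 = 7 cases to assess
combos = initiator_combinations(["EMI burst", "GPS spoofing", "sensor icing"])
```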
... 91 Redundant systems and subsystems 92,93 can be added to provide higher reliability. Sacrificial subsystems or systems 59 can be added to route failure flows caused by irrationality initiators to a location where they can do the least harm. ...
Article
Full-text available
System of interest (SoI) failures can sometimes be traced to an unexpected behavior occurring within another system that is a member of the system of systems (SoS) with the SoI. This article presents a method for use when designing an SoI that helps to analyze an SoS for unexpected behaviors from existing SoS members during the SoI's conceptual functional modeling phase of system architecture. The concept of irrationality initiators—unanticipated or unexpected failure flows emitted from one system that adversely impact an SoI, which appear to be impossible or irrational to engineers developing the new system—is introduced and implemented in a quantitative risk analysis method. The method is implemented in the failure flow identification and propagation framework to yield a probability distribution of failure paths through an SoI in the SoS. An example of a network of autonomous vehicles operating in a partially denied environment is presented to demonstrate the method. The method presented in this paper allows practitioners to more easily identify potential failure paths and prioritize fixing vulnerabilities in an SoI during functional modeling when significant changes can still be made with minimal impact to cost and schedule.
... Approaches have subsequently been presented to use graph grammars to change the structure of the model, and/or use a cost-risk analysis scoring function to compare between design alternatives [38,39]. Additionally, an approach has been presented for designing the operational decision-making in the model to determine when to, for example, route degraded flows to sacrificial subsystems [40]. While these approaches show many of the design changes that can be made within a functional model, and can be used to compare between design alternatives, they do not use this knowledge to formally optimize a design problem. ...
... A variety of design changes that may be pursued in the context of function-based fault modeling have been presented previously in Refs. [38] and [40]; these are compiled in Table 3 along with changes pursued in other function-failure optimization approaches [43,44] and new design changes identified by the authors. As can be seen in the right side of the table, each design change has associated potential difficulties that may make it hard to effectively model or predict its effect, but each may also provide value depending on the design problem considered. ...
... Redundancy: predicting the effect of potential performance couplings; easy to model and optimize, and enables consideration of redundancy without changing the model structure. Assumed realization/function resources [43]: potential internal and external compatibility couplings; ability to represent the trade-off between cost and quality (mode probabilities and costs, as well as function costs). Function modes: couplings with the assumed realization; ability to represent differences in the behaviors of functions. Conditional logic [40]: predicting the design cost of the flexibility required to allow different decisions to be made. ...
Article
Complex engineered systems can carry the risk of high failure consequences, and it is desirable for complex engineered systems to be resilient such that they can avoid or quickly recover from faults. Ideally, this should be done at the early design stage, where designers are most able to explore a large space of concepts. Previous work has shown that functional models can be used to predict fault propagation behavior and motivate design work. However, little has been done to formally optimize or compare designs based on these predictions, partially because the effects of these models have not been quantified into an objective function. This work closes this gap by introducing the resilience-informed scenario cost sum (RISCS), a scoring function that integrates with a fault scenario-based simulation to enable the optimization and evaluation of functional model resilience. The scoring function accomplishes this by quantifying the expected cost of a design's fault response using probability information, and combining this cost with design and operational costs such that it may be parameterized in terms of designer-specified resilient features. The scoring function is applied to a monopropellant system design: to the optimization of resilient features and the evaluation of possible design variants. Using RISCS as an objective for optimization, the algorithm generates the design solution that provides the optimal trade-off between design cost and risk. For concept selection, RISCS may be used to judge whether resilient concepts justify their design costs and to make direct comparisons between different model structures.
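A RISCS-style score, as described in the abstract, combines the probability-weighted cost of fault scenarios with design and operational costs. The sketch below follows that description; the scenario set, probabilities, and cost figures are invented for illustration and do not reproduce the paper's monopropellant case study.

```python
# Hedged sketch of a scenario-cost-sum score: expected fault cost
# (probability x consequence, summed over scenarios) plus the design
# and operational costs of the resilient features. Numbers are invented.

def scenario_cost_score(scenarios, design_cost, op_cost):
    """Expected cost of the fault response plus design/operational costs."""
    expected_fault_cost = sum(s["prob"] * s["cost"] for s in scenarios)
    return expected_fault_cost + design_cost + op_cost

scenarios = [
    {"name": "unmitigated thruster fault", "prob": 0.001, "cost": 5_000_000},
    {"name": "recoverable valve fault", "prob": 0.05, "cost": 20_000},
]
score = scenario_cost_score(scenarios, design_cost=150_000, op_cost=30_000)
```

An optimizer would minimize this score over resilient-feature parameters, trading the cost of added features against the reduction in expected fault cost.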
... The Inherent Behavior in Functional Models (IBFM) framework extends FFIP with the ability to generate multiple functional models to drive toward a solution that balances the cost and risk of a system, and with a pseudo time-step [16,68,69]. A number of other risk and failure analysis tools have been developed from FFIP, including the uncoupled failure flow state reasoner [11,70], a method of building prognostic systems in response to failure modeling [12], and other related methods and tools [13,14,71-73]. Several tools for ontology-driven metamodeling and early conceptual design down-selection were produced as part of the Defense Advanced Research Projects Agency (DARPA) Adaptive Vehicle Make project [74-76]. ...
... While the case study mentioned previously demonstrates evolving a model toward a solution that distributes failure flow concentrations across the model by adding in redundancy, specific system design considerations may warrant concentrating failed flows into a few specific flows. Concentrating failure flow into a few flows may be beneficial, for instance, if systems engineers are including sacrificial subsystems [72]. In other situations, it may be beneficial to spread out failure flows across several redundant subsystems [99]. ...
Article
Full-text available
A challenge systems engineers and designers face when applying system failure risk assessment methods such as probabilistic risk assessment (PRA) during conceptual design is their reliance on historical data and behavioral models. This paper presents a framework for exploring a space of functional models using graph rewriting rules and a qualitative failure simulation framework that presents information in an intuitive manner for human-in-the-loop decision-making and human-guided design. An example is presented wherein a functional model of an electrical power system testbed is iteratively perturbed to generate alternatives. The alternative functional models suggest different approaches to mitigating an emergent system failure vulnerability in the electrical power system's heat extraction capability. A preferred functional model configuration that has a desirable failure flow distribution can then be identified. The method presented here helps systems designers to better understand where failures propagate through systems and guides modification of systems functional models to adjust the way in which systems fail to have more desirable characteristics.
... The Inherent Behavior in Functional Models (IBFM) framework extends FFIP with the ability to generate multiple functional models to drive toward a solution that balances the cost and risk of a system, and with a pseudo time step [15,68,69]. A number of other risk and failure analysis tools have been developed from FFIP, including the Uncoupled Failure Flow State Reasoner [11,70], a method of building prognostic systems in response to failure modeling [12], and other related methods and tools [13,14,71-73]. Several tools for ontology-driven metamodeling and early conceptual design down-selection were produced as part of the Defense Advanced Research Projects Agency (DARPA) Adaptive Vehicle Make project [74-76]. ...
... While the case study above demonstrates evolving a model toward a solution that distributes failure flow concentrations across the model by adding in redundancy, specific system design considerations may warrant concentrating failed flows into a few specific flows. Concentrating failure flow into a few flows may be beneficial, for instance, if systems engineers are including sacrificial subsystems [72]. In other situations, it may be beneficial to spread out failure flows across several redundant subsystems [99]. ...
Conference Paper
Full-text available
A challenge systems engineers and designers face when applying system failure risk assessment methods such as Probabilistic Risk Assessment (PRA) during conceptual design is their reliance on historical data and behavioral models. This paper presents a framework for exploring a space of functional models using graph rewriting rules and a qualitative failure simulation framework that presents information in an intuitive manner for human-in-the-loop decision-making and human-guided design. An example is presented wherein a functional model of an electrical power system is iteratively perturbed to generate alternatives. The alternative functional models suggest different approaches to mitigating an emergent system failure vulnerability in the electrical power system's heat extraction capability. A preferred functional model configuration that has a desirable failure flow distribution can then be identified. The method presented here helps systems designers to better understand where failures propagate through systems and guides modification of systems functional models to adjust the way in which systems fail to have more desirable characteristics.
INTRODUCTION
The design, manufacture, and deployment of complex systems requires extensive investment of personnel, resources, time, and money to produce systems that meet requirements [1, 2].
... Prior work has used methods such as Bayesian hierarchical clustering to help identify the most advantageous component solutions to functions, based on information that is brought into the conceptual design phase which would otherwise be developed much later in the system design process [19]. Methods of analyzing failure, risk, and reliability in systems take a similar approach: detailed failure information is developed early on and used in reliability and risk analyses during conceptual design to promote better risk-informed decision-making, which is intended to speed the entire system development process by reducing the need for re-design or the late addition of subsystems to address specific threats or vulnerabilities to the system [20][21][22][23][24]. ...
Conference Paper
Full-text available
We introduce a method to help protect against and mitigate possible consequences of major regional and global events that can disrupt a system design and manufacturing process. The method is intended to be used during the conceptual phase of system design, when functional models have been developed and component solutions are being chosen. Disruptive events such as plane crashes killing many engineers from one company traveling together, disease outbreaks killing or temporarily disabling many people associated with one industrial sector who travel to the same conference regularly, geopolitical events that impose tariffs or a complete cessation of trade with a country that supplies a critical component, and many other similar physical and virtual events can significantly delay or disrupt a system design process. By comparing alternative embodiment, component, and low-level functional solutions, solutions can be identified that better satisfy the bus factor, where no single disruptive event will cause a major delay or disruption to a system design and manufacturing process. We present a simplified case study of a renewable energy generation and storage system intended for residential use to demonstrate the method. While some challenges to immediate adoption by practitioners exist, we believe the method has the potential to significantly improve system design processes so that systems are designed, manufactured, and delivered on schedule and on budget with respect to significant disruptive events to design and manufacturing.
... Refs [30,31] and Failure Modes and Effects Analysis (FMEA) methods [32]) and modelbased methods (in which the effects of a hazard are encoded in a model or simulation e.g., [33][34][35][36]). Decision-making approaches for early design methods also vary, with some approaches relying solely on the designer's judgement of the analysis [30,31] and others seeking to either optimize or satisfy constraints for modelled design metrics, such as reliability [34] or mission success probability [37]. Additionally, cost-based frameworks for risk consideration have been presented [38,39] to explicitly balance the risk-prevention of fault-mitigating features against their design and operational costs. ...
... This work has been extended in a variety of ways. 62 Krus and Grantham-Lough propose a method to identify failure propagation through common interfaces; however, this method does not fully investigate failure propagation paths. 73 One overarching limitation of this body of research is that it lacks an approach to quantify failure propagation prior to components being selected. ...
Article
Full-text available
An open area of research for complex, cyber-physical systems is how to adequately support decision making using reliability and failure data early in the systems engineering process. Having meaningful reliability and failure data available early offers information to decision makers at a point in the design process where decisions have a high impact-to-cost ratio. When applied to conceptual system design, widely used methods such as probabilistic risk analysis (PRA) and failure modes, effects, and criticality analysis (FMECA) are limited by the availability of data and often rely on detailed representations of the system. Further, existing system reliability and failure methods have not addressed failure propagation in conceptual system design prior to selecting candidate architectures; consideration given to failure propagation primarily focuses on the basic representation where failures propagate forward. In order to address these shortcomings, this paper presents the function failure propagation potential methodology (FFPPM) to formalize the types of failure propagation and quantify failure propagation potential for complex, cyber-physical systems during the conceptual stage of system design. Graph theory is leveraged to model and quantify the connectedness of the functional block diagram (FBD) to develop the metrics used in FFPPM. The FFPPM metrics include (i) the summation of the reachability matrix, (ii) the summation of the number of paths between nodes (i.e., functions) i and j for all i and j, and (iii) the degree and degree distribution. In plain English, these metrics quantify the reachability between functions in the graph, the number of paths between functions, and the connectedness of each node. The FFPPM metrics can then be used to make candidate architecture selection decisions and serve as early indicators of risk.
The unique contribution of this research is to quantify failure propagation potential during conceptual system design of complex, cyber‐physical systems prior to selecting candidate architectures. FFPPM has been demonstrated using the example of an emergency core cooling system (ECCS) system in a pressurized water reactor (PWR).
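The three FFPPM metrics named above are standard graph quantities and can be sketched on a small functional block diagram represented as a directed graph. The four-function graph below is an invented illustration, not the ECCS case study from the paper.

```python
# Illustrative functional block diagram as a directed graph:
# each function maps to the functions its failures can reach directly.
graph = {
    "import_energy": ["regulate_energy"],
    "regulate_energy": ["convert_energy", "export_energy"],
    "convert_energy": ["export_energy"],
    "export_energy": [],
}

def reachable(g, start):
    """All functions reachable from start, including start itself."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(g[node])
    return seen

def count_paths(g, src, dst):
    """Number of distinct directed paths from src to dst (assumes a DAG)."""
    if src == dst:
        return 1
    return sum(count_paths(g, nxt, dst) for nxt in g[src])

# (i) summation of the reachability matrix
reach_sum = sum(len(reachable(graph, f)) for f in graph)
# (ii) summation of the number of paths between all ordered pairs i != j
path_sum = sum(count_paths(graph, i, j) for i in graph for j in graph if i != j)
# (iii) out-degree of each function (the degree distribution)
out_degree = {f: len(graph[f]) for f in graph}
```

Higher reachability and path counts flag functions through which failures can spread widely, which is the early risk indicator FFPPM is after.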
... Refs [30,31] and Failure Modes and Effects Analysis (FMEA) methods [32]) and modelbased methods (in which the effects of a hazard are encoded in a model or simulation e.g., [33][34][35][36]). Decision-making approaches for early design methods also vary, with some approaches relying solely on the designer's judgement of the analysis [30,31] and others seeking to either optimize or satisfy constraints for modelled design metrics, such as reliability [34] or mission success probability [37]. Additionally, cost-based frameworks for risk consideration have been presented [38,39] to explicitly balance the risk-prevention of fault-mitigating features against their design and operational costs. ...
Article
Full-text available
A number of risk and resilience-based design methods have been put forward over the years that seek to provide designers the tools to reduce the effects of potential hazards in the early design phase. However, because of the associated high level of uncertainty and low-fidelity design representations, one might justifiably wonder if using a resilient design process in the early design phase will reliably produce useful results that would improve the realized design. This paper provides a testing framework for design processes which determines the validity of the process by quantifying the epistemic uncertainty in the assumptions used to make decisions. This framework uses this quantified uncertainty to test whether three metrics are within desirable bounds: the change in the design when uncertainty is considered, the increase in the expected value of the design, and the cost of choice-related uncertainty. This approach is illustrated using two examples to demonstrate how both discrete and continuous parametric uncertainty can be considered in the testing procedure. These examples show that early design process validity is sensitive to the level of uncertainty and magnitude of design changes, suggesting that while there is a justifiable decision-theoretic case to consider high-level, high-impact design changes during the early design phase, there is less of a case to choose between relatively similar design options because the cost of making the choice under high uncertainty is greater than the expected value improvement from choosing the better design.
Conference Paper
Full-text available
Autonomous systems operating in dangerous and hard-to-reach environments, such as defense systems deployed into enemy territory, petroleum installations running in remote arctic and offshore environments, or space exploration systems operating on Mars and further out in the solar system, are often designed with a wide operating envelope and deployed with control systems intended both to protect the system and to complete mission objectives, but only when the on-the-ground environment matches the expected, designed-for environment. This can lead to overly conservative operating strategies, such as preventing a rover on Mars from exploring a scientifically rich area due to potential hazards outside of the original operating envelope, and can lead to unanticipated failures such as the loss of underwater autonomous vehicles operating in Earth's oceans. This paper presents an iterative method that links computer simulation of operations in unknown and dangerous environments with conceptual design of systems and development of control system algorithms. The Global to Local Path Finding Design and Operation Exploration (GLPFDOE) method starts by generating a general mission plan from low-resolution environmental information taken from remote sensing data (e.g., satellites, plane flyovers, telescope observations) and then develops a detailed path plan from simulated higher-resolution data collected "in situ" during simulator runs. GLPFDOE attempts to maximize system survivability and scientific or other mission objective yield by iterating on control system algorithms and system design within an in-house-developed physics-based autonomous vehicle and terrain simulator.
GLPFDOE is best suited for autonomous systems that cannot have easy human intervention during operations, such as robotic exploration reaching deeper into space, where communications delays become unacceptably large and the quality of a priori knowledge of the environment becomes lower. Additionally, in unknown extraterrestrial environments, a variety of unexpected hazards will be encountered that must be avoided, and areas of scientific interest will be found that must be explored. Existing exploratory platforms such as the Mars Exploration Rovers (MERs) Curiosity and Opportunity either operate in environments that are sufficiently removed from immediate danger or take actions slowly enough that the signal delay between the system and Earth-based operators is not too great to allow for human intervention in hazardous scenarios. Using the GLPFDOE methodology, an autonomous exploratory system can be developed that may have a higher likelihood of survivability, can accomplish more scientific mission objectives (thus increasing scientific yield), and can decrease the risk of mission-ending system damage. A case study is presented in which an autonomous Mars Exploration Rover (MER) is generated and then refined in a simulator using the GLPFDOE method. Development of the GLPFDOE methodology allows for the execution of more complex missions by autonomous systems in remote and inaccessible environments.
Conference Paper
Full-text available
Operation of autonomous and semi-autonomous systems in hostile and expensive-to-access environments requires great care and a risk-informed operating mentality to protect critical system assets. Space exploration missions, such as the Mars Exploration Rover systems Opportunity and Curiosity, are very costly and difficult to replace. These systems are operated in a very risk-averse manner to preserve their functionality, but by constraining system operations to risk-averse activities, scientific mission goals deemed too risky cannot be achieved. We present a quantifiable method, Goal-Oriented, Risk Attitude-Driven Reward Optimization (GORADRO), that increases the lifetime efficiency of achieving scientific goals, along with a case study conducted with simulated testing of the method. GORADRO relies upon local area information obtained by the system during operations and internal Prognostics and Health Management (PHM) information to determine system health and potential localized risks, such as areas where a system may become trapped (e.g., sand pits, overhangs, overly steep slopes), while attempting to access scientific mission objectives using an adaptable operating risk attitude. The results of our simulations and hardware validation using GORADRO show a large increase in the lifetime performance of autonomous rovers in a variety of environments, terrains, and situations, given a sufficiently tuned set of risk attitude parameters. By designing a GORADRO behavioral risk attitude parameter set, it is possible to increase system resilience in the unknown and dangerous environments encountered in space exploration and other similarly hazardous domains.
Article
In this paper, we introduce the system operational dependency analysis methodology. Its purpose is to assess the effect of dependencies between components in a monolithic complex system, or between systems in a system-of-systems, and to support design decision making. We propose a parametric model of the behavior of the system. This approach results in a simple, intuitive model whose parameters give direct insight into the causes of observed, and possibly emergent, behavior. Using the proposed method, designers and decision makers can quickly analyze and explore the behavior of complex systems and evaluate different architectures under various working conditions. Thus, the system operational dependency analysis method supports educated decision making both in the design and in the update of a system's architecture, without the need to execute extensive simulations. In particular, in the phase of concept generation and selection, the information given by the method can be used to identify promising architectures to be further tested and improved, while discarding architectures that do not show the required level of global features. Application of the proposed method to a small example demonstrates both the validation of the parametric model and the capabilities of the method for system analysis, design, and architecture.
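As an illustration of a parametric dependency model in this spirit, the sketch below propagates operability (on a 0-100 scale) along a dependency chain; the update rule and the single strength-of-dependency parameter `alpha` are assumptions for demonstration, not the formulas of the paper.

```python
def propagate(self_op, deps, alpha):
    """Operability (0-100) of a node given its predecessors' operabilities.

    alpha is the strength of dependency: alpha = 0 means the node is fully
    independent, alpha = 1 means its operability is entirely inherited.
    """
    if not deps:
        return self_op
    inherited = min(deps)  # a node is no healthier than its weakest input
    return (1 - alpha) * self_op + alpha * inherited

# Chain: power -> computer -> science instrument, with power degraded to 40%.
power = 40.0
computer = propagate(100.0, [power], alpha=0.8)       # strong dependency
instrument = propagate(100.0, [computer], alpha=0.5)  # moderate dependency
print(round(computer, 1), round(instrument, 1))
```

Even this toy model shows the appeal of the parametric approach: the effect of a degraded supplier on downstream capabilities can be read off without running a full simulation.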
Conference Paper
Algorithms used in rovers for route planning often focus on finding the shortest path between two points, but rarely take into account the risk to the physical roving system of taking a path. One issue presented by route planning optimized for risk is varying risk attitudes, which can lead to vastly different routes being chosen. A risk attitude is a preference concerning acceptable levels of risk when performing a specific action. The field of Prognostics and Health Management (PHM) aims to predict and prevent failure in electrical and mechanical systems, and can be used to inform route planning by assessing the risk associated with taking an action. This paper presents a method for Risk Attitude Informed Route-planning (RAIR) that takes into account the calculated risk, the benefit, and the risk attitude, and selects the optimal route. The risks to the rover are calculated using rover PHM data, terrain information, and Function Failure Identification and Propagation (FFIP) to determine the risk of specific routes. The route is navigated incrementally by selecting the best route across a small segment and then determining the best route from the new position, until the rover has reached the final destination. Results of experiments utilizing a simulated planetary rover navigating between points using RAIR are presented, and the effectiveness of the method is discussed. Improved route planning through RAIR enables more autonomous navigation of hazardous and remote environments than is currently possible, accurately reflecting the desired risk attitude without direct human planning or interaction, thus reducing the cost and time required for exploratory rover missions to accomplish their objectives.
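The incremental, risk-attitude-weighted segment selection described above might be sketched as follows; the cost form (`step + remaining distance + attitude * risk`), the grid world, and all names are hypothetical, not the published RAIR formulation.

```python
def next_step(pos, goal, risk_map, attitude):
    """Choose the neighboring cell with the lowest combined cost."""
    x, y = pos
    candidates = []
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if (nx, ny) in risk_map:
            remaining = abs(goal[0] - nx) + abs(goal[1] - ny)  # Manhattan distance
            cost = 1 + remaining + attitude * risk_map[(nx, ny)]
            candidates.append((cost, (nx, ny)))
    return min(candidates)[1]

def plan(start, goal, risk_map, attitude, max_steps=50):
    """Walk segment by segment until the goal is reached (or step cap hit)."""
    route, pos = [start], start
    while pos != goal and len(route) <= max_steps:
        pos = next_step(pos, goal, risk_map, attitude)
        route.append(pos)
    return route

# 3x3 grid; cell (1, 1) is a hazardous sand pit.
risk = {(x, y): 0.0 for x in range(3) for y in range(3)}
risk[(1, 1)] = 5.0
print(plan((0, 0), (2, 2), risk, attitude=1.0))  # detours around the pit
```

Raising `attitude` makes detours around hazardous cells cheaper relative to crossing them, mirroring how a risk attitude reshapes the chosen route without any change to the terrain itself.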
Conference Paper
Functional modelling methods used in the early conceptual phases of complex system design allow system designers to better understand and refine system architecture from a functional perspective. A family of methods exists to model functional failures and failure flows. These failure flow modelling methods provide the opportunity to understand potential system failure sources and redesign systems for more robustness. One capability missing from this family of function failure and flow methods is the ability to model failure flow decision-making. This paper presents the Failure Flow Decision Function (FFDF) methodology, which allows system designers to model failure flow decision-making in which critical functions and flow exports are protected from failure flows by sacrificing less critical functions and flow exports. By sacrificing less critical functions and flow exports, mission-critical functions and flow exports can be preserved in order to accomplish the primary mission objectives of a system. A case study based upon the Mars Exploration Rover platform is presented in this paper.
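One way a failure flow decision function might be represented at this conceptual level is sketched below: an arriving failure flow is routed to the connected branch with the lowest criticality, sacrificing that function to shield critical ones. The data layout and criticality scale are illustrative assumptions, not the paper's formal method.

```python
def route_failure(branches):
    """Pick the branch to absorb an incoming failure flow.

    branches: dict mapping downstream function name -> criticality
    (higher = more critical). The least critical branch is sacrificed.
    """
    return min(branches, key=branches.get)

# Electrical surge reaching a decision point on a rover power bus.
downstream = {
    "main_computer": 1.0,     # mission-critical
    "drive_motors": 0.7,
    "secondary_camera": 0.2,  # non-critical peripheral
}
print(route_failure(downstream))  # the camera absorbs the surge
```

This mirrors the surge-protection example from the abstract above: a non-critical peripheral is given up so that the electronics needed for core functionality survive.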
Article
Critical considerations in engineering enterprise systems are identifying, representing, and measuring dependencies between suppliers of technologies and providers of services to consumers and users. The importance of this problem is many-fold. Primary is enabling the study of the ripple effects of failure in one capability on other dependent capabilities across the enterprise. Providing mechanisms to anticipate these effects early in design enables engineers to minimize dependency risks that, if realized, can have cascading negative effects on the ability of an enterprise to deliver services to users. The approach to this problem is built upon concepts from graph theory. Graph theory enables (1) a visual representation of complex interrelationships between entities and (2) the design of analytical formalisms that trace the effects of dependencies between entities as they affect many parts and paths in a graph. In this context, an engineering system is represented as a directed graph whose entities are nodes that depict the direction, strength, and criticality of supplier-provider relationships. Algorithms are designed to measure capability operability (or inoperability) due to degraded performance (or failure) in supplier and program nodes within the capability portfolios that characterize the system. Capturing and analyzing dependencies is not new in systems engineering. What is new is tackling this problem (1) in an enterprise systems engineering context, where multidirectional dependencies can exist at many levels in a system's capability portfolio, and (2) by creating a flexible analysis and measurement approach applicable to any system's capability portfolio whose supplier-provider relationships can be represented by graph-theoretic formalisms. The methodology is named Functional Dependency Network Analysis (FDNA).
Its formulation is motivated, in part, by concepts from Leontief systems, the Inoperability Input-Output Model (IIM), Failure Modes and Effects Analysis (FMEA), and Design Structure Matrices (DSM). FDNA is a new analytic approach that enables management to study and anticipate the ripple effects of losses in supplier-program contributions on a system's dependent capabilities before the risks that threaten these suppliers are realized. An FDNA analysis identifies whether the level of operability loss, should such risks occur, is tolerable. This enables management to better target risk resolution resources to those supplier programs that face high risk and are most critical to a system's operational capabilities. KEY WORDS: Risk, capability risk, capability portfolio, dependencies, operability, inoperability, engineering systems, Leontief matrix, design structure matrix (DSM), failure modes and effects analysis (FMEA), inoperability input-output model (IIM), functional dependency network analysis (FDNA).
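A simplified, single-receiver sketch of FDNA-style operability propagation is given below, using a strength term and a criticality term per feeder node; the exact expressions here are a commonly cited rendering of the strength/criticality construction and should be treated as an illustration rather than the paper's precise formulation.

```python
def receiver_operability(feeders, alpha, betas):
    """Operability of a receiver node on a 0-100 scale.

    feeders: operability of each feeder node (0-100).
    alpha:   strength of dependency (0-1) blending feeder health with the
             receiver's baseline self-operability.
    betas:   criticality of each feeder; 100 - beta is the operability the
             receiver retains if that feeder is lost entirely.
    """
    strength = alpha * sum(feeders) / len(feeders) + (1 - alpha) * 100
    criticality = min(p + 100 - b for p, b in zip(feeders, betas))
    return min(strength, criticality)  # the binding constraint governs

# Capability fed by two suppliers; the second is degraded and highly critical.
print(receiver_operability([100, 50], alpha=0.5, betas=[80, 90]))
```

Here the criticality term binds: the degraded, highly critical second feeder caps the receiver's operability, which is exactly the kind of ripple effect the methodology is built to expose before supplier risks are realized.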
Article
This research defines the basis for a new quantitative approach to retrieving useful analogies for innovation based on the relevant performance characteristics of functions. The concept of critical functionality is the idea of identifying only the set of pertinent design functions observed in a single domain that significantly define the functionality of the product. A critical function (CF) is a function within a functional model whose performance directly relates to a key performance parameter (KPP) of the system as a whole. These CFs enable multiple analogies to be presented to a designer by recognizing similar functionality across distant design domains and incorporating key performance criteria. The ultimate focus of this research project is to create a performance-metric-based analogy library, called the design analogy performance parameter system (DAPPS). By focusing on a select set of "critical" functions, more design domains can be included in the database, facilitating analogy retrieval founded on the quantification of KPPs.
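A hypothetical sketch of performance-metric-based analogy retrieval: library entries are indexed by the KPP values of a shared critical function and ranked by distance to a query vector. All names and data here are invented for illustration and are not drawn from the DAPPS library itself.

```python
import math

def retrieve_analogies(query_kpps, library, k=2):
    """Rank (name, kpp_vector) library entries by Euclidean distance."""
    def dist(entry):
        return math.dist(query_kpps, entry[1])
    return [name for name, _ in sorted(library, key=dist)[:k]]

# Critical function "transmit force"; KPPs: (efficiency, force density).
library = [
    ("woodpecker_skull", (0.9, 7.0)),   # biological domain
    ("hydraulic_press", (0.7, 9.0)),    # industrial domain
    ("bicycle_chain", (0.95, 2.0)),     # consumer domain
]
print(retrieve_analogies((0.85, 6.5), library))
```

Because matching happens on performance numbers rather than domain vocabulary, the nearest analogies can come from design domains far removed from the query, which is the core promise of a KPP-indexed analogy library.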