
Intelligent reliability analysis tool for fault-tolerant system design

Abstract

Reliable design of mission-critical systems requires a special class of engineering methods and tools. As a fundamental technique, Markov modeling has played a significant role in fault-tolerant and fail-safe system reliability analysis dating back to the 1960s. Early applications included flight control system design for military aircraft and spacecraft. Although Markov reliability modeling techniques are well researched, they have not been widely used because of the lack of affordable, easy-to-use reliability modeling and analysis tools. In particular, we need to integrate the capabilities of expert systems (ES) with computer-aided analysis tools. Thus, our goal is the reuse of the earlier developed Markov methods and tools within a new, highly interactive computing environment.
DAINA-DEFTAS-FR-1
Advanced Tools for Evaluating Fault-Tolerant Systems
(Phase I)
Final Report
Contract Number NAS1-20035
January 1994
SBIR RIGHTS NOTICE (JUN 1987)
These SBIR data are furnished with SBIR rights under Contract NAS1-20035. For a period of 2 years after acceptance of all items to be delivered under this contract, the Government agrees to use these data for Government purposes only, and they shall not be disclosed outside Government (including disclosure for procurement purposes) during such period without permission of Contractor, except that, subject to the foregoing use and disclosure prohibitions, such data may be disclosed for use by support Contractors. After the aforesaid 2-year period the Government has a royalty-free license to use, and to authorize others to use on its behalf, these data for Government purposes, but is relieved of all disclosure prohibitions and assumes no liability for unauthorized use of these data by third parties. This notice shall be affixed to any reproductions of these data, in whole or in part.
Prepared for:
National Aeronautics and Space Administration
Langley Research Center
Hampton, VA 23681-0001
Prepared by:
DAINA
Columbia Heights, Minnesota 55421
Advanced Design Tools for Evaluating Fault-Tolerant Systems
P. R. Pukite and J. Pukite
DAINA
Columbia HTS, MN 55421-1916
C NAS1-20035
DAINA-DEFTAS-FR-1
NASA Langley Research Center
Hampton, VA 23665-5225
This is a Small Business Innovative Research Program, Phase I
These SBIR data are furnished with SBIR rights under Contract
NAS1-20035.
As mission-critical avionics systems increase in complexity, the evaluation of their performance and reliability will become more difficult. We have selected and integrated a group of reliability and performance analysis tools, specifically designed to handle the fault-tolerant operational aspects of next-generation avionics architectures. Prototypes of these tools support both hardware and software evaluation and have been developed in the Ada programming language. The reliability analysis uses the CARMS program (computer-aided rate modeling and simulation), with an interactive graphical and spreadsheet interface. The performance analysis tools use Petri nets, VHDL, and Monte Carlo simulation methods. A dynamically embedded Ada expert system (AES) helps guide the proper method selection and controls the evaluation and sensitivity analysis. A unified model representation for performance and reliability, structured in terms of Prolog-style AES relational rules and VHDL, was also investigated. Personal computers were used for the initial prototyping and simulation. We expect that the availability of these tools will permit earlier evaluation of the effectiveness of the selected architecture and thus avoid the need for later changes in the design.
Avionics, Reliability, Performance, CARMS, AES, Expert System, Integration, Ada,
VHDL, EDIF, Systems, Fault-Tolerant, CAD, Modeling, Simulation, Prolog, Tools
UNCLASSIFIED UNCLASSIFIED UNCLASSIFIED
233
UL
January 1994 FINAL June 1993 to December 1993
Project Summary
Purpose. For complex fault-tolerant avionics architectures, trade-offs in performance and reliability occur early in the design cycle. Ideally, an integrated tool supporting a unified model of both measures would provide a solid foundation during all stages of the system design cycle. Such a model would avoid the duplication of data entry necessary for the otherwise divergent tool structures. As an example, when adding redundancy to a fault-tolerant design, the designer typically must alternate between different tool formats to analyze reliability and performance. Therefore, any tools designed to universally handle the fault-tolerant operational aspects of the next-generation avionics architectures will be advantageous.

In addition to integration and unification, other tool improvements will be needed. In particular, as avionics missions and systems increase in complexity, the total system evaluation will become more difficult. Tools that simplify user input, so that more analysis can be done in less time (through model generators, translators, etc.), will be of benefit.
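One concrete reading of "model generators" in this sense: a small routine that expands a parametric description into the full Markov transition table, so the user enters one line instead of every state. The sketch below is purely illustrative (Python is used for brevity, while the delivered prototypes were written in Ada, and the function and state names here are invented, not the CARMS input format).

```python
# Illustrative model generator: expand "n identical active-parallel
# units, each with failure rate lam" into an explicit Markov
# transition table. State i means "i units have failed".

def parallel_model(n, lam):
    """Return (from_state, to_state, rate) triples for n parallel units.

    With i units failed, n - i units remain, each failing at rate lam,
    so the aggregate transition rate from state i to i + 1 is (n-i)*lam.
    """
    return [(i, i + 1, (n - i) * lam) for i in range(n)]

# Three units at 1e-4 failures/hour: transitions 0->1 at 3*lam,
# 1->2 at 2*lam, 2->3 at lam.
table = parallel_model(3, 1e-4)
```

A generator of this kind saves not only typing but also transcription errors, which grow quickly as the state space does.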
Approach. The first step was to determine the feasibility of integrating system-level performance and reliability tools, by identifying current tools and tool requirements. The tools must fit into a conceptual model that allows flexibility and interactions in performability parameters. Our approach uses the capabilities of an expert system to perform tasks that would otherwise be done manually. We developed the interactive, graphical CARMS (computer-aided rate modeling and simulation) reliability evaluator along with a dynamically embedded Ada expert system (AES) to test integration concepts. The common knowledgebase for integrating and driving the tools allows descriptions and models to be integrated (e.g., VHDL, Markov models, Petri nets). Both translators and generators for evaluating hardware or software models were developed.

In addition, system partitioning for solving large problems and for computing system effectiveness was demonstrated. Finally, the use of engineering models in Ada and the potential of VHDL approaches for system and software modeling were reviewed. Throughout the study, the use of Windows, Ada, and automatic code generation was stressed. This allowed usable and reliable tools to be created.
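For readers unfamiliar with the rate-modeling technique that CARMS automates, the following minimal sketch integrates the Markov state equations for a two-element parallel system. It is an illustration only, written in Python for compactness (the actual CARMS evaluator is an Ada program with graphical and spreadsheet interfaces); the failure rate and mission time are arbitrary example values.

```python
# Two-element parallel system, identical units, failure rate lam.
# States: 0 = both units up, 1 = one unit up, 2 = system failed.
# Markov state equations: dP0/dt = -2*lam*P0
#                         dP1/dt =  2*lam*P0 - lam*P1
#                         dP2/dt =  lam*P1

def reliability(lam, t_end, steps=50000):
    """Integrate the state probabilities with forward Euler and
    return P(system still up) = P0 + P1 at time t_end."""
    p0, p1, p2 = 1.0, 0.0, 0.0   # start with both units operational
    dt = t_end / steps
    for _ in range(steps):
        d0 = -2.0 * lam * p0
        d1 = 2.0 * lam * p0 - lam * p1
        d2 = lam * p1
        p0, p1, p2 = p0 + d0 * dt, p1 + d1 * dt, p2 + d2 * dt
    return p0 + p1

# Example: lam = 1e-3 per hour, 1000-hour mission.
# Closed form for comparison: R(t) = 2*exp(-lam*t) - exp(-2*lam*t).
r = reliability(1e-3, 1000.0)
```

Numerical integration is what makes the approach scale: for realistic architectures with repair, coverage, and reconfiguration transitions, no closed form exists, and the tool integrates the full rate matrix instead.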
Results. We found that models can be structured via a combination of relational data, in terms of the Prolog-style AES knowledgebase, and VHDL. Based on this concept, we demonstrated expert system integration through a range of examples, including using the expert system to guide the proper method selection and to control evaluation and sensitivity analysis. Fundamentally, we found that the key to integrating the tools was to allow access to a common knowledgebase. Thus, in the same way that we convert Ada specifications into an AES knowledgebase, we should be able to extract VHDL into AES.
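As a purely illustrative sketch of what relational data in a Prolog-style knowledgebase can mean in practice: component descriptions become facts, and rules derive model structure from them. The predicate names, component names, and Python host language below are all invented for illustration; the actual AES knowledgebase and rule syntax are documented in Appendix C.

```python
# Invented example of a relational knowledgebase: facts are
# (predicate, args...) tuples; a "rule" derives new structure
# (here, the states of a Markov submodel) from matching facts.

facts = [
    ("component", "cpu_a", 1e-4),        # name, failure rate
    ("component", "cpu_b", 1e-4),
    ("redundant_pair", "cpu_a", "cpu_b"),
]

def query(predicate):
    """Return every fact whose predicate name matches."""
    return [f for f in facts if f[0] == predicate]

def pair_states(pair_fact):
    """Rule: a redundant pair maps to a three-state Markov submodel."""
    _, a, b = pair_fact
    return [f"{a}+{b} up", f"{a} or {b} up", "failed"]

states = pair_states(query("redundant_pair")[0])
```

The value of a common knowledgebase is that the same facts can drive several generators: a reliability tool reads the failure rates, while a performance tool reads the same component facts to build a Petri net or VHDL model.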
The feasibility of the overall approach was demonstrated through operational prototypes of the principal tools, all in Ada. Engineering models of architectures in Ada are also useful but cannot be completely verified through testing. We expect that the availability of these tools will permit earlier evaluation of the effectiveness of the selected architecture and thus avoid the need for later changes in the design.
Applications. We will use this approach in commercial applications, including CARMS and AES. New
tools also include the cost-effectiveness optimizer (CEO) trade-off tool and translation tools for VHDL and
Ada. The future focus will be on PC and Windows to attract users, as the techniques for integrating tools
work effectively in a GUI environment. This concept will have potential for application in commercial
flight control and flight management system design.
Report Contents
The Phase I final report consists of 10 sections and 8 appendices. The sections are, for the most part, self-contained.
Section 1 - Project Overview and Task Accomplishments gives the task accomplishments according to
the proposed work plan and the contract specified statement of work.
Section 2 - Fault-Tolerant System Definition and Design Trade-offs gives the designer’s perspectives
and issues relating to the fault-tolerant system design and trade-off tasks.
Section 3 - Models and Design Tools is background on currently available performance and reliability
tools.
Section 4 - Tool Integration lists some of the critical choices a tool designer needs to make when creating
a usable environment for a set of analysis tools.
Section 5 - Practical Design of Fault-Tolerant Systems looks at issues of reducing state space complexity and at approaches to approximate computations of system failure probabilities, as well as implementing monitoring techniques.
Section 6 - Design Environment for Fault-Tolerant Analysis and Simulation describes our selected
approach for tool integration and use of expert system support.
Section 7 - Model Definition and Evaluation is the analysis of a selected avionics architecture using our
prototype tool environment.
Section 8 - Complex System Optimization deals with techniques and tools for choosing the best system
configuration meeting system requirements.
Sections 9, 10 - Conclusions and Recommendations.
Appendix A, B - CARMS User’s Guide and CARMS Reference describe the Markov modeling tool.
Appendix C - Ada Expert System (AES) is a reference manual for the expert system front end for
CARMS.
Appendix D - Ada for MS-Windows Applications describes techniques for developing the tools.
Appendix E - Automated Interface Code Generation from Ada Specifications describes techniques
used for generating interfaces to the expert system and dialog boxes.
Appendix F - Ada Tasking for Simulation Applications describes an Ada tasking simulation example
using the CARMS program.
Appendix G, H - List of Acronyms and References
Both the Windows CARMS and AES software executables are included in the delivered report.
Table of Contents
1 Project Overview and Task Accomplishments ........................................................1
1.1 Integrated Performance and Reliability Tool Framework (Task 1).....................................2
1.1.1 Fault-Tolerant System Design Methodology ........................................................ 3
1.1.2 Unified Model and Design Tool Integration......................................................... 5
1.2 Reliability Models (Task 2)................................................................................................. 8
1.2.1 Hardware Reliability Models................................................................................ 8
1.2.2 Software Reliability Models ............................................................................... 10
1.2.3 Reliability Model Extensions.............................................................................. 10
1.3 Performance Modeling (Task 3)........................................................................................ 12
1.3.1 System Performance ...........................................................................................12
1.3.2 Hardware Performance Evaluation..................................................................... 12
1.3.3 Software Performance Evaluation ...................................................................... 13
1.3.4 Supported Performance Evaluation Models ....................................................... 13
1.4 Model Interface and Integration (Task 4).......................................................................... 13
1.4.1 Computation and Evaluation Control ................................................................. 13
1.4.2 Remote Tool Integration ..................................................................................... 13
1.4.3 Performance and Reliability Model Interface Architecture................................ 14
1.5 Prototype Development and Demonstration (Task 5) ....................................................... 14
1.5.1 Prototype Development Platform ....................................................................... 14
1.5.2 Prototype Accomplishments ............................................................................... 14
2 Fault-Tolerant System Definition and Design Trade-offs......................................15
2.1 System Definition.............................................................................................................. 15
2.1.1 Mission Definition .............................................................................................. 16
2.1.2 Goal Definition ................................................................................................... 17
2.1.3 System Requirement Definition.......................................................................... 17
2.1.4 Subsystem Requirement Allocation.................................................................... 17
2.1.5 Subsystem Definition.......................................................................................... 17
2.2 System Design Process...................................................................................................... 19
2.2.1 Architecture Specification .................................................................................. 21
2.2.2 System Trade-off Analysis.................................................................................. 21
2.3 System Effectiveness.........................................................22
2.3.1 System Effectiveness Analysis ........................................................................... 22
2.3.2 System Effectiveness Model Construction.........................................................24
2.3.3 System Effectiveness Computation .................................................................... 26
2.4 Cost Effectiveness Analysis.............................................................................................. 28
3 Models and Design Tools.......................................................................................30
3.1 Models............................................................................................................................... 30
3.2 Design and Analysis Tools................................................................................................31
3.3 Fault-Tolerant System Design Tools................................................................................. 33
3.3.1 Reliability Evaluation Tools for Fault-Tolerant Systems.................................... 33
3.3.2 Performance Evaluation Tools for Fault-Tolerant Systems ................................ 35
3.3.3 Electronic System Design Automation Tools Supporting VHDL ...................... 36
4 Tool Integration......................................................................................................38
4.1 Design Problem Environment ........................................................................................... 39
4.1.1 Design Tasks and Tools ...................................................................................... 39
4.1.2 Design Task Automation..................................................................................... 40
4.1.3 Design Tool Selection......................................................................................... 40
4.2 Methodology for Design Tool Integration ........................................................................ 40
4.2.1 Data and Control Integration .............................................................................. 43
4.2.2 Database Support ................................................................................................ 44
4.2.3 Language Integration .......................................................................................... 44
4.2.4 User Interface......................................................................................................44
4.2.5 Supervisory Program .......................................................................................... 45
4.2.6 Configuration Management ................................................................................ 46
4.3 Benefits and Limitations of Integrated Tools .................................................................... 46
4.4 Selected Approach............................................................................................................. 47
4.5 Expected Future Developments ........................................................................................ 48
5 Practical Design of Fault-Tolerant Systems...........................................................49
5.1 Reliability Analysis........................................................................................................... 49
5.1.1 Reliability Problem Solution .............................................................................. 49
5.1.2 Handling of Large and Complex Systems .......................................................... 50
5.1.3 Markov Model Reliability Programs .................................................................. 50
5.1.4 Accuracy of Computation................................................................................... 50
5.1.5 Recovery Process................................................................................................ 51
5.1.6 Practical Design Approach ................................................................................. 51
5.1.7 Design Illustration............................................................................................... 52
5.1.8 Errors due to Approximations............................................................................. 55
5.1.9 Summary of Design Example ............................................................................. 56
5.2 System State Mapping....................................................................................................... 57
5.2.1 Mapping System Operational States................................................................... 57
5.2.2 System Capability Mapping................................................................................ 59
5.2.3 Mapping Cluster States.......................................................................................60
5.3 Advanced Software Fault Tolerant Technology................................................................ 62
5.3.1 Software Fault Statistics ..................................................................................... 62
5.3.2 State-of-the-Practice in Software Fault Tolerance.............................................. 63
5.3.3 Basics of Fault-Tolerant Software Design.......................................................... 63
5.3.4 Software Fault Monitoring..................................................................................64
5.3.5 Design and Development Tools .......................................................................... 67
5.4 Fault-Tolerant System Design Guidelines......................................................................... 68
6 Design Environment for Fault-Tolerant Analysis and Simulation ........................69
6.1 Prototype for Intelligent Tool Architecture....................................................................... 69
6.1.1 CARMS - The Main Program............................................................................. 70
6.1.2 AES - The Expert System Front End.................................................................. 70
6.1.3 Extended Methods .............................................................................................. 70
6.2 Markov Modeling..............................................................71
6.2.1 Markov Modeling Example ................................................................................ 71
6.2.2 Model Builder .....................................................................................................72
6.2.3 Simulation and Sensitivity Analysis ................................................................... 73
6.3 Tool Design and Development .......................................................................................... 74
6.3.1 Tool Platform...................................................................................................... 75
6.3.2 Tool Integration................................................................................................... 75
6.3.3 CARMS Design .................................................................................................. 76
6.3.4 Reusable Ada Expert System.............................................................................. 77
6.3.5 CARMS / AES Interface.....................................................................................78
6.4 Markov Model Programming and Construction ............................................................... 79
6.4.1 Definitions and Guidelines ................................................................................. 79
6.4.2 Static Database.................................................................................................... 80
6.4.3 Generic Two-state Model.................................................................................... 80
6.4.4 Parallel System Parametric Model...................................................................... 81
6.4.5 Reliability Block Diagram Expansion ................................................................ 83
6.4.6 Fault Tree............................................................................................................ 85
6.4.7 Stochastic Petri Net............................................................................................. 87
6.4.8 Failure Modes and Effects Description............................................................... 89
6.4.9 Sensitivity Analysis and Constraint Checking.................................................... 90
6.4.10 Submodel Composition....................................................................................... 91
6.5 External Tool Integration .................................................................................................. 92
6.5.1 Automated Petri Net to VHDL Conversion........................................................ 92
6.5.2 Monte Carlo Simulation......................................................................................94
6.5.3 Other Methods .................................................................................................... 94
6.6 Hardware Acceleration...................................................................................................... 95
6.7 Extensions to the Environment ......................................................................................... 96
6.7.1 VHDL Example .................................................................................................. 97
6.8 Summary ........................................................................................................................... 97
7 Model Definition and Evaluation ..........................................................................99
7.1 Model Inputs and their Specification ................................................................................ 99
7.1.1 System-Level Goals.......................................................................................... 100
7.1.2 Performance ...................................................................................................... 100
7.1.3 Reliability.......................................................................................................... 103
7.2 Model Outputs................................................................................................................. 104
7.2.1 Performance ...................................................................................................... 104
7.2.2 Reliability.......................................................................................................... 106
7.2.3 Performability ................................................................................................... 106
7.2.4 Effectiveness ..................................................................................................... 107
7.3 Evaluation Architecture Description............................................................................... 107
7.3.1 Integrated Test Bed ...........................................................................................108
7.3.2 AARTS.............................................................................................................. 109
7.3.3 Other Advanced Architectures.......................................................................... 112
7.4 Evaluation Architecture Analysis.................................................................................... 112
7.4.1 Design Assessment using ADAS...................................................................... 113
7.4.2 Hardware Simulation ........................................................................................ 113
7.4.3 Engineering Model ........................................................................................... 114
7.4.4 Current Approach ............................................................................................. 115
7.4.5 Future Extensions ............................................................................................. 120
8 Complex System Optimization............................................................................122
8.1 An Overview of the Optimization Process...................................................................... 122
8.2 Trade-off Model ..............................................................125
8.2.1 Probability Model ............................................................................................. 125
8.2.2 Performance Model........................................................................................... 126
8.2.3 Value (Utility) Model........................................................................................ 126
8.2.4 Cost Model........................................................................................................ 127
8.3 Trade-off Model Development........................................................................................ 127
8.3.1 System Partitioning........................................................................................... 128
8.3.2 Developing Trade-off Data Base ......................................................................128
8.3.3 Alternate Configuration Selection .................................................................... 129
8.3.4 Decision Model................................................................................................. 129
8.4 Optimization.................................................................................................................... 129
8.5 Summary of Design Selection and Verification.............................................................. 132
8.6 Computer-Aided Optimization........................................................................................132
8.7 CEO-CARMS Tool Integration....................................................................................... 136
8.8 Final System Configuration Selection............................................................................. 136
9 Conclusions and Lessons Learned.......................................................................138
9.1 Conclusions ..................................................................................................................... 138
9.2 Lessons Learned..............................................................140
9.2.1 Design Definition..............................................................................................140
9.2.2 Expert-System Based Tool Integration............................................................. 140
9.2.3 Reliability and Performance Modeling............................................................. 141
9.2.4 Tool Development............................................................................................. 141
10 Recommendations................................................................................................142
Appendix A CARMS User’s Guide.............................................................................144
A.1 Introduction .................................................................................................................... 144
A.2 Requirements................................................................................................................... 145
A.3 Application ...................................................................................................................... 145
A.3.1 The Method....................................................................................................... 145
A.3.2 Single-Element State Diagram.......................................................................... 148
A.3.3 Two-Element Parallel........................................................................................148
A.3.4 Two-Element Standby....................................................................................... 149
A.3.5 Single Element with Multiple Failure Modes................................................... 149
A.4 Installation....................................................................................................................... 150
A.4.1 CARMS Package .............................................................................................. 150
A.4.2 Distribution Disk............................................................................................... 150
A.4.3 Running the CARMS program ......................................................................... 150
A.4.4 Data Entry ......................................................................................................... 153
A.4.5 Step-by-Step Example ...................................................................................... 155
A.5 Algorithm Selection ........................................................................................................ 157
A.6 Markov Model and State Diagram Construction ............................................................ 157
A.7 Special Functions ............................................................159
Appendix B CARMS Reference..................................................................................160
B.1 Keyboard ......................................................................................................................... 160
B.2 Mouse Translation To Keyboard..................................................................................... 160
B.3 Dialog Button Keys .........................................................................................................160
B.4 Commands....................................................................................................................... 160
B.4.1 File .................................................................................................................... 162
B.4.2 Options .............................................................................................................. 163
B.4.3 Report................................................................................................................ 165
B.4.4 View .................................................................................................................. 166
B.4.5 Diagram Draw or Diagram ............................................................................... 166
B.4.6 Transition Table or Table .................................................................................. 170
B.4.7 Simulation Control or Simulation..................................................................... 172
B.4.8 Names ............................................................................................................... 175
B.4.9 Miscellaneous ................................................................................................... 175
B.5 Definitions....................................................................................................................... 175
Appendix C Ada Expert System (AES).......................................................................177
C.1 Interactive AES User's Manual ....................................................................................... 177
C.1.1 Using AES ........................................................................................................ 177
C.1.2 Prolog Syntax.................................................................................................... 180
C.2 Introduction to Prolog Programming .............................................................................. 184
C.3 Built-in Predicates ........................................................................................................... 188
C.3.1 Glossary of Built-ins ......................................................................................... 191
C.3.2 Non-standard built-ins ......................................................................................196
C.3.3 Debugging Facilities .........................................................................................196
C.3.4 Special Functionality ........................................................................................ 198
Appendix D Ada for MS-Windows Applications........................................................199
D.1 Dialog Box Interface ....................................................................................................... 200
D.2 Interapplication Messaging ............................................................................................. 202
D.3 Summary ......................................................................................................................... 205
Appendix E Automated Interface Code Generation from Ada Specifications............206
E.1 Problem Definition.......................................................................................................... 206
E.2 Example of Ada to Expert System Mapping................................................................... 208
E.3 Interface Generation Methods......................................................................................... 210
E.4 Summary ......................................................................................................................... 211
Appendix F Ada Tasking for Simulation Applications................................................213
F.1 Background ..................................................................................................................... 213
F.2 Example Design .............................................................................................................. 213
F.3 Example Program ............................................................................................................ 214
F.4 Ada Tasking Petri Net Simulation................................................................................... 216
Appendix G List of Acronyms.....................................................................................220
Appendix H References ...............................................................................................224
List of Figures
Figure 1-1. Goals and Requirements.........................................................................................1
Figure 1-2. Design Reliability and Performance Evaluation ....................................................2
Figure 1-3. Fault Tolerance Impact on System Design.............................................................6
Figure 1-4. Redundancy Evaluator Prototype for ITB..............................................................7
Figure 1-5. Selected Architecture for this Study.......................................................................7
Figure 1-6. Conditional Fault Tree............................................................................................9
Figure 1-7. Markov Diagram ....................................................................................................9
Figure 2-1. VHDL and Tool Integration ................................................................................. 16
Figure 2-2. Waterfall Design Process...................................................................................... 19
Figure 2-3. Design Trade-offs.................................................................................................20
Figure 2-4. System Specification to System Evaluation......................................................... 21
Figure 2-5. Trade-off Studies in System Design..................................................................... 21
Figure 2-6. Trade-off Task Sequence ...................................................................................... 21
Figure 2-7. System Effectiveness Analysis Tasks................................................................... 23
Figure 2-8. System Effectiveness Definition .......................................................................... 26
Figure 2-9. Performability and System Effectiveness Computation....................................... 27
Figure 2-10. Model Interrelationships....................................................................................... 27
Figure 2-11. Resource Allocation in System Design ................................................................ 28
Figure 2-12. System Cost Optimization Model Interaction ...................................................... 28
Figure 2-13. Cost-Effectiveness Optimization Task ................................................................. 29
Figure 4-1. Tool Environment................................................................................................. 41
Figure 4-2. Specific Tool Environment Implementation ........................................................ 41
Figure 5-1. System Configuration........................................................................................... 52
Figure 5-2. Markov Diagram for a Triple with a Spare ..........................................................53
Figure 5-3. Markov Diagram with Recovery .......................................................................... 54
Figure 5-4. Approximate Semi-Markov Diagram...................................................................54
Figure 5-5. System Operational State Definition .................................................................... 57
Figure 5-6. Critical Failure State Mapping .............................................................................58
Figure 5-7. Failure Severity Mapping .....................................................................................58
Figure 5-8. Failure Classification Reduction ..........................................................................58
Figure 5-9. CPU Mapping.......................................................................................................59
Figure 5-10. Probability Computation Mapping.......................................................................60
Figure 5-11. Outcome Tree ....................................................................................................... 61
Figure 5-12. Truncated Outcome Tree ......................................................................................61
Figure 5-13. Expert System/Decision Module.......................................................................... 68
Figure 6-1. Intelligent Tool Prototype Configuration .............................................................69
Figure 6-2. Expert System Method Selection, and Rule Example.......................................... 69
Figure 6-3. Method Selection Steps ........................................................................................ 70
Figure 6-4. Translation selection............................................................................................. 71
Figure 6-5. a) RBD for 1-of-N System. b) Markov Model for 1-of-2 System....................... 72
Figure 6-6. Intelligent Tool Operational Modes..................................................................... 74
Figure 6-7. Generic Tool Interface ..........................................................................................76
Figure 6-8. CARMS Interactive Environment ........................................................................77
Figure 6-9. DDE Communication Paths Through Application Objects.................................. 79
Figure 6-10. Markov Model for Four Parallel Components, at least 2 needed to work ........... 83
Figure 6-11. Reliability Block Diagram and its Success Path Diagram................................... 84
Figure 6-12. Manually-composed Success Path Description.................................................... 84
Figure 6-13. Automatically Generated Path Specification Using Nodes.................................. 85
Figure 6-14. Generated Markov Model.....................................................................................85
Figure 6-15. Standard Fault Tree Representation and CARMS Input ...................................... 86
Figure 6-16. Rulebase for Fault Tree ........................................................................................ 86
Figure 6-17. Markov State Diagram Generated From Rulebase............................................... 87
Figure 6-18. Petri Net Description of a Maintenance Model.................................................... 87
Figure 6-19. Automatically Generated ASSIST-like Description............................................. 88
Figure 6-20. Automatic Generation of Markov Model from SPN Description ........................88
Figure 6-21. Rulebase for a Failure Modes and Effects Description........................................89
Figure 6-22. Generated Model from the FME Description....................................................... 90
Figure 6-23. Executable Rulebase to Evaluate Several Failure Rates ...................................... 90
Figure 6-24. Rulebase to Generate Performability Measures on Two Submodels ...................91
Figure 6-25. Petri Net Representation of Bus Arbitration. ....................................................... 92
Figure 6-26. Automatically Generated VHDL from Petri Net.................................................. 93
Figure 6-27. Graphical and Textual Descriptions of a Queueing System................................. 94
Figure 6-28. Petri Net Diagram Translated to GSPN Simulator Format ..................................95
Figure 6-29. VHDL Specification for FMES............................................................................ 98
Figure 7-1. Functional Diagram of System. .......................................................................... 100
Figure 7-2. Communication Link Architecture.....................................................................102
Figure 7-3. Avionics Software Hierarchy Based on Mission Profile...................................... 103
Figure 7-4. Mission Phase/Time Profile................................................................................105
Figure 7-5. Pave Pillar Software Environment ..................................................................... 108
Figure 7-6. RRM Mission Manager and System Executive.................................................. 110
Figure 7-7. ITB Hardware Configuration ............................................................................. 111
Figure 7-8. Simulation and Prototyping for Behavioral and Dynamic Analysis .................. 116
Figure 7-9. Subsystem Template ...........................................................................................116
Figure 7-11. CARMS Markov Model of Cluster of 3 CPUs (itb_3par.mm) .......................... 117
Figure 7-10. ITB Model Factbase .......................................................................................... 118
Figure 7-12. System Computation Flow .................................................................................119
Figure 7-13. AES Prolog Executable ITB Model ...........