Techniques for the Fast Simulation of Models of Highly Dependable Systems

Dept. of Electrical Engineering, University of Twente, Enschede
IEEE Transactions on Reliability, 10/2001; 50(3):246-264. DOI: 10.1109/24.974122
Source: IEEE Xplore


With the ever-increasing complexity and requirements of highly
dependable systems, their evaluation during design and operation is
becoming more crucial. Realistic models of such systems are often not
amenable to analysis using conventional analytic or numerical methods.
Therefore, analysts and designers turn to simulation to evaluate these
models. However, accurate estimation of dependability measures of these
models requires that the simulation frequently observes system failures,
which are rare events in highly dependable systems. This renders
ordinary simulation impractical for evaluating such systems. To overcome
this problem, simulation techniques based on importance sampling have
been developed, and are very effective in certain settings. When
importance sampling works well, simulation run lengths can be reduced by
several orders of magnitude when estimating transient as well as
steady-state dependability measures. This paper reviews some of the
importance-sampling techniques that have been developed in recent years
to estimate dependability measures efficiently in Markov and non-Markov
models of highly dependable systems.
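As a minimal illustration of why importance sampling helps with rare events (a hypothetical example, not one of the specific estimators surveyed in the paper), the sketch below estimates P(X > 20) for X ~ Exp(1), a probability near 2.1e-9 that crude Monte Carlo would essentially never observe. Sampling instead from a tilted exponential with rate 0.05 makes the event common, and the likelihood ratio f(x)/g(x) corrects the bias:

```python
import math
import random

random.seed(42)

THRESHOLD = 20.0
TRUE_P = math.exp(-THRESHOLD)  # P(Exp(1) > 20) = e^{-20}, about 2.06e-9

def is_estimate(n=100_000, tilt_rate=0.05):
    """Importance-sampling estimate of P(X > THRESHOLD), X ~ Exp(1).

    Samples are drawn from the tilted density g(x) = tilt_rate*exp(-tilt_rate*x),
    under which the rare event {x > THRESHOLD} is common; each hit is
    weighted by the likelihood ratio f(x)/g(x) to keep the estimate unbiased.
    """
    total = 0.0
    for _ in range(n):
        x = random.expovariate(tilt_rate)
        if x > THRESHOLD:
            f = math.exp(-x)                          # original density
            g = tilt_rate * math.exp(-tilt_rate * x)  # sampling density
            total += f / g
    return total / n

est = is_estimate()
```

With 10^5 tilted samples the relative error here is on the order of a percent, whereas crude Monte Carlo would need roughly 10^11 samples just to observe the event a few times; how much variance reduction is achieved depends on the choice of tilting rate.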



Available from: Victor F. Nicola, Oct 16, 2013
    • "Sampling may be from parametric distributions, or by bootstrap resampling from observed data; see (Cheng, 1995; Zio, 2009) for details and references. The number of simulated iterations required for stable results may be very large, depending on the probability of the passage of interest (Law & Kelton, 2000; Nicola, Shahabuddin & Nakayama, 2001). For example, consider the process in figure 3, a system in which a component, subject to wearing out, can with probability p be immediately replaced with a spare. "
    ABSTRACT: Two-state models (working/failed or alive/dead) are widely used in reliability and survival analysis. In contrast, multistate stochastic processes provide a richer framework for modeling and analyzing the progression of a process from an initial to a terminal state, allowing incorporation of more details of the process mechanism. We review multistate models, focusing on time-homogeneous semi-Markov processes (SMPs), and then describe the statistical flowgraph framework, which comprises analysis methods and algorithms for computing quantities of interest such as the distribution of first passage times to a terminal state. These algorithms algebraically combine integral transforms of the waiting time distributions in each state and invert them to get the required results. The estimated transforms may be based on parametric distributions or on empirical distributions of sample transition data, which may be censored. The methods are illustrated with several applications.
    Full-text · Article · Apr 2013 · International Statistical Review
    • "As system failure is a rare event in these models, crude simulations require prohibitively long execution times to produce precise estimates. Several techniques that use the method of importance sampling have been developed to force the system to fail more frequently within a simulation experiment; see [1] and [2] for an overview of these methods. The other important method for rare event simulation is RESTART (Repetitive Simulation Trials After Reaching Thresholds). "
    ABSTRACT: RESTART is a widely applied accelerated simulation technique that allows the evaluation of very low probabilities. In this method, a number of simulation retrials are performed when the process enters regions of the state space where the chance of occurrence of the rare event is higher. Formulas for evaluating the optimal number of regions and retrials have been provided in previous papers. Guidelines were also provided for obtaining a suitable function, the importance function, used to define the regions. This paper provides a simple importance function that can be useful for RESTART simulation of models of many highly dependable systems. Some examples from the literature illustrate the application of this importance function. The steady-state unavailability of balanced systems is accurately estimated within short computational times; the unavailability of an unbalanced system is also estimated, though with much more computational effort.
    Preview · Article · Dec 2007 · SIMULATION: Transactions of The Society for Modeling and Simulation International
    • "First, simulation produces results that are only an approximation of the exact values. A confidence interval is used to characterize the accuracy of the results [12] [14] [18]. Second, in many situations the results are obtained only through the use of demanding computational resources (CPU time and memory). "
    ABSTRACT: The paper proposes a dependability model which enables the evaluation of Profibus-DP networks in scenarios of transient faults that affect data communications. The full behavior of the Profibus-DP communication stack is modeled, including cyclic process data exchange between master and slave stations, as well as the configuration, parameterization, and diagnostics of slave stations. The model is based on a high-level stochastic Petri net formalism referred to as stochastic activity networks (SAN), supported by the Mobius tool; high modeling power, state-of-the-art analytical and simulation solutions, and a flexible, integrated environment are its main features. Dependability measures are established from the fulfillment of the real-time constraints (deadlines) defined on process data messages exchanged between master and slave stations. The reward concept is used to define the measures, which are obtained by means of a simulation approach. A case study is proposed to assess the model's performance.
    Full-text · Conference Paper · Oct 2005
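The RESTART mechanism quoted above (retrials triggered when the process crosses thresholds of an importance function) can be sketched on a toy model. Everything below is a hypothetical illustration: a biased random walk on {0, ..., 10} standing in for a repairable system, with the current level as the importance function and a fixed number of retrials per threshold; the target probability of hitting level 10 before 0 has the gambler's-ruin value of roughly 2.8e-4:

```python
import random

random.seed(7)

P_UP, LEVEL_MAX, RETRIALS = 0.3, 10, 3  # failure-prone walk, 3 retrials per threshold

def walk(level, best):
    """Weighted estimate of reaching LEVEL_MAX before 0, starting from `level`.

    `best` is the highest threshold reached so far on this path; the first
    time a new threshold is crossed, RETRIALS retrials are spawned, each
    carrying weight 1/RETRIALS (fixed-effort splitting, as in RESTART).
    """
    while True:
        level += 1 if random.random() < P_UP else -1
        if level == 0:
            return 0.0
        if level == LEVEL_MAX:
            return 1.0
        if level > best:
            # Importance function = current level: split on each new threshold.
            return sum(walk(level, level) for _ in range(RETRIALS)) / RETRIALS

def restart_estimate(n=5000):
    return sum(walk(1, 1) for _ in range(n)) / n

est = restart_estimate()
```

Without splitting, only about one start in 3600 would ever reach level 10; the retrials maintain a steady population of trajectories near the rare set, which is the effect a well-chosen importance function aims for in realistic dependability models.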