Passage-time computation and aggregation strategies for large semi-Markov processes

Department of Computing, Imperial College London, 180 Queen’s Gate, London SW7 2BZ, United Kingdom
Performance Evaluation (Impact Factor: 1.25). 03/2011; 68(3):221-236. DOI: 10.1016/j.peva.2010.10.003
Source: DBLP


High-level semi-Markov modelling paradigms such as semi-Markov stochastic Petri nets and process algebras are used to capture realistic performance models of computer and communication systems but often have the drawback of generating huge underlying semi-Markov processes. Extraction of performance measures such as steady-state probabilities and passage-time distributions therefore relies on sparse matrix–vector operations involving very large transition matrices. Previous studies have shown that exact state-by-state aggregation of semi-Markov processes can be applied to reduce the number of states. This can, however, lead to a dramatic increase in matrix density caused by the creation of additional transitions between remaining states. Our paper addresses this issue by presenting the concept of state space partitioning for aggregation. We present a new deterministic partitioning method which we term barrier partitioning. We show that barrier partitioning is capable of splitting very large semi-Markov models into a number of partitions such that first passage-time analysis can be performed more quickly and using up to 99% less memory than existing algorithms.
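
The density problem described above is visible in the aggregation step itself: eliminating a state k from an SMP in Laplace space adds a correction term, accounting for all paths through k (including k's self-loops), to every remaining pair of states. The following is a minimal sketch of exact state-by-state aggregation, not the paper's implementation; the dense, closure-based kernel representation is an assumption made for brevity.

```python
# Kernel of a toy SMP held as an n x n matrix r, where r[i][j] is a callable
# returning the Laplace-Stieltjes transform r*_ij(s) of the transition i -> j
# (absent transitions are represented by lambda s: 0.0).

def eliminate_state(r, k):
    """Exactly eliminate state k: every surviving pair (i, j) gains the term
    r*_ik(s) * r*_kj(s) / (1 - r*_kk(s)), covering all paths that route
    through k, including any number of k self-loops."""
    n = len(r)

    def around_k(rij, rik, rkj, rkk):
        return lambda s: rij(s) + rik(s) * rkj(s) / (1.0 - rkk(s))

    merged = [[around_k(r[i][j], r[i][k], r[k][j], r[k][k])
               for j in range(n)] for i in range(n)]
    # Drop row and column k; passage times between survivors are unchanged.
    return [[merged[i][j] for j in range(n) if j != k]
            for i in range(n) if i != k]
```

Each elimination is exact, but the correction terms are precisely the fill-in that drives matrix density up as aggregation proceeds, which is the growth that barrier partitioning is designed to contain.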

Cited by
    • "The biggest drawback is the limitation this imposes of having to hold the entire state-space of the model in the memory of one machine, whereas with HYDRA it is distributed across multiple machines. Our recent work on aggregation suggests ways in which the state-spaces of models could be reduced in size, however [25]. We will also investigate the benefits of exploiting Amazon's dedicated Elastic Block Store (EBS) to produce a diskbased tool [26], [27], [28]. "
    ABSTRACT: Calculation of performance metrics such as steady-state probabilities and response time distributions in large Markov and semi-Markov models can be accomplished using parallel implementations of well-known numerical techniques. In the past these implementations have usually been run on dedicated computational clusters and networks of workstations, but the recent rise of cloud computing offers an alternative environment for executing such applications. It is important, however, to understand what effect moving to a cloud-based infrastructure will have on the performance of the analysis tools themselves. In this paper we investigate the scalability of two existing parallel performance analysis tools (one based on Laplace transform inversion and the other on uniformisation) on Amazon's Elastic Compute Cloud, and compare this with their performance on traditional dedicated hardware. This provides insight into whether such tools can be used effectively in a cloud environment, and suggests factors which must be borne in mind when designing next-generation performance tools specifically for the cloud.
    (A minimal sketch of uniformisation, one of the two techniques compared here, appears after this list.)
  • ABSTRACT: First-passage time densities and quantiles are important metrics in performance analysis. They are used in the analysis of mobile communication systems, web servers and manufacturing systems, as well as in assessing the quality of service of hospitals and government organisations. In this report we examine computational techniques for first-passage time analysis of high-level models that translate to Markov and semi-Markov processes. In particular we study exact first-passage time analysis of semi-Markov processes. Previous studies have shown that it is possible to determine passage times analytically by solving a large set of linear equations in Laplace space. The set of linear equations arises from the state transition graph of the Markov or semi-Markov process, which is usually derived from high-level models such as process algebras or stochastic Petri nets. The difficulty in passage-time analysis is that even simple high-level models can produce large state transition graphs with several million states and transitions. These are difficult to analyse on modern hardware because of limitations in the size of main memory. Whilst for Markov processes there exist several efficient techniques that allow the analysis of large chains with more than 100 million states, in the semi-Markov domain such techniques are still less developed. Consequently, parallel passage-time analyser tools currently only work on semi-Markov models with fewer than 50 million states. This study extends existing techniques and presents new approaches for state space reduction and faster first-passage time computation on large semi-Markov processes. We show that intelligent state space partitioning methods can reduce the amount of main memory needed for the evaluation of first-passage time distributions in large semi-Markov processes by up to 99% and decrease the runtime by a factor of up to 5 compared with existing semi-Markov passage-time analyser tools. Finally we outline a new passage-time analysis tool chain that has the potential to solve semi-Markov processes with more than 1 billion states on contemporary computer hardware.
    (The linear system in Laplace space that this report refers to is sketched after this list.)
  • ABSTRACT: As a novel disaster-survival technology, Continuous Data Protection (CDP) can restore a protected system to its state at any point in the past. Until now, no efficient survivability evaluation method for CDP systems has been developed. To address this problem, a semi-Markov process (SMP) is applied to the survivability of CDP: an SMP model for CDP survivability analysis is established, a quantitative survivability metric is calculated, and survivability-enhancing strategies are proposed accordingly. © 2011
    (An SMP steady-state sketch, a common basis for such metrics, appears after this list.)
    Proceedings of SPIE - The International Society for Optical Engineering 10/2011; 8205:57-. DOI:10.1117/12.906088 · 0.20 Impact Factor
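
For the Markov side of the first citing paper above, uniformisation replaces the CTMC generator Q with the one-step matrix of a uniformised DTMC and sums Poisson-weighted jump terms. The cited tool is parallel and distributed; this is only a serial sketch of the core recurrence, in which the rate inflation factor and the truncation test are assumptions.

```python
import numpy as np

def transient_distribution(Q, pi0, t, eps=1e-12):
    """Uniformisation: the state distribution of a CTMC with generator Q at
    time t, i.e. pi(t) = sum_k Poisson(q*t; k) * pi0 @ P^k, truncated."""
    q = 1.02 * max(-np.diag(Q))       # uniformisation rate, just above the largest exit rate
    P = np.eye(len(Q)) + Q / q        # one-step matrix of the uniformised DTMC
    weight = np.exp(-q * t)           # Poisson(q*t) probability of zero jumps
    v = np.asarray(pi0, dtype=float)
    result = weight * v
    k = 0
    while k < q * t or weight > eps:  # pass the Poisson mode, then stop on a negligible tail
        k += 1
        v = v @ P
        weight *= q * t / k
        result += weight * v
    return result
```

Making the target states absorbing before running this recurrence turns the probability mass accumulated in those states by time t into a passage-time CDF, which is the standard way uniformisation is applied to response-time analysis.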
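
The "large set of linear equations in Laplace space" in the second abstract has, for a target set T, the form L_i(s) = sum_{j in T} p_ij h*_ij(s) + sum_{j not in T} p_ij h*_ij(s) L_j(s), i.e. (I - U(s)) L(s) = v(s), solved once per s-point. Below is a toy sketch; the four-state kernel, its probabilities and its holding-time transforms are invented for illustration, and a numerical inverter (such as the Laplace transform inversion mentioned in the first abstract) would then recover densities or CDFs from L(s).

```python
import numpy as np

# Hypothetical 4-state SMP: P holds the embedded transition probabilities and
# lst maps each transition to the LST of its holding-time distribution.
P = np.array([[0.0, 0.6, 0.4, 0.0],
              [0.0, 0.0, 0.7, 0.3],
              [0.5, 0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])               # state 3 is the absorbing target

exp_lst = lambda lam: (lambda s: lam / (lam + s))  # LST of an Exp(lam) holding time
det_lst = lambda d: (lambda s: np.exp(-s * d))     # LST of a deterministic delay d

lst = {(0, 1): exp_lst(2.0), (0, 2): det_lst(0.5),
       (1, 2): exp_lst(1.0), (1, 3): exp_lst(3.0),
       (2, 0): det_lst(1.0), (2, 3): exp_lst(0.8)}

def passage_lst(s, targets=frozenset({3})):
    """Solve (I - U(s)) L(s) = v(s) for the passage-time LST from every state."""
    n = len(P)
    U = np.zeros((n, n), dtype=complex)  # transforms of transitions avoiding the targets
    v = np.zeros(n, dtype=complex)       # transforms of one-hop absorptions into the targets
    for (i, j), h in lst.items():
        term = P[i, j] * h(s)            # r*_ij(s) = p_ij * h*_ij(s)
        if j in targets:
            v[i] += term
        else:
            U[i, j] = term
    keep = [i for i in range(n) if i not in targets]
    L = np.ones(n, dtype=complex)        # from a target state the passage time is 0, LST 1
    L[keep] = np.linalg.solve(np.eye(len(keep)) - U[np.ix_(keep, keep)], v[keep])
    return L

print(passage_lst(1.0 + 0.0j)[0])        # passage-time LST from state 0 evaluated at s = 1
```

The state transition graph enters only through the sparsity pattern of U(s); for the multi-million-state models in the report the solve would use sparse matrix–vector operations rather than the dense solver in this sketch.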
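
The survivability metric in the last abstract is not spelled out, but SMP availability and survivability analyses commonly rest on steady-state probabilities, obtained from the stationary distribution of the embedded jump chain reweighted by mean holding times. A minimal sketch under that assumption, with the embedded DTMC P and the vector m of mean holding times as hypothetical inputs:

```python
import numpy as np

def smp_steady_state(P, m):
    """Steady-state probabilities of an ergodic SMP: solve pi @ P = pi with
    sum(pi) = 1 for the embedded chain, then weight every state by its mean
    holding time and renormalise."""
    n = len(P)
    A = np.vstack([P.T - np.eye(n), np.ones(n)])  # stationarity rows + normalisation row
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)    # least squares for the overdetermined system
    w = pi * m                                    # time-weighted visit frequencies
    return w / w.sum()
```

A survivability measure is then typically a weighted sum of these probabilities over the states of the model that deliver acceptable service.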
