ABSTRACT: The infrastructures used in cities to supply power, water and gas are consistently becoming more automated. As society depends critically on these cyber-physical infrastructures, their survivability assessment deserves more attention. In this overview, we first touch upon a taxonomy of survivability of cyber-physical infrastructures, before we focus on three classes of infrastructures (gas, water and electricity) and discuss recent modelling and evaluation approaches and challenges.
Electronic Notes in Theoretical Computer Science 01/2015; 310. DOI:10.1016/j.entcs.2014.12.010
ABSTRACT: Waste water treatment facilities clean sewage water from households and industry in several cleaning steps. Such facilities are dimensioned to accommodate a maximum intake. However, in the case of very bad weather conditions or failures of system components, the system might not suffice to accommodate all waste water. This paper models a real waste water treatment facility, situated in the city of Enschede, The Netherlands, as a Hybrid Petri net with a single general one-shot transition (HPnG) and analyses under which circumstances the existing infrastructure will overflow. This required extending the HPnG formalism to model dependencies both on continuous places and on the rate of continuous transitions. Using recent algorithms for model checking STL properties on HPnGs, the paper computes survivability measures that can be expressed using the path-based until operator. After computing measures for a wide range of parameters, we provide recommendations as to where the system can be improved to reduce the probability of overflow.
7th International Conference on Performance Evaluation Methodologies and Tools; 01/2014
ABSTRACT: The next generation power grid (the “Smart Grid”) aims to minimize environmental impact, enhance markets, improve reliability and service, and reduce costs and improve efficiency of electricity distribution. One of the main protocol frameworks used in Smart Grids is IEC 61850. Together with the Manufacturing Message Specification (MMS) protocol, IEC 61850 ensures interoperability within the Smart Grid by standardizing the data models and services to support Smart Grid communications, most notably, smart metering and remote control. Long Term Evolution (LTE) is a fourth-generation (4G) cellular communications standard that provides high-capacity, low-latency, secure and reliable data-packet switching. This paper investigates whether LTE can be used in combination with IEC 61850 and MMS to support smart metering and remote control communications at a desirable quality of service level. Using ns-3 simulation models, it is shown that LTE can indeed satisfy the main IEC 61850 and MMS performance requirements for these two applications.
Measurement, Modelling, and Evaluation of Computing Systems and Dependability and Fault Tolerance, 01/2014: pages 225-239;
ABSTRACT: In this paper we propose a formal, model-checking-based procedure to evaluate the survivability of fluid critical infrastructures. To do so, we introduce the Stochastic Time Logic (STL), which allows us to precisely express intricate state-based and until-based properties for an important class of hybrid Petri nets. We present an efficient model-checking procedure which recursively traverses the underlying state space of the hybrid Petri net model, and identifies those regions (subsets of the discrete-continuous state space) that satisfy STL formulae. A case study on the survivability of a water refinery and distribution plant shows the feasibility of our approach.
Proceedings of the 2013 IEEE 19th Pacific Rim International Symposium on Dependable Computing; 12/2013
ABSTRACT: Since John F. Meyer introduced performability in 1980, various algorithms have been proposed to evaluate the performability distribution. In this paper we describe and compare five algorithms that have recently been proposed to evaluate this distribution: Picard's method, a uniformisation-based method, a path-exploration method, a discretisation approach and a fully Markovian approximation. As a result of our study, we recommend against using Picard's method, due to numerical stability problems. Furthermore, the path-exploration method turns out to depend heavily on the branching structure of the Markov-reward model under study. For small models, the uniformisation method is preferable; however, its complexity makes it impractical for larger models. The discretisation method performs well, also for larger models; however, it does not easily apply in all cases. The recently proposed Markovian approximation works best, even for large models; however, error bounds cannot be given for it.
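Of the five methods compared above, the uniformisation-based one is the easiest to make concrete. The sketch below computes the transient state distribution of a small CTMC by uniformisation (Jensen's method), the core step of that approach; the two-state repairable-component generator and the time horizon are illustrative, not taken from the paper.

```python
import math

def transient_distribution(Q, p0, t, eps=1e-12):
    """Compute p0 * exp(Q t) by uniformisation."""
    n_states = len(Q)
    rate = max(-Q[i][i] for i in range(n_states))        # uniformisation rate Lambda
    # Embedded DTMC: P = I + Q / Lambda
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / rate for j in range(n_states)]
         for i in range(n_states)]
    term = list(p0)                                      # p0 * P^n, updated in place
    weight = math.exp(-rate * t)                         # Poisson(Lambda*t) weight
    result = [weight * x for x in term]
    n, mass = 0, weight
    while 1.0 - mass > eps:                              # stop once Poisson mass is spent
        n += 1
        term = [sum(term[i] * P[i][j] for i in range(n_states))
                for j in range(n_states)]
        weight *= rate * t / n
        mass += weight
        result = [r + weight * x for r, x in zip(result, term)]
    return result

# Illustrative two-state repairable component: failure rate 0.1, repair rate 1.0.
Q = [[-0.1, 0.1], [1.0, -1.0]]
pi = transient_distribution(Q, [1.0, 0.0], t=10.0)
```

At t = 10 the distribution is essentially stationary, so pi is close to (10/11, 1/11), the steady state of this generator.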
ABSTRACT: This paper gives a bird's-eye view of the various ingredients that make up a modern, model-checking-based approach to performability evaluation: Markov reward models, temporal logics and continuous stochastic logic, model-checking algorithms, bisimulation and the handling of non-determinism. A short historical account as well as a large case study complete this picture. In this way, we show convincingly that the smart combination of performability evaluation with stochastic model-checking techniques, developed over the last decade, provides a powerful and unified method of performability evaluation, thereby combining the advantages of earlier approaches.
ABSTRACT: Recently, hybrid Petri nets with a single general one-shot transition (HPnGs) have been introduced, together with an algorithm that analyses their underlying state space using a conditioning/deconditioning approach. In this paper we propose a considerably more efficient algorithm for analysing HPnGs. The proposed algorithm maps the underlying state space onto a plane spanned by all possible firing times s of the general transition and all possible system times t. The key idea of the proposed method is that instead of dealing with infinitely many points in the t-s-plane, we can partition the state space into several regions, such that all points inside one region are associated with the same system state. To compute the probability of being in a specific system state at time τ, it suffices to find all regions intersecting the line t=τ and to decondition the firing time over the intersections. This partitioning results in a considerable speed-up and provides more accurate results. A scalable case study illustrates the efficiency gain with respect to the previous algorithm.
Proceedings of the 10th international conference on Formal Modeling and Analysis of Timed Systems; 09/2012
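The deconditioning step described above can be sketched numerically: once the regions intersecting the line t=τ are known, the probability of each system state is the firing-time density integrated over the corresponding s-intervals. The interval layout, the state names and the exponential firing-time density below are illustrative assumptions, not the paper's model.

```python
from math import exp

def decondition(intersections, density, steps=10_000):
    """intersections: {state: [(s_lo, s_hi), ...]} on the line t = tau."""
    probs = {}
    for state, intervals in intersections.items():
        total = 0.0
        for s_lo, s_hi in intervals:
            h = (s_hi - s_lo) / steps            # trapezoidal integration of the density
            total += sum(0.5 * h * (density(s_lo + i * h) + density(s_lo + (i + 1) * h))
                         for i in range(steps))
        probs[state] = total
    return probs

lam = 0.5                                        # assumed exponential firing time, rate 0.5
density = lambda s: lam * exp(-lam * s)
# Suppose the region analysis at time tau yields state 'overflow' for s in [0, 1)
# and 'ok' for s >= 1 (truncated at s = 20, where the density mass is negligible).
probs = decondition({'overflow': [(0.0, 1.0)], 'ok': [(1.0, 20.0)]}, density)
```

For this density, probs['overflow'] approximates 1 - e^(-0.5) ≈ 0.393 and probs['ok'] the remaining mass.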
ABSTRACT: In this paper we describe the SoftArc approach, which makes it possible to model and analyse safety-critical embedded and distributed systems that consist of both hardware and software. We present the SoftArc modelling language and its syntax and semantics; the semantics is defined in terms of stochastic reactive modules. We show how important measures of interest for probabilistic dependability analysis, such as availability, unavailability, and survivability, can be analysed. We demonstrate the feasibility of our approach by means of two case studies that involve hardware and software elements. First, we present an industrial case study from the automotive industry: the non-volatile random access manager (NVRAM) from the AUTOSAR open system architecture. Second, we present the survivability analysis of a simplified version of the Google replicated file system.
ABSTRACT: The Analytical Software Design (ASD) method of the company Verum has been designed to reduce the number of errors in embedded software. However, it does not take performance issues into account, which can also have a major impact on the duration of software development. This paper presents a discrete-event simulator for the performance evaluation of ASD-structured software as well as a compositional numerical analysis method using fixed-point iteration and phase-type distribution fitting. Whereas the numerical analysis is highly accurate for non-interfering tasks, its accuracy degrades when tasks run in opposite directions through interdependent software blocks and the utilization increases. A thorough validation identifies the underlying problems when analyzing the performance of embedded software.
ABSTRACT: Inspired by applications in the context of stochastic model checking, we are interested in using simulation to estimate the probability of reaching a specific state in a Markov chain after a large amount of time tau has passed. Since this is a rare event, we apply importance sampling. We derive approximate expressions for the sojourn times on a given path in a Markov chain, conditional on their sum exceeding tau, and use those expressions to construct a change of measure. Numerical examples show that this change of measure performs very well, leading to high-precision estimates in short simulation times.
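As a rough illustration of the setting, the sketch below uses an exponential change of measure (tilting) to estimate the rare probability that the total sojourn time along a fixed path exceeds tau. The path, its rates and the tilt parameter are illustrative assumptions; the paper derives its change of measure from conditional sojourn-time expressions rather than from plain tilting.

```python
import math
import random

def is_estimate(rates, tau, theta, runs=200_000, seed=42):
    """Importance-sampling estimate of P(sum of Exp(rates) sojourn times > tau)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        lr, s = 1.0, 0.0
        for lam in rates:
            x = rng.expovariate(lam - theta)                  # sample under tilted measure
            lr *= lam / (lam - theta) * math.exp(-theta * x)  # accumulate likelihood ratio
            s += x
        if s > tau:                                           # rare event: path exceeds tau
            total += lr
    return total / runs

rates, tau = [1.0, 1.0, 1.0], 20.0
theta = 1.0 - len(rates) / tau        # tilt so the mean total sojourn time is ~tau
est = is_estimate(rates, tau, theta)
# Exact value for comparison: Erlang(3, 1) tail = e^{-tau} * (1 + tau + tau^2/2)
exact = math.exp(-tau) * (1 + tau + tau**2 / 2)
```

Crude Monte Carlo would almost never see this event (probability ~5e-7), whereas the tilted estimator recovers it to within a few percent with 200,000 runs.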
ABSTRACT: Recently, many systems consisting of a large number of interacting objects have been analysed using the mean-field method, which has so far only been used for performance evaluation. In this short paper, we apply it to model checking. We define a logic that allows us to describe the overall properties of such a large system.
ABSTRACT: This paper studies quantitative model checking of infinite tree-like (continuous-time) Markov chains. These tree-structured quasi-birth-death processes are equivalent to probabilistic pushdown automata and recursive Markov chains, and are widely used in the field of performance evaluation. We determine time-bounded reachability probabilities in these processes, which with direct methods, i.e., uniformization, result in an exponential blow-up, by applying abstraction. We contrast abstraction based on Markov decision processes (MDPs) with interval-based abstraction, study various schemes to partition the state space, and empirically show their influence on the accuracy of the obtained reachability probabilities. Results show that grid-like schemes, in contrast to chain- and tree-like ones, yield extremely precise approximations for rather coarse abstractions.
ABSTRACT: Gossip protocols are designed to operate in very large, decentralised networks. A node in such a network bases its decision to interact (gossip) with another node on its partial view of the global system. Because of the size of these networks, analysis of gossip protocols is mostly done using simulations, which tend to be expensive in computation time and memory consumption. We employ mean-field analysis techniques for the evaluation of gossip protocols. Nodes in the network are represented by small, identical stochastic processes. Joining all nodes would result in an enormous stochastic process. If the number of nodes goes to infinity, however, mean-field analysis allows us to replace this intractably large stochastic process by a small deterministic process. This process approximates the behaviour of very large gossip networks, and can be evaluated using simple matrix-vector multiplications.
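A minimal sketch of the mean-field iteration described above: the N identical node processes are replaced by one occupancy vector m, advanced by matrix-vector products whose entries may depend on m itself. The two-state push-gossip dynamics below are an illustrative example, not a protocol from the paper.

```python
import math

def mean_field_trajectory(m0, rounds):
    """Iterate the occupancy vector m = (uninformed, informed) for a push gossip."""
    m = list(m0)
    out = [tuple(m)]
    for _ in range(rounds):
        # An uninformed node hears the rumour if at least one informed node,
        # pushing to a uniformly random target, picks it: in the mean-field
        # limit this happens with probability 1 - exp(-m_informed).
        p = 1.0 - math.exp(-m[1])
        K = [[1.0 - p, p],                 # occupancy-dependent transition kernel
             [0.0, 1.0]]                   # informed nodes stay informed
        m = [sum(m[i] * K[i][j] for i in range(2)) for j in range(2)]
        out.append(tuple(m))
    return out

# Starting from 1% informed nodes, the rumour spreads to (almost) everyone.
traj = mean_field_trajectory([0.99, 0.01], rounds=30)
```

Each round costs one small matrix-vector multiplication, independent of the number of nodes, which is exactly the pay-off the abstract describes.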
ABSTRACT: Bounded model checking (BMC) is an incremental refutation technique to search for counterexamples of increasing length. The existence of a counterexample of a fixed length is expressed by a first-order logic formula that is checked for satisfiability ...
Journal of Logic and Computation 01/2011; 21(1):1-3. DOI:10.1093/logcom/exp001
ABSTRACT: Peer-to-peer botnets, as exemplified by the Storm Worm and Stuxnet, are a relatively new threat to security on the internet: infected computers automatically search for other computers to infect, thus spreading the infection rapidly. In a recent paper, such botnets have been modelled using Stochastic Activity Networks, allowing the use of discrete-event simulation to judge strategies for combating their spread. In the present paper, we develop a mean-field model for analysing botnet behaviour and compare it with simulations obtained from the Möbius tool. We show that the mean-field approach provides accurate and orders-of-magnitude faster computation, thus providing very useful insight into spread characteristics and the effectiveness of countermeasures.
Computer Performance Engineering - 8th European Performance Engineering Workshop, EPEW 2011, Borrowdale, UK, October 12-13, 2011. Proceedings; 01/2011
ABSTRACT: The use of mobile devices is often limited by the lifetime of their batteries. For devices that have multiple batteries, or the option to connect an extra battery, battery scheduling, which exploits the recovery properties of the batteries, can help to extend the system lifetime. Due to the complexity of the problem, work on battery scheduling in the literature is limited to either small batteries or very simple loads. In this paper, we present an approach using the Kinetic Battery Model that combines real-size batteries with realistic random loads. The results show that battery scheduling indeed improves the system lifetime compared to sequential usage of the batteries; the improvement mainly depends on the ratio between the average discharge current and the battery capacity. For realistic loads, battery scheduling achieves up to 20% improvement in system lifetime.
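The Kinetic Battery Model (KiBaM) named above can be sketched with a simple forward-Euler discretisation of its standard two-well dynamics; the capacity, rate constant and load used here are illustrative values, not the paper's.

```python
def kibam_lifetime(capacity, c, k, load, dt=0.1):
    """Time (s) until the available-charge well is empty under load(t) amperes."""
    y1, y2 = c * capacity, (1.0 - c) * capacity   # available / bound charge wells
    t = 0.0
    while y1 > 0.0:
        h1, h2 = y1 / c, y2 / (1.0 - c)           # well heights
        flow = k * (h2 - h1)                      # bound -> available diffusion
        y1 += dt * (flow - load(t))               # drain load, receive diffusion
        y2 -= dt * flow
        t += dt
    return t

# Constant 1 A load on a 3600 As battery with 60% of charge initially available:
# the lifetime exceeds c*capacity/load seconds, because bound charge is
# recovered during discharge, but stays below the ideal capacity/load bound.
life = kibam_lifetime(capacity=3600.0, c=0.6, k=1e-4, load=lambda t: 1.0)
```

It is this recovery effect that scheduling between multiple batteries exploits: idle periods let the bound well refill the available well.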
ABSTRACT: This paper presents an adaptive resource control mechanism for multihop ad-hoc network systems, which avoids bottleneck problems caused by the node-fairness property of IEEE 802.11. In our proposal, the feedback information from the downstream bottleneck, derived from Request-To-Send (RTS) and Clear-To-Send (CTS) messages, is utilized to control the Transmission Opportunity (TXOP) limit of the upstream nodes for traffic balancing. The proposed mechanism is modelled control-theoretically using the 20-sim control system modelling tool, which has the advantage that results can be obtained in a fast and efficient way. Compared to systems without resource control, a higher throughput and lower delay can be achieved under a variety of traffic load conditions as well as in dynamic network environments.
Wired/Wireless Internet Communications - 9th IFIP TC 6 International Conference, WWIC 2011, Vilanova i la Geltrú, Spain, June 15-17, 2011. Proceedings; 01/2011
ABSTRACT: In this short paper I will address the question of whether the methods and techniques we develop are applied well in industrial practice. To address this question, I will make a few observations from the academic field, as well as from industrial practice. This will be followed by a concise analysis of the cause of the perceived gap between the academic state of the art and industrial practice. I will conclude with some opportunities for improvement.
Formal Modeling and Analysis of Timed Systems - 9th International Conference, FORMATS 2011, Aalborg, Denmark, September 21-23, 2011. Proceedings; 01/2011