ABSTRACT: With the increased integration of renewable energy sources, the interaction between energy producers and consumers has become a bi-directional exchange. The electrical grid must therefore be adapted into a smart grid that effectively regulates this two-way interaction. With the aid of simulation, stakeholders can obtain information on how to properly develop and control the smart grid. In this paper, we present the development of an integrated smart grid simulation model, using the AnyLogic simulation environment. The elements included in the simulation model comprise houses connected to a renewable energy source, and batteries as storage devices. With these elements, a neighbourhood model can be constructed and simulated under multiple scenarios and configurations. The developed simulation environment gives users better insight into the effects of running different configurations in their houses, and allows developers to study the exchange of energy between elements in a smart city on multiple levels.
Full-text · Article · Nov 2015 · Electronic Notes in Theoretical Computer Science
ABSTRACT: Hypothesis testing is an important part of statistical model checking (SMC). It is typically used to verify statements of the form \(p > p_0\) or \(p < p_0\), where \(p\) is an unknown probability intrinsic to the system model and \(p_0\) is a given threshold value. Many techniques for this have been introduced in the SMC literature. We give a comprehensive overview and comparison of these techniques, starting by introducing a framework in which they can all be described. We distinguish three classes of techniques, differing in the type of output correctness guarantees they give when the true \(p\) is very close to the threshold \(p_0\). For each technique, we show how to parametrise it in terms of quantities that are meaningful to the user. Having parametrised them consistently, we graphically compare the boundaries of their decision thresholds, and numerically compare the correctness, power and efficiency of the tests. A companion website allows users to gain more insight into the properties of the tests by interactively manipulating the parameters.
No preview · Article · Aug 2015 · International Journal on Software Tools for Technology Transfer
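The threshold-testing setting described in the abstract above can be illustrated with Wald's sequential probability ratio test (SPRT), one classical member of the family of tests such surveys cover. This is a minimal sketch, not the paper's parametrisation: the indifference region `delta`, the error bounds `alpha` and `beta`, and the example sampling distribution are all assumed values for illustration.

```python
import math
import random

def sprt(sample, p0, delta=0.01, alpha=0.05, beta=0.05, max_n=1_000_000):
    """Wald's SPRT for deciding between H1: p > p0 and H0: p < p0,
    using an indifference region (p0 - delta, p0 + delta).
    `sample()` must return one Bernoulli(p) observation of the system."""
    p_lo, p_hi = p0 - delta, p0 + delta
    # Acceptance thresholds on the log-likelihood ratio (Wald's bounds).
    a = math.log(beta / (1 - alpha))
    b = math.log((1 - beta) / alpha)
    llr = 0.0
    for n in range(1, max_n + 1):
        x = sample()
        # Accumulate log L(p_hi) - log L(p_lo) for this observation.
        llr += math.log(p_hi / p_lo) if x else math.log((1 - p_hi) / (1 - p_lo))
        if llr <= a:
            return 'p < p0', n
        if llr >= b:
            return 'p > p0', n
    return 'undecided', max_n

random.seed(1)
# True p = 0.6, threshold p0 = 0.5: the test should eventually accept p > p0.
verdict, n = sprt(lambda: random.random() < 0.6, p0=0.5)
```

Note how the guarantees degrade near the threshold: when the true \(p\) lies inside the indifference region, neither hypothesis is favoured and the expected sample size blows up, which is exactly the regime the three classes of techniques in the abstract treat differently.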
ABSTRACT: The infrastructures used in cities to supply power, water and gas are becoming ever more automated. As society depends critically on these cyber-physical infrastructures, their survivability assessment deserves more attention. In this overview, we first touch upon a taxonomy of survivability of cyber-physical infrastructures, before focusing on three classes of infrastructures (gas, water and electricity) and discussing recent modelling and evaluation approaches and challenges.
Full-text · Article · Jan 2015 · Electronic Notes in Theoretical Computer Science
ABSTRACT: Waste water treatment facilities clean sewage water from households and industry in several cleaning steps. Such facilities are dimensioned to accommodate a maximum intake. However, under very bad weather conditions or failures of system components, the system might not suffice to accommodate all waste water. This paper models a real waste water treatment facility, situated in the city of Enschede, The Netherlands, as a hybrid Petri net with a single general one-shot transition (HPnG), and analyses under which circumstances the existing infrastructure will overflow. This required extending the HPnG formalism to model dependencies both on continuous places and on the rates of continuous transitions. Using recent algorithms for model checking STL properties on HPnGs, the paper computes survivability measures that can be expressed using the path-based until operator. After computing measures for a wide range of parameters, we provide recommendations as to where the system can be improved to reduce the probability of overflow.
ABSTRACT: The next generation power grid (the “Smart Grid”) aims to minimize environmental impact, enhance markets, improve reliability and service, and reduce costs and improve efficiency of electricity distribution. One of the main protocol frameworks used in Smart Grids is IEC 61850. Together with the Manufacturing Message Specification (MMS) protocol, IEC 61850 ensures interoperability within the Smart Grid by standardizing the data models and services to support Smart Grid communications, most notably, smart metering and remote control. Long Term Evolution (LTE) is a fourth-generation (4G) cellular communications standard that provides high-capacity, low-latency, secure and reliable data-packet switching. This paper investigates whether LTE can be used in combination with IEC 61850 and MMS to support smart metering and remote control communications at a desirable quality of service level. Using ns-3 simulation models, it is shown that LTE can indeed satisfy the main IEC 61850 and MMS performance requirements for these two applications.
ABSTRACT: In this paper we propose a formal, model-checking-based procedure to evaluate the survivability of fluid critical infrastructures. To do so, we introduce the Stochastic Time Logic (STL), which allows one to precisely express intricate state-based and until-based properties for an important class of hybrid Petri nets. We present an efficient model-checking procedure which recursively traverses the underlying state space of the hybrid Petri net model and identifies those regions (subsets of the discrete-continuous state space) that satisfy STL formulae. A case study on the survivability of a water refinery and distribution plant shows the feasibility of our approach.
ABSTRACT: Over the last decade we have witnessed an increasing use of data processing in embedded systems. Where in the past the data processing was limited (if present at all) to the handling of a small number of "on-off control signals", more recently much more complex sensory data is being captured, processed and used to improve system performance and dependability. The advent of systems-of-systems aggravates the use of more and more data, for instance, by bringing together data from several independent sources, allowing, in principle, for even better performing systems. However, this ever stronger data-orientation brings along several challenges in system design, both technically and organisationally, and also forces manufacturers to think beyond their traditional field of expertise. In this short paper, I will address these new design challenges, through a number of examples. The paper finishes with concrete challenges for supporting tools and techniques for system design in this new context.
ABSTRACT: Since its introduction by John F. Meyer in 1980, various algorithms have been proposed to evaluate the performability distribution. In this paper we describe and compare five recently proposed algorithms for evaluating this distribution: Picard's method, a uniformisation-based method, a path-exploration method, a discretisation approach and a fully Markovian approximation. As a result of our study, we recommend against using Picard's method, due to numerical stability problems. Furthermore, the path-exploration method turns out to depend heavily on the branching structure of the Markov-reward model under study. For small models, the uniformisation method is preferable; however, its complexity makes it impractical for larger models. The discretisation method performs well, also for larger models; however, it does not easily apply in all cases. The recently proposed Markovian approximation works best, even for large models; however, no error bounds can be given for it.
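To give a flavour of the uniformisation-based method mentioned above, the sketch below computes transient state probabilities of a continuous-time Markov chain by uniformisation. This is a simplification: the full performability distribution additionally tracks accumulated reward, which this toy omits. The two-state failure/repair model and its rates are assumed values for illustration.

```python
import math
import numpy as np

def transient_probs(Q, pi0, t, eps=1e-9):
    """Transient CTMC state probabilities via uniformisation:
    pi(t) = sum_k PoissonPMF(Lambda*t; k) * pi0 * P^k, with P = I + Q/Lambda,
    truncating the sum once the Poisson weights have mass >= 1 - eps."""
    Lam = max(-Q[i, i] for i in range(Q.shape[0])) * 1.001  # uniformisation rate
    P = np.eye(Q.shape[0]) + Q / Lam                         # DTMC of the uniformised chain
    v = pi0.copy()
    result = np.zeros_like(pi0)
    weight = math.exp(-Lam * t)   # Poisson pmf at k = 0
    acc = weight
    result += weight * v
    k = 0
    while acc < 1 - eps:
        v = v @ P
        k += 1
        weight *= Lam * t / k     # recursive Poisson pmf update
        result += weight * v
        acc += weight
    return result

# Two-state repairable system: failure rate 1, repair rate 10; start in the up state.
Q = np.array([[-1.0, 1.0], [10.0, -10.0]])
pi = transient_probs(Q, np.array([1.0, 0.0]), t=5.0)
```

For this model the exact up-state probability is \(10/11 + (1/11)e^{-11t}\), so by t = 5 the result is essentially the steady-state value 10/11; the truncation point of the Poisson sum grows with Λt, which is one source of the complexity problems for larger models noted above.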
ABSTRACT: Systems of systems are becoming more prevalent and more critical to industry and society. Designing these systems is difficult; designing them to be dependable is an even greater challenge. However, there are ways to ease this process.
No preview · Article · Sep 2013 · IEEE Security and Privacy Magazine
ABSTRACT: This paper gives a bird's-eye view of the various ingredients that make up a modern, model-checking-based approach to performability evaluation: Markov reward models, temporal logics and continuous stochastic logic, model-checking algorithms, bisimulation and the handling of non-determinism. A short historical account as well as a large case study complete this picture. In this way, we show convincingly that the smart combination of performability evaluation with stochastic model-checking techniques, developed over the last decade, provides a powerful and unified method of performability evaluation, thereby combining the advantages of earlier approaches.
Full-text · Article · Aug 2013 · Mathematical Structures in Computer Science
ABSTRACT: Recently, hybrid Petri nets with a single general one-shot transition (HPnGs) have been introduced, together with an algorithm to analyse their underlying state space using a conditioning/deconditioning approach. In this paper we propose a considerably more efficient algorithm for analysing HPnGs. The proposed algorithm maps the underlying state space onto a plane, for all possible firing times s of the general transition and all possible system times t. The key idea is that, instead of dealing with infinitely many points in the t-s-plane, we can partition the state space into several regions, such that all points inside one region are associated with the same system state. To compute the probability of being in a specific system state at time τ, it suffices to find all regions intersecting the line t=τ and to decondition the firing time over the intersections. This partitioning results in a considerable speed-up and provides more accurate results. A scalable case study illustrates the efficiency gain with respect to the previous algorithm.
ABSTRACT: In this paper we describe the SoftArc approach, with which it is possible to model and analyse safety-critical embedded and distributed systems that consist of both hardware and software. We present the SoftArc modelling language, its syntax and its semantics; the semantics is defined in terms of stochastic reactive modules. We show how important measures of interest for probabilistic dependability analysis, such as availability, unavailability, and survivability, can be analysed. We demonstrate the feasibility of our approach by means of two case studies that involve hardware and software elements: first, an industrial case study from the automotive industry, analysing the non-volatile random access manager (NVRAM) from the AUTOSAR open system architecture; second, the survivability analysis of a simplified version of the Google replicated file system.
ABSTRACT: The Analytical Software Design (ASD) method of the company Verum has been designed to reduce the number of errors in embedded software. However, it does not take performance issues into account, which can also have a major impact on the duration of software development. This paper presents a discrete-event simulator for the performance evaluation of ASD-structured software as well as a compositional numerical analysis method using fixed-point iteration and phase-type distribution fitting. Whereas the numerical analysis is highly accurate for non-interfering tasks, its accuracy degrades when tasks run in opposite directions through interdependent software blocks and the utilization increases. A thorough validation identifies the underlying problems when analyzing the performance of embedded software.
ABSTRACT: Inspired by applications in the context of stochastic model checking, we are interested in using simulation to estimate the probability of reaching a specific state in a Markov chain after a large amount of time τ has passed. Since this is a rare event, we apply importance sampling. We derive approximate expressions for the sojourn times on a given path in a Markov chain, conditional on their sum exceeding τ, and use these expressions to construct a change of measure. Numerical examples show that this change of measure performs very well, leading to high-precision estimates in short simulation times.
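A minimal illustration of the idea above, under simplifying assumptions: estimate the probability that the total sojourn time along one fixed path of exponential stages exceeds τ, drawing the sojourn times from exponentially tilted distributions and reweighting each sample by the likelihood ratio. The path length, rates and tilting rule below are assumptions for this sketch, not the paper's change of measure.

```python
import math
import random

def is_estimate(n=10, lam=1.0, tau=30.0, runs=20_000, seed=0):
    """Importance-sampling estimate of P(S > tau), where S is the total
    sojourn time along a path of n exponential(lam) stages.
    Sojourn times are drawn with tilted rate lam_t = n/tau (so E[S] = tau,
    making the rare event typical), and each hit is reweighted by the
    likelihood ratio (lam/lam_t)^n * exp(-(lam - lam_t) * S)."""
    rng = random.Random(seed)
    lam_t = n / tau
    log_const = n * math.log(lam / lam_t)
    total = 0.0
    for _ in range(runs):
        s = sum(rng.expovariate(lam_t) for _ in range(n))
        if s > tau:
            total += math.exp(log_const - (lam - lam_t) * s)
    return total / runs

est = is_estimate()
# For equal rates, P(Erlang(n, 1) > tau) = P(Poisson(tau) <= n-1), which
# gives an exact reference value (about 7e-6 here) to check the estimate.
exact = sum(math.exp(-30) * 30**k / math.factorial(k) for k in range(10))
```

With 20,000 tilted runs the relative error is a few percent; crude Monte Carlo at the same budget would typically see zero hits of an event of probability ~7e-6.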
ABSTRACT: Recently the mean-field method has been adopted for analysing systems consisting of a large number of interacting objects in computer science, biology, chemistry, etc. It allows for a quick and accurate analysis of such systems, while avoiding the state-space explosion problem. So far, the method has primarily been used for performance evaluation. In this paper, we use the mean-field method for model-checking. We define and motivate a logic MF-CSL for describing properties of systems composed of many identical interacting objects. The proposed logic allows describing both properties of the overall system and of a random individual object. Algorithms to check the satisfaction relation for all MF-CSL operators are proposed. Furthermore, we explain how the set of all time instances that fulfill a given MF-CSL formula for a certain distribution of objects can be computed.
ABSTRACT: Peer-to-peer botnets, as exemplified by the Storm Worm and Stuxnet, are a relatively new threat to security on the internet: infected computers automatically search for other computers to infect, thus spreading the infection rapidly. In a recent paper, such botnets have been modelled using Stochastic Activity Networks, allowing the use of discrete-event simulation to judge strategies for combating their spread. In the present paper, we develop a mean-field model for analysing botnet behaviour and compare it with simulations obtained from the Möbius tool. We show that the mean-field approach provides accurate and orders-of-magnitude faster computation, thus providing very useful insight into spread characteristics and the effectiveness of countermeasures.
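As a toy illustration of the mean-field idea (not the paper's Stochastic Activity Network model), an epidemic-style spread model reduces to a single deterministic ODE for the infected fraction, which can be integrated in microseconds regardless of population size. The infection and disinfection rates below are assumed values.

```python
def mean_field_sis(beta=0.3, gamma=0.1, x0=0.01, t_end=100.0, dt=0.01):
    """Mean-field (deterministic) approximation of SIS-style spread:
    x(t) is the infected fraction of a very large population, evolving as
        dx/dt = beta * x * (1 - x) - gamma * x,
    where beta is the pairwise infection rate and gamma the cleanup rate.
    Simple forward-Euler integration; the endemic fixed point is
    1 - gamma/beta whenever beta > gamma."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (beta * x * (1 - x) - gamma * x)
    return x

x_inf = mean_field_sis()
```

A discrete-event simulation of the same model must track every node and average over many runs, which is where the orders-of-magnitude speed difference mentioned in the abstract comes from; the mean-field curve is exact in the limit of infinitely many nodes.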
ABSTRACT: In this short paper I address the question of whether the methods and techniques we develop are applied well in industrial practice. To address this question, I make a few observations from the academic field, as well as from industrial practice. This is followed by a concise analysis of the cause of the perceived gap between the academic state of the art and industrial practice. I conclude with some opportunities for improvement.
ABSTRACT: This paper presents an adaptive resource control mechanism for multihop ad-hoc network systems, which avoids bottleneck problems caused by the node-fairness property of IEEE 802.11. In our proposal, the feedback information from the downstream bottleneck, derived from Request-To-Send (RTS) and Clear-To-Send (CTS) messages is utilized to control the Transmission Opportunity (TXOP) limit of the upstream nodes for traffic balancing. The proposed mechanism is modelled control-theoretically using the 20-sim control system modelling tool, which has the advantage that results can be obtained in a fast and efficient way. Compared to systems without resource control, a higher throughput and lower delay can be achieved under a variety of traffic load conditions as well as in dynamic network environments.
ABSTRACT: Gossip protocols are designed to operate in very large, decentralised networks. A node in such a network bases its decision to interact (gossip) with another node on its partial view of the global system. Because of the size of these networks, analysis of gossip protocols is mostly done using simulations, but these tend to be expensive in computation time and memory consumption. We employ mean-field analysis techniques for the evaluation of gossip protocols. Nodes in the network are represented by small, identical stochastic processes. Joining all nodes would result in an enormous stochastic process. If the number of nodes goes to infinity, however, mean-field analysis allows us to replace this intractably large stochastic process by a small deterministic process. This process approximates the behaviour of very large gossip networks, and can be evaluated using simple matrix-vector multiplications.
Full-text · Article · Feb 2011 · Performance Evaluation
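The closing remark of the abstract above, that the deterministic mean-field process can be evaluated with simple matrix-vector multiplications, can be sketched as follows. The 3-state node model and its transition probabilities are invented for illustration; the essential feature is that the per-node transition matrix K depends on the current occupancy vector mu, because a node's behaviour depends on the state of a randomly chosen gossip partner.

```python
import numpy as np

def mean_field_gossip(mu0, steps):
    """Mean-field iteration for a toy 3-state gossip node model
    (0 = unaware, 1 = holds the update, 2 = stopped gossiping).
    mu[i] is the fraction of nodes in state i; one synchronous round is
    mu' = mu @ K(mu), where K depends on mu because an unaware node only
    learns the update if its random partner happens to hold it."""
    mu = np.array(mu0, dtype=float)
    for _ in range(steps):
        p_meet = mu[1]  # probability the random partner holds the update
        K = np.array([
            [1 - p_meet, p_meet, 0.0],  # unaware -> informed on meeting a holder
            [0.0,        0.9,    0.1],  # a holder stops gossiping w.p. 0.1
            [0.0,        0.0,    1.0],  # stopped: absorbing
        ])
        mu = mu @ K
    return mu

# Start with 1% of nodes holding the update.
mu = mean_field_gossip([0.99, 0.01, 0.0], steps=200)
```

Each round costs one small matrix-vector product, independent of the number of nodes, whereas a discrete-event simulation of the same protocol scales with network size and must be averaged over many runs.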