Article

Resilient Monitoring in Self-Adaptive Systems through Behavioral Parameter Estimation


Abstract

Cyber-physical systems need self-adaptation as a means to deal autonomously with changes. For runtime adaptation, a cyber-physical system repeatedly monitors the environment to detect possible changes. Faults in the monitoring devices, due to the dynamic and uncertain environment, are very likely, necessitating resilient monitoring. In this paper, we discuss imperfect monitoring in self-adaptive systems and propose a model-driven methodology that represents the self-adaptive system as a parametric Markov decision process, where changes are reflected by a set of model parameters. A fault in the monitoring device may cause some parameter valuations to be missed. We propose a comprehensive framework in which a pattern-matching component estimates parameters from the behavioral patterns of the system. The proposed method simulates the current behavior of the system using random-walk patterns and matches it against a history of patterns to estimate the omitted data. The results show an accuracy of 94% under imperfect monitoring. In addition, we elaborate a set of theoretical proofs to support error analysis and determine an upper bound on the error that guarantees an accurate decision-making process. We establish a logical connection between the error and the accuracy of decisions, and introduce a tolerable-error metric to guarantee the accuracy of decisions under estimation.
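As a rough, minimal sketch of the pattern-matching idea summarized in the abstract (random-walk traces compared against a stored history to recover a missed parameter), the following Python snippet may help. It is not the authors' implementation; names such as simulate_random_walk and history_patterns, the toy transition matrix, and the mismatch-rate distance are illustrative assumptions.

```python
import numpy as np

def simulate_random_walk(chain, start, length, rng):
    """Simulate one random-walk trace over a Markov chain (row-stochastic matrix)."""
    trace, state = [start], start
    for _ in range(length - 1):
        state = rng.choice(len(chain), p=chain[state])
        trace.append(state)
    return np.array(trace)

def estimate_missing_parameter(partial_trace, history_patterns):
    """Match the observed (partial) behaviour against stored patterns and
    return the parameter value attached to the closest pattern."""
    best_value, best_dist = None, np.inf
    for pattern, param_value in history_patterns:
        k = min(len(pattern), len(partial_trace))
        dist = np.mean(pattern[:k] != partial_trace[:k])  # mismatch rate as a simple distance
        if dist < best_dist:
            best_dist, best_value = dist, param_value
    return best_value, best_dist

# Toy usage: a 3-state environment chain and a history of (pattern, parameter) pairs.
rng = np.random.default_rng(0)
chain = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.5, 0.2],
                  [0.1, 0.3, 0.6]])
history = [(simulate_random_walk(chain, 0, 24, rng), p) for p in (0.2, 0.5, 0.8)]
observed = simulate_random_walk(chain, 0, 12, rng)   # current behaviour, monitoring cut short
print(estimate_missing_parameter(observed, history))
```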


... A set of parameters serves as an indication of system changes, and a parametric MDP (pMDP) [16] is used to capture the occurring changes. This research employs an energy-harvesting case study that utilizes a MAPE-K loop for adaptation [17]. MAPE-K stands for monitoring, analyzing, planning, executing, and knowledge, and we concentrate on exploring the different phases of the MAPE-K loop. ...

... A self-adaptive solar energy harvesting system is the case study; like many other self-adaptive systems, it consists of environmental and local parts. As indicated in Figure 1-(c), the environment model is in charge of capturing how much energy can be attained from the environmental domain, such that each state specifies the expected harvested energy on an hourly basis [17]. As shown in Figures 1-(a) and (b), a sensor network with a central solar battery is the local part responsible for maintaining the harvested energy. ...
... Then, we use a parametric MDP (pMDP) [8] for capturing the changes. In this research, we use a case study on an energy-harvesting system that uses a MAPE-K loop for adaptation purposes [9]. MAPE-K stands for monitoring, analyzing, planning, executing, and knowledge. ...
... The case study is a self-adaptive solar energy harvesting system which consists of environmental and local parts. The environment model is responsible for capturing how much energy can be harvested, as shown in Figure 1-(c), in which each state determines the expected harvested energy on an hourly basis [9]. The local part is a sensor network with a central battery to store the harvested energy, shown in Figures 1-(a) and (b), respectively. ...
Preprint
Full-text available
Autonomous systems need to decide efficiently how to react to changes at runtime. Model-driven approaches make it theoretically possible to rigorously analyze the environment and the system together; however, the model size and timing limitations are two significant obstacles to such an autonomous decision-making process. To tackle this issue, an incremental approximation technique can be used to partition the model and verify a partition only if it is affected by the change. This paper proposes a policy-based analysis approach that finds the best partitioning policy among a set of available policies based on two proposed metrics, namely Balancing and Variation. The metrics quantitatively evaluate the components generated by the incremental approximation scheme according to their size and frequency. We investigate the validity of the approach both theoretically and experimentally via a case study on energy harvesting systems. The results confirm the effectiveness of the proposed approach.
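The abstract does not give formulas for Balancing and Variation, so the snippet below is only an illustrative placeholder under the assumption that Balancing rewards evenly sized components and Variation penalizes the expected re-verification work weighted by change frequency; the function names, the combined score, and the toy policies are assumptions, not the paper's definitions.

```python
import numpy as np

def balancing(sizes):
    """Illustrative placeholder: 1 means perfectly even component sizes."""
    sizes = np.asarray(sizes, dtype=float)
    return 1.0 - sizes.std() / sizes.mean()

def variation(sizes, change_freqs):
    """Illustrative placeholder: expected fraction of the model re-verified per change."""
    sizes = np.asarray(sizes, dtype=float)
    freqs = np.asarray(change_freqs, dtype=float)
    return float((freqs / freqs.sum()) @ sizes) / sizes.sum()

def pick_policy(policies):
    """policies: name -> (component_sizes, change_frequencies).
    Prefer high balancing and low variation."""
    return max(policies,
               key=lambda name: balancing(policies[name][0]) - variation(*policies[name]))

policies = {
    "by_hour":  ([120, 110, 130], [0.5, 0.3, 0.2]),
    "by_level": ([300, 40, 20],   [0.1, 0.6, 0.3]),
}
print(pick_policy(policies))
```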
Thesis
Full-text available
Self-adaptive systems provide autonomous decision-making for handling the changes that affect the functionalities of cyber-physical systems. A self-adaptive system repeatedly monitors and analyzes the local system and the environment and makes significant decisions regarding fulfilling the system's functional optimization and safety requirements. Such a decision must be made before a deadline, and autonomy helps the system meet the timing constraints. If the model of the cyber-physical system is available, it can be used for verification against specific formal properties to reveal whether the system complies with them or not. However, due to the dynamicity of such systems, the system model needs to be reconstructed and reverified at runtime. As the model of a self-adaptive system is a composition of the local system and environment models, the composed model is relatively large. Therefore, we need efficient and scalable methods to verify the model at runtime in resource-constrained systems. Since the physical environment and the cyber part of the system usually have stochastic natures, each behavior is modeled through probabilistic parameters about which we have some predictions. If the system observes or predicts changes in the behavior of the environment or the local system, the corresponding parameter(s) are updated. This research focuses on the problem of runtime model size reduction in self-adaptive systems. As a solution, the model is partitioned into sub-models that can be verified/approximated independently. At runtime, if a change occurs, only the affected sub-models are subject to re-verification/re-approximation. Finally, with the help of an aggregation algorithm, the partial results from the sub-models are composed, and the verification result for the whole model is calculated. In some situations, updating the model may cause delays in decision-making. To meet the decision-making deadlines, the self-adaptive system must decide based on an incomplete model when a few parameters have been missed. We do this by conducting a set of behavioral simulations by random walk and matching the system's current behavior with its previous behavioral patterns. Thus, the system is equipped with a runtime parameter estimation method that respects a certain upper bound on the error. This thesis proposes a new metric for determining an upper bound on the error caused by applying the approximation technique. The metric is the basis for two proposed theorems that guarantee upper bounds on the error and the accuracy of runtime verification. The evaluation results confirm that the proposed approximation framework reduces the model's size and helps decision-making within the time restrictions. The framework keeps the accuracy of the parameter estimations and verification results above 96.5% and 95%, respectively, while fully guaranteeing the system's safety.
Article
Full-text available
Recent advances in sensor technologies and data acquisition systems have opened up the era of big data in the field of structural health monitoring (SHM). Data-driven methods based on statistical pattern recognition provide outstanding opportunities to implement a long-term SHM strategy by exploiting measured vibration data. However, their main limitation, due to big data or high-dimensional features, is linked to the complex and time-consuming procedures for feature extraction and/or statistical decision-making. To cope with this issue, in this article we propose a strategy based on autoregressive moving average (ARMA) modeling for feature extraction, and on an innovative hybrid divergence-based method for feature classification. Data from a cable-stayed bridge are used to assess the effectiveness and efficiency of the proposed method. The results show that the proposed hybrid divergence-based method, in conjunction with ARMA modeling, succeeds in detecting damage in cases strongly characterized by big data.
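For readers unfamiliar with ARMA-based features, the sketch below shows the general idea: fit an ARMA model per vibration record and use its coefficients as the feature vector. It is a generic illustration using statsmodels, not the article's pipeline; the orders p = 4, q = 2 and the synthetic signals are arbitrary assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def arma_features(signal, p=4, q=2):
    """Fit an ARMA(p, q) model (ARIMA with d = 0) to one vibration record
    and return its AR/MA coefficients as the damage-sensitive feature vector."""
    result = ARIMA(signal, order=(p, 0, q)).fit()
    return np.concatenate([result.arparams, result.maparams])

# Toy usage on a synthetic "healthy" record and a frequency-shifted "damaged" one.
rng = np.random.default_rng(1)
healthy = np.sin(np.linspace(0, 40, 600)) + 0.1 * rng.standard_normal(600)
damaged = np.sin(1.1 * np.linspace(0, 40, 600)) + 0.1 * rng.standard_normal(600)
f_h, f_d = arma_features(healthy), arma_features(damaged)
print(np.linalg.norm(f_h - f_d))  # a divergence/distance on features then drives classification
```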
Conference Paper
Full-text available
The ubiquitous and perpetual nature of cyber-physical systems (CPSs) has made them mostly battery-operated in many applications. The batteries need recharging via environmental energy sources. Solar energy harvesting is a conventional source for CPSs, but it is not perfectly predictable due to environmental changes. Thus, the system needs to adaptively control its consumption with respect to the energy harvesting. In this paper, we propose a model-driven approach for analyzing self-adaptive solar energy harvesting systems; it uses a feedback control loop to monitor and analyze the behavior of the system and the environment, and decides which adaptation action must be triggered in response to the changes. We elaborate a data-driven method to predict the incoming changes, especially those from the environment. The method uses the energy harvesting data for prediction purposes and models the environment as a Markov chain. We also harden the proposed system against runtime monitoring faults: the system is able to verify an incomplete model, i.e., one in which some data is missing. To this aim, we propose a pattern-matching system that simulates the current behavior of the system using random walks and matches it with the history to estimate the omitted data. The results show an accuracy of at least 96% when decisions are made under imperfect monitoring.
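A minimal sketch of how an environment model might be estimated from hourly harvesting records (discretise energy into levels and count transitions) is given below. It is a generic illustration, not the paper's actual construction; the quantile-based discretisation, n_levels = 4, and the simulated readings are assumptions.

```python
import numpy as np

def environment_chain(hourly_energy, n_levels=4):
    """Discretise hourly harvested energy into levels and estimate the
    transition matrix of the environment model from the observed sequence."""
    edges = np.quantile(hourly_energy, np.linspace(0, 1, n_levels + 1)[1:-1])
    states = np.digitize(hourly_energy, edges)            # levels 0 .. n_levels-1
    counts = np.zeros((n_levels, n_levels))
    for s, t in zip(states[:-1], states[1:]):
        counts[s, t] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows that were never visited fall back to a uniform distribution.
    return np.divide(counts, row_sums,
                     out=np.full_like(counts, 1.0 / n_levels),
                     where=row_sums > 0)

# Toy usage: two simulated days of hourly solar readings.
rng = np.random.default_rng(2)
hours = np.arange(48) % 24
energy = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None) + 0.05 * rng.random(48)
print(environment_chain(energy).round(2))
```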
Article
Full-text available
Cyber-physical systems (CPS) are expected to continuously monitor their physical components and autonomously calculate appropriate runtime reactions to deal with uncertain environmental conditions. Self-adaptation, as a promising concept for fulfilling a set of provable rules, largely relies on runtime quantitative verification (RQV). Even when only a few probabilistic variables are taken into account to represent the uncertainties, the system configuration space becomes extremely large. Thus, efficient approaches are needed to reduce the model state space, preferably with certain bounds on the approximation error. In this paper, we propose an approximation framework to efficiently approximate the entire model of a self-adaptive system. We split the large model into strongly connected components (SCCs), apply the approximation algorithm separately on each SCC, and integrate the results using a centralized algorithm. Because the probabilistic variables change over time, static models cannot be used; to address this, we deploy a parametric Markov decision process. To apply the approximation to the model, the notion of ε-approximate probabilistic bisimulation is utilized, which introduces the approximation level ε. We show that our approximation framework offers a certain error bound on each level of approximation. We then argue that the approximation framework is appropriate for the decision-making process of self-adaptive systems whose models are relatively large. The results reveal that we can achieve up to 50% size reduction in the approximate model while maintaining an accuracy of about 95%. In addition, we discuss the trade-off between the efficiency and accuracy of our approximation framework.
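To make the SCC splitting concrete, a small sketch using networkx is given below. It only shows the graph-level decomposition and a processing order, not the ε-approximate bisimulation or the aggregation algorithm; the toy edge list is an assumption.

```python
import networkx as nx

def scc_partition(transitions):
    """Split a model's underlying graph into strongly connected components,
    which can then be approximated/verified independently, and return a
    topological order of the condensation for aggregating partial results."""
    g = nx.DiGraph(transitions)                      # edges: (source_state, target_state)
    sccs = list(nx.strongly_connected_components(g))
    condensation = nx.condensation(g, scc=sccs)      # DAG over the SCCs
    return sccs, list(nx.topological_sort(condensation))

# Toy model: two non-trivial SCCs {0,1,2} and {3,4} plus an absorbing state 5.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 3), (4, 5), (5, 5)]
components, order = scc_partition(edges)
print(components)   # the SCCs (order depends on the library)
print(order)        # process SCC indices along this topological order
```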
Article
Full-text available
In recent years, the increase in cyber threats has raised many concerns about security and privacy in the digital world. However, new attack methods are often limited to a few core techniques. In this paper, in order to detect new threat patterns, we use an attack graph structure to model unprecedented network traffic. The graph for the unknown attack is matched against a pre-known threat database, which contains attack graphs related to each known threat. The main challenge is to associate unknown traffic with a family of known threats. For this, we utilize random walks and the pattern theorem, applying them in a set of proposed algorithms for detecting new generations of malicious traffic. Under the assumption of having a proper threat database, we argue that for each unknown threat belonging to a family of threats, it is possible to find at least one matching pattern with a high matching rate and sensitivity.
Article
Full-text available
Condition monitoring can reduce machine breakdown losses, increase productivity and operation safety, and therefore deliver significant benefits to many industries. The emergence of wireless sensor networks (WSNs) with smart processing ability plays an ever-growing role in online condition monitoring of machines. WSNs are cost-effective networking systems for machine condition monitoring. They avoid cable usage and ease system deployment in industry, which leads to significant savings. Powering the nodes is one of the major challenges for a true WSN system, especially when positioned at inaccessible or dangerous locations and in harsh environments. Promising energy harvesting technologies have attracted the attention of engineers because they convert microwatt- or milliwatt-level power from the environment, enabling maintenance-free machine condition monitoring systems with WSNs. The motivation of this review is to investigate the energy sources, stimulate the application of energy-harvesting-based WSNs, and evaluate the improvement of energy harvesting systems for mechanical condition monitoring. This paper overviews the principles of a number of energy harvesting technologies applicable to industrial machines by investigating the power consumption of WSNs and the potential energy sources in mechanical systems. Many models and prototypes with different features are reviewed, especially in the mechanical field. Energy harvesting technologies are evaluated for further development according to a comparison of their advantages and disadvantages. Finally, the challenges and potential future research directions of energy harvesting systems powering WSNs for machine condition monitoring are discussed.
Conference Paper
Full-text available
The heterogeneity of cyber-physical systems (CPS) and the diverse situations they may face, along with environmental hazards, raise the need for self-stabilization. The uncertain nature of CPS necessitates a probabilistic view for analyzing the system stabilization time, which is a highly critical metric in distributed/time-sensitive applications. Calculating the worst-case expected stabilization time and possible improvements helps produce safer designs of CPS applications. In this paper, a mutual exclusion algorithm based on the PIF (Propagation of Information with Feedback) self-stabilizing algorithm in a synchronous environment is selected as a case study. Using probabilistic analysis, we present a set of guidelines for utilizing this algorithm in time-sensitive applications. We have also utilized an approximation method for improving the scalability of our probabilistic analysis and conducted a set of experiments to show how this analysis can be used in the design of topologies with the goal of achieving an optimal worst-case expected stabilization time. Our results show that, using this approach, we can significantly improve the worst-case expected stabilization time.
Article
Full-text available
Activity recognition systems are used in rehabilitation centres to monitor activities of daily living in order to assess the daily functional status of the elderly. A low-cost, non-invasive, and continuous wearable activity monitoring system can be realised with one or multiple wearable sensor nodes forming a self-managing wireless medical body area network. Several key opposing challenges arise in developing wearable activity recognition systems, namely sensor node lifetime and detection accuracy. This paper investigates existing solutions that address these challenges. We propose a feedback controller algorithm to dynamically adapt the sampling rate and maintain the trade-off between energy efficiency and accuracy. The number of samples and transmitted data packets is the main source of energy consumption and also impacts the system accuracy. To validate the accuracy of our proposed algorithm, a public wearable activity recognition dataset is constructed. The dataset is collected from 20 healthy subjects over 7 activity types, excluding the transition states, using up to four accelerometer sensors connected with IEEE 802.15.4 enabled nodes in our setup. Our proposed feedback controller algorithm nearly doubles the activity recognition system's lifetime. This, in turn, improves the users' quality of experience by reducing the demand for battery replacements while the detection accuracy is maintained at the same level.
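A minimal sketch of such a sampling-rate feedback loop is shown below: one proportional step per recognition window, raising the rate when accuracy drops and lowering it when there is headroom. The controller structure, gain, and rate limits are illustrative assumptions, not the paper's algorithm.

```python
def adapt_sampling_rate(rate_hz, accuracy, target_accuracy=0.9,
                        gain=20.0, min_rate=5.0, max_rate=100.0):
    """Proportional feedback step: raise the sampling rate when recognition
    accuracy falls below the target, lower it (to save energy) otherwise."""
    error = target_accuracy - accuracy
    new_rate = rate_hz + gain * error
    return max(min_rate, min(max_rate, new_rate))

# Toy usage: accuracy estimates from consecutive recognition windows.
rate = 50.0
for acc in (0.95, 0.93, 0.88, 0.85, 0.92):
    rate = adapt_sampling_rate(rate, acc)
    print(f"accuracy={acc:.2f} -> sampling rate={rate:.1f} Hz")
```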
Conference Paper
Full-text available
We introduce FACT, a probabilistic model checker that computes confidence intervals for the evaluated properties of Markov chains with unknown transition probabilities when observations of these transitions are available. FACT is unaffected by the unquantified estimation errors generated by the use of point probability estimates, a common practice that limits the applicability of quantitative verification. As such, FACT can prevent invalid decisions in the construction and analysis of systems, and extends the applicability of quantitative verification to domains in which unknown estimation errors are unacceptable.
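To illustrate the first ingredient of such an analysis, a confidence interval for a single unknown transition probability estimated from observed counts, the sketch below uses the standard Clopper-Pearson construction. This is a generic statistical example; FACT's actual method of propagating interval estimates through the Markov chain is more involved and is not shown here.

```python
from scipy import stats

def transition_confidence_interval(successes, trials, confidence=0.95):
    """Clopper-Pearson interval for one unknown transition probability,
    estimated from `successes` observed transitions out of `trials` visits."""
    alpha = 1.0 - confidence
    lo = stats.beta.ppf(alpha / 2, successes, trials - successes + 1) if successes > 0 else 0.0
    hi = stats.beta.ppf(1 - alpha / 2, successes + 1, trials - successes) if successes < trials else 1.0
    return lo, hi

# Toy usage: the transition was taken 37 times in 50 visits to the state.
print(transition_confidence_interval(37, 50))
```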
Conference Paper
Full-text available
Solar irradiance prediction is a major issue in energy-harvesting-enabled WSNs. In this paper, we use Markov chains of increasing order to propose a new model, referred to as ASIM, for predicting solar irradiance patterns. The cornerstone of the proposed model is the determination of the state dependencies of the underlying Markov chains. The ASIM model is derived from a comprehensive solar radiation data set of four different locations around the globe. Our trace-driven performance evaluation reveals that the ASIM model predicts the solar irradiance pattern very accurately (normalized RMSE as low as 0.1) as the order of the underlying Markov model increases. We also present a mechanism to reduce the complexity of the Markov chain and make the model more practical in wireless sensor networks.
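A toy sketch of higher-order Markov prediction over discretised irradiance levels follows (the state is the tuple of the last k levels). This is a generic illustration of the idea, not the ASIM model itself; the trace, the three levels, and order = 2 are assumptions.

```python
from collections import Counter, defaultdict

def fit_higher_order_chain(states, order=2):
    """Estimate P(next | last `order` states) from a discretised irradiance trace."""
    counts = defaultdict(Counter)
    for i in range(order, len(states)):
        counts[tuple(states[i - order:i])][states[i]] += 1
    return counts

def predict_next(counts, recent):
    """Most likely next irradiance level given the most recent `order` levels."""
    dist = counts.get(tuple(recent))
    return dist.most_common(1)[0][0] if dist else None

# Toy usage on a short trace of discretised irradiance levels (0 = low .. 2 = high).
trace = [0, 0, 1, 2, 2, 2, 1, 0, 0, 1, 2, 2, 1, 0]
model = fit_higher_order_chain(trace, order=2)
print(predict_next(model, trace[-2:]))
```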
Article
Full-text available
Self-adaptive systems used in safety-critical and business-critical applications must continue to comply with strict non-functional requirements while evolving in order to adapt to changing workloads, environments, and goals. Runtime quantitative verification (RQV) has been proposed as an effective means of enhancing self-adaptive systems with this capability. However, RQV frequently fails to provide the fast response times and low computation overheads required by real-world self-adaptive systems. In this paper, we investigate how three techniques, namely caching, lookahead and nearly-optimal reconfiguration, and combinations thereof, can help address this limitation. Extensive experiments in a case study involving the RQV-driven self-adaptation of an unmanned underwater vehicle indicate that these techniques can lead to significant reductions in RQV response times and computation overheads.
Article
Full-text available
Energy harvesting from the surroundings is a promising solution to perpetually power up wireless sensor communications. This paper presents a data-driven approach to finding optimal transmission policies for a solar-powered sensor node that attempts to maximize net bit rates by adapting its transmission parameters (power levels and modulation types) to the changes of channel fading and battery recharge. We formulate this problem in a discounted Markov decision process (MDP) framework, whereby the energy harvesting process is stochastically quantized into several representative solar states with distinct energy arrivals and is entirely driven by historical data records at a sensor node. With the observed solar irradiance at each time epoch, a mixed strategy is developed to compute the belief information of the underlying solar states for the choice of transmission parameters. In addition, a theoretical analysis is conducted for a simple on-off policy, in which a predetermined transmission parameter is utilized whenever a sensor node is active. We prove that such an optimal policy has a threshold structure with respect to battery states and evaluate the performance of an energy harvesting node by analyzing the expected net bit rate. The design framework is exemplified with real solar data records, and the results are useful in characterizing the interplay between energy harvesting and expenditure under various system configurations. Computer simulations show that the proposed policies significantly outperform other schemes with or without the knowledge of short-term energy harvesting and channel fading patterns.
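For context, a compact value-iteration sketch for a discounted MDP of this flavour is shown below. The toy battery states, transition matrices, and rewards are assumptions for illustration and do not reproduce the paper's formulation.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P[a][s, s'] are transition probabilities, R[a][s] the immediate rewards
    (e.g. net bit rate); returns the optimal values and a greedy policy."""
    n_actions, n_states = len(P), P[0].shape[0]
    V = np.zeros(n_states)
    while True:
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(n_actions)])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy setup: 3 battery states x 2 actions (0 = idle, 1 = transmit).
P = [np.array([[0.2, 0.6, 0.2], [0.1, 0.5, 0.4], [0.0, 0.3, 0.7]]),   # idle: battery recharges
     np.array([[0.9, 0.1, 0.0], [0.5, 0.4, 0.1], [0.2, 0.5, 0.3]])]   # transmit: battery drains
R = [np.zeros(3), np.array([0.0, 1.0, 2.0])]                          # reward only when transmitting
V, policy = value_iteration(P, R)
print(policy)   # e.g. transmit only when the battery state is high enough
```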
Article
Full-text available
Human activity increasingly relies on software being able to make self-adaptation decisions. The only way to achieve dependable software adaptation is to unite autonomic computing and mathematically based modeling and analysis techniques. Quantitative verification and model checking must also be used at runtime to predict and identify requirement violations, to plan the adaptation steps necessary to prevent or recover from violations, and to obtain irrefutable proof that the reconfigured software complies with its requirements. In developing a machine, software engineers must first derive a specification from the requirements and so must understand the relevant assumptions to be made about the environment in which the machine is expected to work. Domain assumptions play a fundamental role in building systems. Quantitative verification is a mathematically based technique for analyzing the correctness, performance, and reliability of systems exhibiting stochastic behavior.
Conference Paper
Full-text available
Quantitative verification techniques provide an effective means of computing performance and reliability properties for a wide range of systems. However, the computation required can be expensive, particularly if it has to be performed multiple times, for example to determine optimal system parameters. We present efficient incremental techniques for quantitative verification of Markov decision processes, which are able to re-use results from previous verification runs, based on a decomposition of the model into its strongly connected components (SCCs). We also show how this SCC-based approach can be further optimised to improve verification speed and how it can be combined with symbolic data structures to offer better scalability. We illustrate the effectiveness of the approach on a selection of large case studies.
Conference Paper
Full-text available
This paper describes a major new release of the PRISM probabilistic model checker, adding, in particular, quantitative verification of (priced) probabilistic timed automata. These automata model systems that exhibit probabilistic, nondeterministic, and real-time characteristics. In many application domains, all three aspects are essential; this includes, for example, embedded controllers in automotive or avionic systems, wireless communication protocols such as Bluetooth or Zigbee, and randomised security protocols. PRISM, which is open-source, also contains several new components that are of independent use. These include: an extensible toolkit for building, verifying and refining abstractions of probabilistic models; an explicit-state probabilistic model checking library; a discrete-event simulation engine for statistical model checking; support for generation of optimal adversaries/strategies; and a benchmark suite.
Conference Paper
Full-text available
The goal of this roadmap paper is to summarize the state-of-the-art and to identify critical challenges for the systematic software engineering of self-adaptive systems. The paper is partitioned into four parts, one for each of the identified essential views of self-adaptation: modelling dimensions, requirements, engineering, and assurances. For each view, we present the state-of-the-art and the challenges that our community must address. This roadmap paper is a result of the Dagstuhl Seminar 08031 on “Software Engineering for Self-Adaptive Systems,” which took place in January 2008.
Article
A wireless sensor network (WSN) comprises a collection of sensor nodes employed to monitor and record the status of the physical environment and organize the gathered data at a central location. This paper presents a deep-learning-based distributed data mining (DDM) model to achieve energy efficiency and optimal load balancing at the fusion centre of a WSN. The presented DDM model includes a recurrent neural network (RNN) based long short-term memory (LSTM), called RNN-LSTM, which divides the network into various layers and places them onto the sensor nodes. The proposed model reduces the overhead at the fusion centre along with a reduction in the number of data transmissions. The presented RNN-LSTM model is tested in a wide set of experiments with varying numbers of hidden-layer nodes and signalling intervals. At the same time, the amount of energy needed to transmit data by the RNN-LSTM model is considerably lower than the energy needed to transmit the actual data. The simulation results indicate that the RNN-LSTM reduces the signalling overhead and average delay and maximizes the overall throughput compared to other methods. Under a signalling interval of 240 ms, the RNN-LSTM achieves a minimum average delay of 190 ms, whereas the OSPF and DNN models show an average delay of 230 ms each.
Article
The rapid expansion of distributed energy resources has led to increasingly complex systems with numerous power converters. Accurate converter loss prediction in large grids and microgrids is essential for financial and reliability evaluation. Existing system-level analysis focuses on distribution losses and oversimplifies converter losses by assuming fixed efficiency. However, converter losses are highly variable under different operating conditions. Moreover, commercially-available multi-domain simulation tools are too slow to be applied to system-level analysis. In order to provide computationally simple loss prediction under all operating conditions, the Rapid Loss Estimation equation is proposed. First, the real operating conditions of the converter are determined for the intended application. Then, accurate loss information is extracted from detailed converter behavior in multi-domain simulations. Finally, the Rapid Loss Estimation equation is obtained: a parametric equation which is fast enough for system-level simulation while capturing the converter's complexity at different operating conditions. A DC microgrid with three different converters, one each for solar generation, electric vehicle charging stations and battery storage, is considered to highlight the benefits of the proposed loss estimation tool.
Article
Most current self-adaptive systems (SASs) rely on static feedback loops, such as IBM's MAPE-K loop, for managing their adaptation process. Static loops do not allow SASs to react to runtime events such as changing adaptation requirements or faults in MAPE-K elements. To address this issue, some solutions have emerged for manually or automatically performing changes on SASs' feedback loops. However, from the software engineering perspective, most of these proposals cannot be reused or extended by other SASs. In this paper, we present HAFLoop (Highly Adaptive Feedback control Loop), a generic architectural proposal that aims at easing and speeding up the design and implementation of adaptive feedback loops in modern SASs. Our solution enables both structural and parameter adaptation of the loop elements. Moreover, it provides a highly modular design that allows SASs' owners to support a variety of feedback loop settings, from centralized to fully decentralized. In this work, HAFLoop has been implemented as a framework for Java-based systems and evaluated in two emerging software application domains: self-driving vehicles and IoT networks. Results demonstrate that our proposal eases and accelerates the development of adaptive feedback loops and show how it can help address some of the most relevant challenges of self-driving vehicles and IoT applications. Concretely, HAFLoop has been shown to improve the runtime availability and operation of SASs' feedback loops.
Chapter
Following a brief discussion of the critical behaviour of the standard self-avoiding walk, we introduce the continuous-time weakly self-avoiding walk (also called the lattice Edwards model). We derive the BFS-Dynkin isomorphism which provides a random walk representation for spin systems. We introduce an anti-commuting fermion field represented by differential 1-forms, and explain an important connection with supersymmetry. We prove the localisation theorem, and use it to derive a representation of the weakly self-avoiding walk in terms of a supersymmetric spin system. For the 4-dimensional weakly self-avoiding walk, the renormalisation group method developed in this book has been extended to analyse the supersymmetric spin system and thereby yield results concerning the critical behaviour of the weakly self-avoiding walk.
Chapter
Cyber-Physical Systems (CPS) are interconnected devices, reactive and dynamic in response to sensed external and internal triggers. The H2020 CERBERO EU Project is developing a design environment composed of modelling, deployment, and verification tools for adaptive CPS. This paper focuses on its efficient support for run-time self-adaptivity.
Conference Paper
The validity of systems at run time depends on the features included in those systems operating as specified. However, when feature interactions occur, the specifications no longer reflect the state of the run-time system due to the conflict. While methods exist to detect feature interactions at design time, conflicts that cause features to fail may still arise when new detected feature interactions are considered unreachable, new features are added, or an exhaustive design-time detection approach is impractical due to computational costs. This paper introduces Thoosa, an approach for using models at run time to detect features that can fail due to n-way feature interactions at run time and thereby trigger mitigating adaptations and/or updates to the requirements. We illustrate our approach by applying Thoosa to an industry-based automotive braking system comprising multiple subsystems.
Chapter
Model checking is a computer-assisted method for the analysis of dynamical systems that can be modeled by state-transition systems. Drawing from research traditions in mathematical logic, programming languages, hardware design, and theoretical computer science, model checking is now widely used for the verification of hardware and software in industry. This chapter is an introduction and short survey of model checking. The chapter aims to motivate and link the individual chapters of the handbook, and to provide context for readers who are not familiar with model checking.
Book
This book provides engineers with focused treatment of the mathematics needed to understand probability, random variables, and stochastic processes, which are essential mathematical disciplines used in communications engineering. The author explains the basic concepts of these topics as plainly as possible so that people with no in-depth knowledge of these mathematical topics can better appreciate their applications in real problems. Applications examples are drawn from various areas of communications. If a reader is interested in understanding probability and stochastic processes that are specifically important for communications networks and systems, this book serves his/her need. • Narrows down probability, random variables, stochastic processes to an essential set of topics that communications systems engineers must understand; • Presents difficult proofs of theories so that the reader can gain working level understanding of the subject as applied to communications networks and systems; • Provides examples that the author encountered through his long experience in both teaching and industry; • Provides a comprehensive review of the elements of complex variables, linear algebra and set theory -- pre-requisites for understanding the main topics of the book.
Conference Paper
We present a new method for statistical verification of quantitative properties over a partially unknown system with actions, utilising a parameterised model (in this work, a parametric Markov decision process) and data collected from experiments performed on the underlying system. We obtain the confidence that the underlying system satisfies a given property, and show that the method uses data efficiently and is thus robust to the amount of data available. These characteristics are achieved by firstly exploiting parameter synthesis to establish a feasible set of parameters for which the underlying system will satisfy the property; secondly, by actively synthesising experiments to increase the amount of information in the collected data that is relevant to the property; and finally, by propagating this information over the model parameters, obtaining a confidence that reflects our belief in whether or not the system parameters lie in the feasible set, thereby solving the verification problem.
Article
This paper reports on a self-adaptive energy harvesting system, which is able to adapt its eigenfrequency to the operating conditions of power units. The power required for frequency tuning is delivered by the energy harvester itself. The tuning mechanism is based on a magnetic concept and incorporates a circular tuning magnet and a coupling magnet. In this manner, both coupling modes (attractive and repulsive) can be utilized for tuning the eigenfrequency of the energy harvester. The tuning range and its center frequency can be tailored to the application by careful design of the spring stiffness and the gap between tuning magnet and coupling magnet. Experimental results demonstrate that, in contrast to a conventional non-tunable vibration energy harvester, the net power can be significantly increased if a self-adaptive system is utilized, although additional power is required for regular adjustments of the eigenfrequency. The outcome confirms that active tuning is a real and practical option to extend the operational frequency range and to increase the net power of a conventional vibration energy harvester.
Article
Traffic accidents and congestion problems continue to worsen worldwide. Because of the vast number of vehicles manufactured and sold every year, the transportation sector is significantly stressed, leading to more accidents and fatalities and adverse environmental and economic impacts. Efforts across the world for smart transportation cyber-physical systems (CPS) are aimed at addressing a range of problems, including reducing traffic accidents, decreasing congestion, reducing fuel consumption, reducing time spent in traffic jams, and improving transportation safety. Thus, smart transportation CPS is expected to play a major role in the design and development of intelligent transportation systems. Advances in embedded systems, wireless communications, and sensor networks provide the opportunity to bridge physical components and processes with the cyber world, leading to cyber-physical systems (CPS). Feedback for control through wireless communication in transportation CPS is one of the major components for both safety and infotainment applications, where vehicles exchange information using vehicle-to-vehicle (V2V) communication through a vehicular ad hoc network (VANET) and/or vehicle-to-roadside (V2R) communication. For wireless communication, IEEE provides the 802.11p standard for Dedicated Short Range Communication (DSRC) in Wireless Access in Vehicular Environments (WAVE). In this paper, we present how different parameters (e.g., sensing time, association time, number of vehicles, relative speed of vehicles, overlapping transmission range, etc.) affect communication in smart transportation CPS. Furthermore, we also present the driving components, current trends, challenges, and future directions for transportation CPS.
Article
Thousands of industrial gas leaks occur every year, with many leading to injuries, deaths, equipment damage, and disastrous environmental effects. There have been many attempts at solving this problem, but with limited success. This paper proposes a wireless gas leak detection and localization solution. With a monitoring network of 20 wireless devices covering 200 m², 60 propane releases are performed. The detection and localization algorithms proposed here are applied to the collected concentration data, and the methodology is evaluated. A detection rate of 91% is achieved, with seven false alarms recorded over 3 days, and an average detection delay of 108 s. The localization results show an accuracy of 5 m. Recommendations for future explosive gas sensor design are then presented.
Article
Imagine that you are standing at an intersection in the centre of a large city whose streets are laid out in a square grid. You choose a street at random and begin walking away from your starting point, and at each intersection you reach you choose to continue straight ahead or to turn left or right.
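As a toy illustration of the walk described above (never reversing, choosing straight, left, or right at each intersection with equal probability), a short simulation is sketched below; the step count and random seed are arbitrary.

```python
import random

def city_walk(steps, seed=0):
    """Walk on a square street grid: at each intersection continue straight,
    turn left, or turn right (never reverse), each with equal probability."""
    rng = random.Random(seed)
    x, y, dx, dy = 0, 0, 0, 1          # start at the origin heading "north"
    for _ in range(steps):
        turn = rng.choice(("straight", "left", "right"))
        if turn == "left":
            dx, dy = -dy, dx
        elif turn == "right":
            dx, dy = dy, -dx
        x, y = x + dx, y + dy
    return x, y                         # displacement from the starting intersection

print(city_walk(1000))
```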
Article
When studying convergence of measures, an important issue is the choice of probability metric. We provide a summary and some new results concerning bounds among some important probability metrics/distances that are used by statisticians and probabilists. Knowledge of other metrics can provide a means of deriving bounds for another one in an applied problem. Considering other metrics can also provide alternate insights. We also give examples showing that rates of convergence can depend strongly on the metric chosen. Careful consideration is therefore necessary when choosing a metric.
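As one concrete example of the kind of bound surveyed, Pinsker's inequality relates total variation distance and Kullback-Leibler divergence, TV(p, q) ≤ sqrt(KL(p‖q)/2). A small numerical check is sketched below; the two toy distributions are arbitrary.

```python
import numpy as np

def total_variation(p, q):
    """TV distance for discrete distributions: half the L1 difference."""
    return 0.5 * np.abs(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)).sum()

def kl_divergence(p, q):
    """KL(p || q) with natural logarithm, skipping zero-probability terms of p."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
tv, kl = total_variation(p, q), kl_divergence(p, q)
print(tv, np.sqrt(kl / 2), tv <= np.sqrt(kl / 2))   # Pinsker's bound holds
```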
Gabriel A. Moreno, Adaptation Timing in Self-Adaptive Systems, PhD Dissertations 925, 2017.
Mehran Alidoost Nia, A Random Walk-Based Pattern-Matching Simulator for Verification of Incomplete Markov Models, 2019, https://github.com/alidoostnia/Self-adaptive-pattern-matching/, last updated: January 2021.
National Renewable Energy Laboratory (NREL), Solar Radiation Resource Information, [Online]. Available: http://www.nrel.gov/rredc/, July 2019.
Roland Bauerschmidt, David C. Brydges, Gordon Slade, Self-Avoiding Walk and Supersymmetry, in: Introduction to a Renormalisation Group Method, Springer, 2019.
Simos Gerasimou, Runtime Quantitative Verification of Self-Adaptive Systems, Ph.D. dissertation, University of York, York, UK, 2016.
Muhammad Faizan Ghuman, Adnan Iqbal, Hassaan Khaliq Qureshi, Marios Lestas, ASIM: solar energy availability model for wireless sensor networks, in: Proceedings of the 3rd International Workshop on Energy Harvesting & Energy Neutral Sensing Systems (ENSsys '15), ACM, 2015, pp. 21-26.
Elizabeth Polgreen, Viraj B. Wijesuriya, Sofie Haesaert, Alessandro Abate, Automated experiment design for data-efficient verification of parametric Markov decision processes, in: Quantitative Evaluation of Systems (QEST) 2017, Lecture Notes in Computer Science 10503 (2017) 259-274.
Mehran Alidoost Nia, Mehdi Kargahi, Fathiyeh Faghih, Probabilistic approximation of runtime quantitative verification in self-adaptive systems, Microprocessors and Microsystems 72 (2020) 102943.
Mehran Alidoost Nia, Behnam Bahrak, Mehdi Kargahi, Benjamin Fabian, Detecting new generations of threats using attribute-based attack graphs, IET Information Security 13 (7) (2019) 293-303.
Mehran Alidoost Nia, Mehdi Kargahi, Alessandro Abate, Self-adaptation with imperfect monitoring in solar energy harvesting systems, in: 2020 CSI/CPSSI International Symposium on Real-Time and Embedded Systems and Technologies (RTEST), Tehran, Iran, IEEE, 2020, pp. 1-8.
F. Palumbo, et al., Hardware/software self-adaptation in CPS: the CERBERO project approach, in: Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS 2019), Lecture Notes in Computer Science 11733, Springer, Cham, 2019.