ABSTRACT: Synchronous dataflow graphs (SDFGs) are widely used to represent DSP algorithms and streaming media applications. This paper presents several methods for binding and scheduling SDFGs on a multiprocessor platform. Exploring the state-space generated by a self-timed execution (STE) of an SDFG, we present an exact method for static rate-optimal scheduling of SDFGs via implicit retiming and unfolding. By modeling a constraint as an extra enabling condition for the STE, we obtain a constrained STE that implies a schedule under the constraint. We present a general framework for scheduling SDFGs under constraints on the number of processors, buffer sizes, auto-concurrency, or combinations of them. Exploring the state-space generated by the constrained STE, we can check whether a retiming exists that leads to a rate-optimal schedule under the processor (or memory) constraint. Combining this with a binary search strategy, we present heuristic methods to find a proper retiming and a static schedule that executes the retimed SDFG with optimal rate and with as few processors (or as little storage space) as possible. None of the methods explicitly converts an SDFG to its equivalent homogeneous SDFG, whose size may be tremendously larger than that of the original SDFG. We perform experiments on several models of real applications and hundreds of synthetic SDFGs. The results show that the exact method outperforms existing methods significantly; our heuristics reduce the resources used and are computationally efficient.
Article · Jan 2016 · IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
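The binary-search strategy mentioned in the abstract — shrinking a resource budget until the rate-optimal schedule check fails — can be illustrated generically. The sketch below is not the authors' algorithm; `min_resource` and the toy feasibility predicate are hypothetical names standing in for the state-space exploration of the constrained STE.

```python
def min_resource(lo, hi, feasible):
    """Binary search for the smallest budget in [lo, hi] for which
    `feasible` holds. Assumes feasibility is monotone: if a budget
    works, every larger budget works too (as with processor counts
    or buffer sizes for a fixed throughput target)."""
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid      # mid suffices; try smaller budgets
        else:
            lo = mid + 1  # mid fails; need strictly more
    return lo


# Toy feasibility check (hypothetical): p processors suffice when
# p times the target period covers the total work of one iteration.
total_work, period = 7, 3
print(min_resource(1, 10, lambda p: p * period >= total_work))  # 3
```

In the paper's setting, the predicate would instead run a constrained self-timed execution and check whether a rate-optimal retiming exists under that budget; the search structure is the same.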
ABSTRACT: General-purpose platforms are characterized by unpredictable timing behavior. Real-time schedules of tasks on general-purpose platforms need to be robust against variations in task execution times. We define robustness in terms of the expected number of tasks that miss deadlines. We present an iterative robust scheduler that produces robust multiprocessor schedules of directed acyclic graphs with a low expected number of tasks that miss their deadlines. We experimentally show that this robust scheduler produces significantly more robust schedules than a scheduler using nominal execution times, on both real-world and synthetic test cases.
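The robustness measure used here, the expected number of deadline misses, can be estimated by sampling task execution times. The sketch below is a minimal illustration, not the paper's iterative scheduler; `expected_misses`, the single-processor chain of tasks, and the truncated-Gaussian execution-time model are all assumptions made for the example.

```python
import random

def expected_misses(tasks, trials=1000, seed=0):
    """Monte Carlo estimate of the expected number of deadline misses
    for tasks run back-to-back on one processor.
    Each task is a tuple (mean, stddev, deadline); execution times are
    drawn from a Gaussian truncated at zero."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        t = 0.0
        for mean, std, deadline in tasks:
            t += max(0.0, rng.gauss(mean, std))
            if t > deadline:
                total += 1
    return total / trials

# With zero variance the estimate is exact: the second task always
# finishes at t = 2.0 and misses its 1.5 deadline.
print(expected_misses([(1, 0, 2), (1, 0, 1.5)]))  # 1.0
```

A robust scheduler in the abstract's sense would search over task orderings and processor assignments to minimize this quantity, rather than the nominal makespan.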
ABSTRACT: Development of high-level supervisory controllers is an important challenge in the design of high-tech systems. It has become a significant issue due to increased complexity, combined with demands for verified quality, time to market, ease of development, and integration of new functionality. To deal with these challenges, model-based engineering approaches are suggested as a cost-effective way to support easy adaptation, validation, synthesis, and verification of controllers. This paper presents an industrial case study on modular design of a supervisory controller for wafer logistics in lithography machines. The uncontrolled system and control requirements are modeled independently in a modular way, using small, loosely coupled and minimally restrictive extended finite automata. The multiparty synchronization mechanism that is part of the specification formalism provides clear advantages in terms of modularity, traceability, and adaptability of the model. We show that being able to refer to variables and states of automata in guard expressions and state-based requirements, enabled by the use of extended finite automata, provides concise models. Additionally, we show how modular synthesis allows construction of local supervisors that ensure safety of parts of the system, since monolithic synthesis is not feasible for our industrial case.
ABSTRACT: Cyber-physical systems (CPS) play an important role in the modern high-tech industry. Designing such systems is a challenging task due to their multi-disciplinary nature and the range of abstraction levels involved. To facilitate hands-on experience with such systems, we develop a cyber-physical platform that aids in research and education on CPS.
This paper describes this platform, which contains all typical CPS components. The platform is used in various research and education projects for bachelor, master, and PhD students. We discuss the platform, a number of projects, and the educational opportunities they provide.
DESCRIPTION: Technical Report (2011) [ISCAS-SKLCS-11-53]: Synchronous dataflow graphs (SDFGs) are widely used to model multi-rate digital signal processing (DSP) algorithms. A lower iteration period of such a model implies a faster execution of a DSP algorithm. Retiming is a simple but efficient graph transformation technique for performance optimization, which can decrease the iteration period without affecting functionality. In this paper, we deal with the iteration period minimization problem: retiming an SDFG to achieve the smallest possible iteration period. We present a heuristic method that works directly on SDFGs, without converting them to their equivalent homogeneous SDFGs. It analyzes the state-space generated by a self-timed execution of the SDFG to obtain a near-optimal retiming. Our experimental results show that in 85% of the test cases that allow actors to fire auto-concurrently, our method obtains reduced iteration periods close to the optimal ones, while being ten times faster than the state-of-the-art exact method; in all the test cases in which auto-concurrent firing of actors is excluded, our method obtains reduced iteration periods almost the same as the optimal ones, while being 100 times faster than the exact method. Combining parts of the exact method with our novel method, we present an improved algorithm, whose execution time is further reduced by 22%.
ABSTRACT: A Large Scale Printer (LSP) is a Cyber Physical System (CPS) printing thousands of sheets per day with high quality. Print requests arrive at run-time, requiring online scheduling. We capture the LSP scheduling problem as online scheduling of re-entrant flowshops with sequence-dependent setup times and relative due dates, with makespan minimization as the scheduling criterion. Exhaustive approaches like Mixed Integer Programming can be used, but they are compute-intensive and not suited for online use. We present a novel heuristic for scheduling of LSPs that on average requires 0.3 seconds per sheet to find schedules for industrial test cases. We compare the schedules to lower bounds, to schedules generated by the current scheduler, and to schedules generated by a modified version of the classical NEH heuristic (MNEH). On average, the proposed heuristic generates schedules that are 40% shorter than those of the current scheduler, differ on average by 25% from the estimated lower bounds, and have less than 67% of the makespan of the schedules generated by the MNEH heuristic.
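The classical NEH heuristic that MNEH modifies builds a job sequence by ordering jobs by decreasing total processing time and inserting each job at the position that minimizes the partial makespan. The sketch below illustrates plain NEH on a permutation flowshop, without the re-entrance and setup times of the LSP problem; `makespan` and `neh` are illustrative names, not the paper's code.

```python
def makespan(seq, proc):
    """Makespan of job sequence `seq` on a permutation flowshop.
    proc[j][k] = processing time of job j on machine k."""
    m = len(proc[0])
    finish = [0.0] * m  # completion time of the last job on each machine
    for j in seq:
        for k in range(m):
            # a job starts on machine k when both the machine is free
            # and the job has finished on machine k-1
            start = max(finish[k], finish[k - 1] if k else 0.0)
            finish[k] = start + proc[j][k]
    return finish[-1]

def neh(proc):
    """NEH insertion heuristic: sort jobs by decreasing total work,
    then insert each at the position minimizing partial makespan."""
    order = sorted(range(len(proc)), key=lambda j: -sum(proc[j]))
    seq = []
    for j in order:
        best = min((makespan(seq[:i] + [j] + seq[i:], proc), i)
                   for i in range(len(seq) + 1))
        seq.insert(best[1], j)
    return seq, makespan(seq, proc)

print(neh([[1, 3], [3, 1]]))  # ([0, 1], 5.0)
```

An online variant for LSPs must additionally handle sheets arriving over time, sequence-dependent setups, and re-entrant passes, which is where the paper's heuristic departs from this baseline.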
ABSTRACT: Wireless Sensor Networks (WSNs) are commonly deployed in dynamic environments where events, such as moving sensor nodes and changing external interference, impact the performance, or Quality of Service (QoS), of the network. QoS is expressed by the values of multiple, possibly conflicting, network quality metrics, such as network lifetime and maximum latency of communicating a packet to the sink. Sufficient QoS should be provided by the WSN to ensure that the end-user can successfully use the WSN to perform its application. We propose a distributed reconfiguration approach that actively maintains a sufficient level of QoS at runtime for a heterogeneous WSN in a dynamic environment. Every node uses a feedback control strategy to resolve any difference between the current and required QoS of the network by adapting controllable parameters of the protocol stack. Example parameters are the transmission power and maximum number of packet retransmissions. Nodes collaborate such that, with the combined adaptations, the required network QoS is achieved. The behavior of the reconfiguration approach and the tradeoffs involved are analyzed in detail. With the use of simulations and experiments with actual deployments, we show that our approach allows a better optimization of QoS objectives while constraints are met; for example, it achieves the same packet loss with a significantly longer lifetime, compared to current (re-)configuration approaches.
Article · Mar 2015 · ACM Transactions on Sensor Networks
ABSTRACT: Past years have seen intense research on reliability techniques for error detection and recovery at various levels, ranging from the circuit level up to the architectural or even software level. In such scenarios, affordable techniques for error correction usually imply a timing penalty; e.g., check-pointing usually requires repeating part of the computation, which imposes a higher computation time. This can be problematic for real-time embedded control applications, especially in the presence of intermittent hardware faults, for which delays due to re-computation are encountered repeatedly with a high repetition rate. In this work, we investigate a setting where the control loops are executed on an unreliable embedded platform that may suffer from such intermittent faults. First, we characterize the impact of intermittent faults in the hardware by using an intermittent bit-flip fault model and RTL-level error-effect simulation. Subsequently, we look at novel fault-tolerant control algorithms that guarantee stability of the loops even in the presence of repeated timing errors due to the error recovery of the unreliable hardware.
ABSTRACT: Object detection and tracking is one of the most important components in computer vision applications. To carefully evaluate the performance of detection and tracking algorithms, it is important to develop benchmark data sets. One of the most tedious and error-prone aspects of developing benchmarks is the generation of the ground truth. This paper presents FAST-GT (FAst Semi-automatic Tool for Ground Truth generation), a new generic framework for the semi-automatic generation of ground truths. FAST-GT reduces the need for manual intervention, thus speeding up the ground-truthing process.
ABSTRACT: Crucial to the success of Body Area Sensor Networks (BASNs) is the flexibility with which stakeholders can share, extend and adapt the system with respect to sensors, data and functionality. The first step is to develop an interoperable platform with explicit interfaces, which takes care of common management tasks. Beyond that, interoperability is defined by semantics. This paper presents the analysis, design, implementation and evaluation of a semantic layer within an existing BASN platform for the purpose of improving the semantic interoperability among sensor networks and applications. We adopt an ontology-based approach, but rather than having a single overall ontology, we find that using clear semantic domains and mappings between them improves composability and reduces interoperability problems. We discuss the design choices and a reference implementation on an Android phone and actual sensor devices. We show by a qualitative evaluation that this semantic interoperability indeed provides significant improvements in flexibility.
ABSTRACT: Patient observations in health care, subjective surveys in social research, and dyke sensor data in water management are all examples of measurements. Several ontologies already exist to express measurements, W3C's SSN ontology being a prominent example. However, these ontologies treat quantities and properties as equal, and ignore the foundation required to establish comparability between sensor data. Moreover, a measure of an observation is in itself almost always inconclusive without the context in which the measure was obtained. ContoExam addresses these aspects, providing a unifying capability for context-aware expressions of observations about quantities and properties alike, by aligning them to ontological foundations and by binding observations inextricably to their context.
ABSTRACT: Exploration of design alternatives and estimation of their key performance metrics, such as latency and energy consumption, is essential for making the proper design decisions in the early phases of system development. Often, high-level models of the dynamic behavior of the system are used for the analysis of design alternatives. Our work presents a blueprint for building efficient and re-usable models for this purpose. It builds on the well-known Y-chart pattern in that it gives more structure for the proper modeling of interaction on shared resources, which plays a prominent role in software-intensive embedded systems. We show how the blueprint can be used to model a small yet illustrative example system with the Uppaal tool and with the Java general-purpose programming language, and reflect on their respective strengths and weaknesses. The Java-based approach has resulted in a very flexible and fast discrete-event simulator with many re-usable components. It is currently used by TNO-ESI and Océ-Technologies B.V. for early model-based performance analysis that supports the design process for professional printing systems.
Article · Aug 2014 · International Journal on Software Tools for Technology Transfer
ABSTRACT: Tasks executing on general-purpose multiprocessor platforms exhibit variations in their execution times. As such, there is a need to explicitly consider robustness, i.e., tolerance to these fluctuations. This work aims to quantify the robustness of schedules of directed acyclic graphs (DAGs) on multiprocessors by defining probabilistic robustness metrics, and to present a new approach to robustness analysis that obtains these metrics. Stochastic execution times of tasks are used to compute completion-time distributions, which are then used to compute the metrics. To overcome the difficulties involved with the max operation on distributions, a new curve-fitting approach is presented with which we can derive a distribution from a combination of analytical and limited simulation-based results. The approach has been validated on schedules of time-critical applications in ASML wafer scanners.
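The max operation on completion-time distributions that motivates the curve-fitting approach arises whenever a task waits for several stochastic predecessors. The sketch below is a plain Monte Carlo baseline, not the paper's analytical method; `completion_times`, the Gaussian task-time model, and the absence of resource conflicts are all assumptions made for illustration.

```python
import random

def completion_times(dag, means, stds, trials=2000, seed=1):
    """Sample the completion-time distribution of a DAG of tasks.
    dag[v] = list of predecessors of task v; tasks are numbered in
    topological order and start as soon as all predecessors finish
    (resource contention is ignored in this sketch).
    Returns one overall completion-time sample per trial."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        fin = {}
        for v in range(len(means)):
            # max over predecessor finish times: the max operation
            # on distributions, realized sample by sample
            start = max((fin[p] for p in dag[v]), default=0.0)
            fin[v] = start + max(0.0, rng.gauss(means[v], stds[v]))
        samples.append(max(fin.values()))
    return samples
```

From such samples one can estimate robustness metrics directly, e.g. the probability that the schedule finishes before a deadline; the paper's contribution is obtaining such distributions without relying solely on simulation.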
ABSTRACT: Node mobility is a key feature of using Wireless Sensor Networks (WSNs) in many sensory applications, such as healthcare. The Medium Access Control (MAC) protocol should properly support mobility in the network. In particular, mobility is complicated for contention-free protocols like Time Division Multiple Access (TDMA), in which efficient access to the shared medium is scheduled based on a node's local neighborhood. This neighborhood may vary over time due to node movement or other dynamics. In scenarios including body-area networking, for instance, some clusters of nodes move together, creating further challenges but also opportunities. This article presents a MAC protocol, MCMAC, that provides efficient support for cluster mobility in TDMA-based MAC protocols in WSNs. The proposed protocol exploits a hybrid contention-free and contention-based communication approach to support cluster mobility. This relieves the protocol from the rescheduling demands of frequent node movements. Moreover, we propose a listening scheduling mechanism that avoids idle listening to mobile nodes, leading to considerable energy savings for sensor nodes. The protocol is validated by performing several experiments in a real-world large-scale deployment including several mobile clusters. The protocol is also evaluated by extensive simulation of networks with various scales and configurations.
Article · Jun 2014 · ACM Transactions on Sensor Networks
ABSTRACT: Synchronous dataflow graphs (SDFGs) are widely used to model digital signal processing (DSP) and streaming media applications. In this paper, we use retiming to optimize SDFGs to achieve a high throughput with low storage requirements. Using a memory constraint as an additional enabling condition, we define a memory-constrained self-timed execution of an SDFG. Exploring the state-space generated by the execution, we can check whether a retiming exists that leads to a rate-optimal schedule under the memory constraint. Combining this with a binary search strategy, we present a heuristic method to find a proper retiming and a static schedule that executes the retimed SDFG with optimal rate (i.e., maximal throughput) and with as little storage space as possible. Our experiments are carried out on hundreds of synthetic SDFGs and several models of real applications. Results on both synthetic graphs and real applications show that, in 79% of the tested models, our method leads to a retimed SDFG whose rate-optimal schedule requires less storage space than the proven minimal storage requirement of the original graph, and in 20% of the cases, the returned storage requirements equal the minimal ones. The average improvement is about 7.3%. The results also show that our method is computationally efficient.
ABSTRACT: The timed dataflow model of computation is a useful performance analysis tool for Electronic System Level design automation and embedded software synthesis. It is used to model systems, including platform mapping and resource scheduling, of components communicating and synchronizing in regular patterns. Its determinism gives it strong analysability properties and makes it less subject to state-space explosion problems. Because of its monotonic temporal behaviour, it can provide hard real-time guarantees on throughput and latency. It is expressive enough to cover a fairly large class of applications and platforms. The trend, however, in both embedded applications and their platforms is to become more dynamic, reaching the limits of what the model can express and analyse with tight performance guarantees. Scenario-aware dataflow (SADF) is an extension that allows more dynamism to be expressed, introducing a controlled amount of non-determinism into the model to represent different scenarios of behaviour. The combination of a relatively infrequent switching between scenarios and still deterministic dataflow behaviour within scenarios stretches the expressiveness of the model while keeping
ABSTRACT: Gossip-based Wireless Sensor Networks (GWSNs) are complex systems of an inherently random nature. Planning and designing GWSNs requires a fast and adequately accurate mechanism to estimate system performance. As a first contribution, we propose a performance analysis technique that simulates the gossip-based propagation of each single piece of data in isolation. This technique applies to GWSNs in which the dissemination of data from a specific sensor does not depend on the dissemination of data generated by other sensors. We model the dissemination of a piece of data with a Stochastic-Variable Graph Model (SVGM). An SVGM is a weighted-graph abstraction in which the edges represent stochastic variables that model propagation delays between neighboring nodes. Latency and reliability performance properties are obtained efficiently through a stochastic shortest-path analysis on the SVGM using Monte Carlo (MC) simulation. The method is accurate and fast, and applicable to both partial and complete system analysis. It outperforms traditional discrete-event simulation. As a second contribution, we propose a centrality-based stratification method that combines structural network analysis and MC partial simulation to further increase the efficiency of system-level analysis while maintaining adequate accuracy. We analyzed the proposed performance evaluation techniques through an extensive set of experiments, using a real deployment and simulations at different levels of abstraction.
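The stochastic shortest-path analysis on an SVGM can be approximated with plain Monte Carlo: draw a delay for every edge, run a shortest-path computation, and collect the resulting latency samples. The sketch below is a simplified illustration of that idea; the function name `sample_latency` and the truncated-Gaussian delay model are assumptions, not the paper's formulation.

```python
import heapq
import random

def sample_latency(adj, src, dst, trials=500, seed=7):
    """Monte Carlo latency samples from src to dst in an SVGM-like
    graph. adj[u] = [(v, mean_delay, std_delay), ...].
    Each trial draws concrete edge delays (Gaussian, truncated at
    zero) and runs Dijkstra on the sampled weights."""
    rng = random.Random(seed)
    out = []
    for _ in range(trials):
        # one concrete weight per edge for this trial
        w = {(u, v): max(0.0, rng.gauss(m, s))
             for u, es in adj.items() for v, m, s in es}
        dist = {src: 0.0}
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, _, _ in adj.get(u, []):
                nd = d + w[(u, v)]
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        out.append(dist.get(dst, float("inf")))
    return out
```

Latency and reliability metrics follow directly from the samples, e.g. the mean latency or the fraction of trials below a latency bound; stratification, as in the second contribution, would reduce the number of trials needed.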
ABSTRACT: Sensor nodes in many Wireless Body Area Network (WBAN) architectures are supposed to deliver sensed data to a gateway node on the body. To satisfy the data delivery requirements, the network needs to adapt itself to changes in the connection status of the body nodes to the gateway. As a prerequisite, Link Quality Estimation (LQE) needs to be performed to detect the connection status of the nodes. The quality of links in WBANs is highly time-varying. The LQE technique should be agile, reacting fast to such link quality dynamics, while avoiding frequent fluctuations to reduce the network adaptation overhead. In this paper, we present an empirical study on using different LQE methods for detecting the connection status of body nodes to the gateway in WBANs. A set of experiments using 16 wireless motes deployed on a body is performed to log the behavior of the wireless links. We explore the trade-offs made by each LQE method in terms of agility, stability, and reliability in detecting connection changes by analyzing the experimental data. Moreover, different LQE methods are used in an adaptive multi-hop WBAN mechanism, as a case study, and their impact on the Quality of Service (QoS) is investigated.
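A common LQE building block that illustrates the agility/stability trade-off discussed above is an exponentially weighted moving average (EWMA) of packet receptions, combined with hysteresis thresholds so the connection status does not flap. The sketch below is a generic example, not one of the paper's evaluated methods; the class name and the threshold values are assumptions.

```python
class EwmaLqe:
    """EWMA link-quality estimator with hysteresis.
    Smooths per-packet reception outcomes (1 = received, 0 = lost)
    and flips the connection status only when the smoothed estimate
    crosses asymmetric thresholds, trading agility for stability."""

    def __init__(self, alpha=0.2, up=0.8, down=0.4):
        self.alpha = alpha        # higher alpha -> more agile, less stable
        self.up, self.down = up, down  # asymmetric flip thresholds
        self.q = 1.0              # smoothed link quality estimate
        self.connected = True

    def update(self, received):
        self.q = (1 - self.alpha) * self.q + self.alpha * received
        if self.connected and self.q < self.down:
            self.connected = False
        elif not self.connected and self.q > self.up:
            self.connected = True
        return self.connected
```

With `alpha=0.2`, a run of consecutive losses pulls the estimate below `down` after a handful of packets, while the gap between `down` and `up` prevents rapid toggling on borderline links, which is exactly the fluctuation the abstract warns against.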
ABSTRACT: This paper provides an overview of the architecture for self-organizing, co-operative and robust Building Automation Systems (BAS) proposed by the EC-funded FP7 SCUBA project. We describe the current situation in monitoring and control systems and outline the typical stakeholders involved in the case of building automation systems. We derive seven typical use cases which will be demonstrated and evaluated on pilot sites. From these use cases, the project designed an architecture relying on six main modules that realize the design, commissioning and operation of self-organizing, co-operative, robust BAS.
ABSTRACT: There is a strong push towards smart buildings that aim to achieve comfort, safety and energy efficiency through building automation systems (BAS) that incorporate multiple subsystems such as heating and air-conditioning, lighting, access control, etc. The design, commissioning and operation of BAS are already challenging when handling an individual subsystem; when introducing co-operation between systems, however, the complexity increases dramatically. Balancing the contradictory requirements of comfort, safety and energy efficiency, and coping with the dynamics of constantly changing environmental conditions, usage patterns, user needs, etc., is a demanding task. This paper outlines an approach to the systematic engineering of cooperating, adaptive building automation systems, which aims to formalize the engineering approach in the form of an integrated tool chain that supports the building stakeholders in producing site-specific, robust and reliable building automation.