ABSTRACT: Crucial to the success of Body Area Sensor Networks is the flexibility with which stakeholders can share, extend and adapt the system with respect to sensors, data and functionality. The first step is to develop an interoperable platform with explicit interfaces, which takes care of common management tasks. Beyond that, interoperability is defined by semantics. This paper presents the analysis, design, implementation and evaluation of a semantic layer within an existing BASN platform for the purpose of improving the semantic interoperability among sensor networks and applications. We adopt an ontology-based approach, but rather than having a single overall ontology, we find that using clear semantic domains and mappings between them improves composability and reduces interoperability problems. We discuss the design choices and a reference implementation on an Android phone and actual sensor devices. We show by a qualitative evaluation that this semantic interoperability indeed provides significant improvements in flexibility.
9th International Conference on Body Area Networks (BodyNets 2014), London, Great Britain; 10/2014
ABSTRACT: Patient observations in health care, subjective surveys in social research and dyke sensor data in water management are all examples of measurements. Several ontologies already exist to express measurements, W3C's SSN ontology being a prominent example. However, these ontologies treat quantities and properties as equivalent, and ignore the foundation required to establish comparability between sensor data. Moreover, a measure of an observation is almost always inconclusive without the context in which it was obtained. ContoExam addresses these aspects, providing a unifying capability for context-aware expressions of observations about quantities and properties alike, by aligning them with ontological foundations and by binding observations inextricably to their context.
Formal Ontology in Information Systems (FOIS 2014), Rio de Janeiro, Brazil; 09/2014
ABSTRACT: Synchronous dataflow graphs (SDFGs) are widely used to model digital signal processing (DSP) and streaming media applications. In this paper, we use retiming to optimize SDFGs to achieve a high throughput with low storage requirement. Using a memory constraint as an additional enabling condition, we define a memory constrained self-timed execution of an SDFG. Exploring the state-space generated by the execution, we can check whether a retiming exists that leads to a rate-optimal schedule under the memory constraint. Combining this with a binary search strategy, we present a heuristic method to find a proper retiming and a static scheduling which schedules the retimed SDFG with optimal rate (i.e., maximal throughput) and with as little storage space as possible. Our experiments are carried out on hundreds of synthetic SDFGs and several models of real applications. The results on both synthetic graphs and real applications show that, in 79% of the tested models, our method leads to a retimed SDFG whose rate-optimal schedule requires less storage space than the proven minimal storage requirement of the original graph, and in 20% of the cases, the returned storage requirements equal the minimal ones. The average improvement is about 7.3%. The results also show that our method is computationally efficient.
17th Design, Automation and Test in Europe (DATE 2014), Dresden, Germany; 03/2014
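As a rough illustration of the binary search strategy described in this abstract, the sketch below searches for the smallest storage bound that still admits a rate-optimal retiming. The feasibility oracle `feasible` is a stand-in for the paper's memory-constrained state-space exploration, and the toy oracle at the bottom is invented for the example:

```python
def min_storage_with_rate_optimal_retiming(lower, upper, feasible):
    """Binary-search the smallest storage bound for which `feasible`
    reports that a rate-optimal retiming exists.

    `feasible(m)` stands in for the state-space check of a memory
    constrained self-timed execution; it must be monotone in m.
    """
    best = None
    while lower <= upper:
        mid = (lower + upper) // 2
        if feasible(mid):
            best = mid
            upper = mid - 1   # feasible: try a tighter memory bound
        else:
            lower = mid + 1   # infeasible: allow more storage
    return best

# Toy oracle: pretend a rate-optimal retiming exists from 12 tokens onward.
print(min_storage_with_rate_optimal_retiming(1, 100, lambda m: m >= 12))
```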
ABSTRACT: Gossip-based Wireless Sensor Networks (GWSNs) are complex systems of inherently random nature. Planning and designing GWSNs requires a fast and adequately accurate mechanism to estimate system performance. As a first contribution, we propose a performance analysis technique that simulates the gossip-based propagation of each single piece of data in isolation. This technique applies to GWSNs in which the dissemination of data from a specific sensor does not depend on dissemination of data generated by other sensors. We model the dissemination of a piece of data with a Stochastic-Variable Graph Model (SVGM). An SVGM is a weighted-graph abstraction in which the edges represent stochastic variables that model propagation delays between neighboring nodes. Latency and reliability performance properties are obtained efficiently through a stochastic shortest-path analysis on the SVGM using Monte Carlo (MC) simulation. The method is accurate and fast, applicable to both partial and complete system analysis. It outperforms traditional discrete-event simulation. As a second contribution, we propose a centrality-based stratification method that combines structural network analysis and MC partial simulation, to further increase efficiency of the system-level analysis while maintaining adequate accuracy. We analyzed the proposed performance evaluation techniques through an extensive set of experiments, using a real deployment and simulations at different levels of abstraction.
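The stochastic shortest-path analysis on an SVGM can be illustrated with a minimal Monte Carlo sketch. The 4-node graph, the exponential delay distributions and the 4-time-unit deadline below are illustrative assumptions, not the paper's calibrated models:

```python
import heapq
import random

def sampled_shortest_path(adj, w, src, dst):
    # Plain Dijkstra over one fixed realisation w of the edge delays.
    dist = {src: 0.0}
    pq = [(0.0, src)]
    done = set()
    while pq:
        d, v = heapq.heappop(pq)
        if v in done:
            continue
        done.add(v)
        if v == dst:
            return d
        for u in adj.get(v, []):
            nd = d + w[(v, u)]
            if nd < dist.get(u, float("inf")):
                dist[u] = nd
                heapq.heappush(pq, (nd, u))
    return float("inf")

random.seed(1)
# Hypothetical 4-node SVGM: mean gossip delay per edge; exponential
# sampling is an assumption, not the paper's delay distributions.
means = {(0, 1): 1.0, (0, 2): 2.0, (1, 3): 1.0, (2, 3): 0.5}
adj = {0: [1, 2], 1: [3], 2: [3]}
lat = []
for _ in range(2000):
    w = {e: random.expovariate(1.0 / m) for e, m in means.items()}  # one MC sample
    lat.append(sampled_shortest_path(adj, w, 0, 3))
mean_latency = sum(lat) / len(lat)
reliability = sum(l <= 4.0 for l in lat) / len(lat)  # P[delivery within 4 time units]
print(round(mean_latency, 2), round(reliability, 2))
```

Each Monte Carlo sample fixes one realisation of all edge delays, after which latency is a deterministic shortest-path query; averaging over samples yields the latency and reliability estimates.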
ABSTRACT: Sensor nodes in many Wireless Body Area Network (WBAN) architectures are supposed to deliver sensed data to a gateway node on the body. To satisfy the data delivery requirements, the network needs to adapt itself to the changes in connection status of the body nodes to the gateway. As a prerequisite, Link Quality Estimation (LQE) needs to be done to detect the connection status of the nodes. The quality of links in WBANs is highly time-varying. The LQE technique should be agile to react fast to such link quality dynamics while avoiding frequent fluctuations to reduce the network adaptation overhead. In this paper, we present an empirical study on using different LQE methods for detecting the connection status of body nodes to the gateway in WBANs. A set of experiments using 16 wireless motes deployed on a body are performed to log the behavior of the wireless links. We explore the trade-offs made by each LQE method in terms of agility, stability, and reliability in detecting connection changes by analyzing the experimental data. Moreover, different LQE methods are used in an adaptive multi-hop WBAN mechanism, as a case study, and their impact on the Quality of Service (QoS) is investigated.
Proceedings of the 16th ACM international conference on Modeling, analysis & simulation of wireless and mobile systems; 11/2013
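As one generic example of the kind of LQE method compared in such a study, the sketch below smooths packet-reception indications with an EWMA and uses hysteresis thresholds to keep the reported connection status stable. The parameter values and the class itself are illustrative, not the paper's specific estimators:

```python
class EwmaLqe:
    """EWMA link-quality estimator with hysteresis: a common smoothing
    scheme for packet-reception indications (1 = received, 0 = lost).
    alpha trades agility against stability; the two thresholds damp
    status flapping."""
    def __init__(self, alpha=0.2, up=0.8, down=0.5):
        self.alpha, self.up, self.down = alpha, up, down
        self.q = 1.0          # smoothed reception ratio
        self.connected = True
    def update(self, received):
        self.q = (1 - self.alpha) * self.q + self.alpha * received
        # Hysteresis: disconnect below `down`, reconnect only above `up`.
        if self.connected and self.q < self.down:
            self.connected = False
        elif not self.connected and self.q > self.up:
            self.connected = True
        return self.connected

lqe = EwmaLqe()
trace = [1] * 5 + [0] * 10 + [1] * 20   # a link fading out and recovering
status = [lqe.update(x) for x in trace]
print(status[4], status[14], status[-1])
```

A larger `alpha` makes the estimator more agile but also more prone to fluctuations, which is exactly the agility/stability trade-off the abstract discusses.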
ABSTRACT: Synchronous dataflow graphs (SDFGs) are used extensively to model streaming applications. An SDFG can be extended with scheduling decisions, allowing SDFG analysis to obtain properties, such as throughput or buffer sizes for the scheduled graphs. Analysis times depend strongly on the size of the SDFG. SDFGs can be statically scheduled using static-order schedules. The only generally applicable technique to model a static-order schedule in an SDFG is to convert it to a homogeneous SDFG (HSDFG). This may lead to an exponential increase in the size of the graph and to suboptimal analysis results (e.g., for buffer sizes in multiprocessors). We present techniques to model two types of static-order schedules, i.e., periodic schedules and periodic single appearance schedules, directly in an SDFG. Experiments show that both techniques produce more compact graphs compared to the technique that relies on a conversion to an HSDFG. This results in reduced analysis times for performance properties and tighter resource requirements.
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 10/2013; 32(10):1495-1508.
ABSTRACT: Many combinatorial optimization problems in the embedded systems and design automation domains involve decision making in multidimensional spaces. The multidimensional multiple-choice knapsack problem (MMKP) is among the most challenging of the encountered optimization problems. MMKP problem instances appear for example in chip multiprocessor runtime resource management and in global routing of wiring in circuits. Chip multiprocessor resource management requires solving MMKP under real-time constraints, whereas global routing requires scalability of the solution approach to extremely large MMKP instances. This article presents a novel MMKP heuristic, CPH (for Compositional Pareto-algebraic Heuristic), which is a parameterized compositional heuristic based on the principles of Pareto algebra. Compositionality allows incremental computation of solutions. The parameterization allows tuning of the heuristic to the problem at hand. These aspects make CPH a very versatile heuristic. When tuning CPH for computation time, MMKP instances can be solved in real time with better results than the fastest MMKP heuristic so far. When tuning CPH for solution quality, it finds several new solutions for standard benchmarks that are not found by any existing heuristic. CPH furthermore scales to extremely large problem instances. We illustrate and evaluate the use of CPH in both chip multiprocessor resource management and in global routing.
ACM Transactions on Design Automation of Electronic Systems (TODAES). 10/2013; 18(4).
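The compositional Pareto-algebraic idea behind a heuristic like CPH can be sketched on a toy knapsack instance: combine the choices of one group at a time with the current partial solutions, prune combinations that exceed the capacity, and Pareto-minimise after each step. The instance, the single capacity dimension and the (value, weight) encoding below are illustrative simplifications of the multidimensional problem:

```python
def pareto_min(confs):
    """Keep only Pareto-optimal (value, weight) configurations:
    maximise value, minimise weight."""
    confs = sorted(set(confs), key=lambda c: (c[1], -c[0]))  # weight asc, value desc
    front, best = [], float("-inf")
    for v, w in confs:
        if v > best:          # strictly better value than anything lighter
            front.append((v, w))
            best = v
    return front

def compose(front, group, budget):
    # Free product of partial solutions with one group's choices,
    # followed by Pareto minimisation: the compositional step.
    prod = [(v + gv, w + gw) for v, w in front for gv, gw in group
            if w + gw <= budget]
    return pareto_min(prod)

# Toy MMKP: pick exactly one (value, weight) item from each group.
groups = [[(3, 2), (5, 4)], [(1, 1), (4, 3)], [(2, 2), (6, 5)]]
front = [(0, 0)]
for g in groups:
    front = compose(front, g, budget=9)
best = max(front)   # highest-value configuration within the budget
print(best)
```

Pareto-minimising after every composition step keeps the intermediate solution sets small, which is what makes the incremental computation tractable; bounding the set size further would turn this exact composition into a tunable heuristic.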
ABSTRACT: Latest trends in embedded platform architectures show a steady shift from high frequency single core platforms to lower-frequency but highly-parallel execution platforms. Scheduling applications with stringent latency requirements on such multiprocessor platforms is challenging. Our work is motivated by the scheduling challenges faced by ASML, the world's leading provider of wafer scanners. A wafer scanner is a complex cyber-physical system that manipulates silicon wafers with extreme accuracy at high throughput. Typical control applications of the wafer scanner consist of thousands of precedence-constrained tasks with latency requirements. Machines are customized so that precise characteristics of the control applications to be scheduled and the execution platform are only known during machine start-up. This results in large-scale scheduling problems that need to be solved during start-up of the machine under a strict timing constraint on the schedule delivery time. This paper introduces a fast and scalable static-order scheduling approach for applications with stringent latency requirements and a fixed binding on multiprocessor platforms. It uses a heuristic that makes scheduling decisions based on a new metric to find feasible schedules that meet timing requirements as quickly as possible and it is shown to be scalable to very large task graphs. The computation of this metric exploits the binding information of the application. The approach will be incorporated into ASML's latest generation of wafer scanners.
Proceedings of the 2013 Euromicro Conference on Digital System Design; 09/2013
ABSTRACT: Much effort has been spent on the optimization of sensor networks, mainly concerning their performance and power efficiency. Furthermore, open communication protocols for the exchange of sensor data have been developed and widely adopted, making sensor data widely available for software applications. However, less attention has been given to the interoperability of sensor networks and sensor network applications at a semantic level. This hinders the reuse of sensor networks in different applications and the evolution of existing sensor networks and their applications. The main contribution of this paper is an ontology-based approach and architecture to address this problem. We developed an ontology that covers concepts regarding examinations as well as measurements, including the circumstances in which the examination and measurement have been performed. The underlying architecture secures a loose coupling at the semantic level to facilitate reuse and evolution. The ontology has the potential of supporting not only correct interpretation of sensor data, but also ensuring its appropriate use in accordance with the purpose of a given sensor network application. The ontology has been specialized and applied in a remote patient monitoring example, demonstrating the aforementioned potential in the e-health domain.
Computational Intelligence in Healthcare and e-health (CICARE), 2013 IEEE SSCI Symposium on, Singapore; 04/2013
ABSTRACT: Dynamic behavior of streaming applications can be effectively modeled by scenario-aware dataflow graphs (SADFs). Many streaming applications must provide timing guarantees (e.g., throughput) to assure their quality-of-service. For instance, a video decoder which is running on a mobile device is expected to deliver a video stream with a specific frame rate. Moreover, the energy consumption of such applications on handheld devices should be as low as possible. This paper proposes a technique to select a suitable multiprocessor DVFS point for each mode (scenario) of a dynamic application described by an SADF. The technique assures strict timing guarantees while minimizing energy consumption. The technique is evaluated by applying it to several streaming applications. It solves the problem faster than the state-of-the-art technique for dataflow graphs. Moreover, the DVFS controller devised using the proposed technique is more compact and reduces energy consumption compared to the controller devised using the counterpart technique.
Proceedings of the 2013 IEEE 19th Real-Time and Embedded Technology and Applications Symposium (RTAS); 04/2013
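A much-simplified sketch of per-scenario DVFS point selection: for each scenario, pick the lowest operating point whose frequency still meets the timing requirement, since lower frequency and voltage mean lower energy. The operating points, cycle counts and cycles-per-frame timing model below are hypothetical stand-ins for the paper's SADF timing analysis:

```python
# Available (frequency_MHz, voltage_V) operating points of a hypothetical
# platform, sorted by frequency and hence by energy per unit of work.
OPP = [(200, 0.9), (400, 1.0), (600, 1.1), (800, 1.2)]

def pick_opp(cycles_per_frame, frame_deadline_ms):
    """Lowest-energy operating point whose frequency still meets the
    per-scenario deadline (a stand-in for strict dataflow timing
    analysis; dynamic energy scales roughly with f * V^2)."""
    for f, v in OPP:
        if cycles_per_frame / (f * 1e3) <= frame_deadline_ms:  # f MHz = f*1e3 cycles/ms
            return (f, v)
    return OPP[-1]  # no point meets the deadline: fall back to the fastest

# Two scenarios of a hypothetical decoder: an easy and a demanding frame type.
print(pick_opp(6_000_000, 33.3))    # easy scenario
print(pick_opp(20_000_000, 33.3))   # demanding scenario
```

Keeping one pre-computed point per scenario is what makes the resulting run-time DVFS controller compact: at a scenario switch it only looks up the stored operating point.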
ABSTRACT: There is a strong push towards smart buildings that aim to achieve comfort, safety and energy efficiency, through building automation systems (BAS) that incorporate multiple subsystems such as heating and air-conditioning, lighting and access control. The design, commissioning and operation of BAS are already challenging when handling an individual subsystem; however, when introducing co-operation between systems the complexity increases dramatically. Balancing the contradictory requirements of comfort, safety and energy efficiency, and coping with the dynamics of constantly changing environmental conditions, usage patterns and user needs, is a demanding task. This paper outlines an approach to the systematic engineering of cooperating, adaptive building automation systems, which aims to formalize the engineering approach in the form of an integrated tool chain that supports the building stakeholders to produce site-specific robust and reliable building automation.
Industrial Electronics Society, IECON 2013 - 39th Annual Conference of the IEEE; 01/2013
ABSTRACT: This paper presents a collaborative procedure for multiobjective global routing. Our procedure takes multiple global routing solutions, which are generated independently (e.g., by one router that runs in different modes concurrently or by different routers running in parallel), as input. It then performs multiobjective optimization based on Pareto algebra and quickly generates multiple global routing solutions with a tradeoff between the considered objectives. The user can control the number of generated solutions and the degree of exploring the tradeoff between them by constraining the maximum allowable degradation in each objective. This paper then considers the following three multiobjective case studies: 1) minimization of interconnect power and wirelength; 2) minimization of routing congestion and wirelength; and 3) minimization of wirelength with respect to the (finite-capacity) routing resources. The maximum allowable degradation in wirelength is specified in all cases. Our multiobjective procedure runs in only a few minutes for each of the International Symposium on Physical Design 2008 benchmarks, even the unroutable ones, which imposes a tolerable overhead in the design flow. In our simulations, we demonstrate the effectiveness of our procedure using five modern academic global routers.
IEEE Transactions on Very Large Scale Integration (VLSI) Systems 01/2013; 21(7):1308-1321.
ABSTRACT: This paper provides an overview of the architecture for self-organizing, co-operative and robust Building Automation Systems (BAS) proposed by the EC-funded FP7 SCUBA project. We describe the current situation in monitoring and control systems and outline the typical stakeholders involved in the case of building automation systems. We derive seven typical use cases which will be demonstrated and evaluated on pilot sites. From these use cases, the project designed an architecture relying on six main modules that realize the design, commissioning and operation of self-organizing, co-operative, robust BAS.
Industrial Electronics Society, IECON 2013 - 39th Annual Conference of the IEEE; 01/2013
ABSTRACT: Software plays an increasingly important role in modern embedded systems, leading to a rapid increase in design complexity. Model-driven exploration of design alternatives leads to shorter, more predictable development times and better controlled product quality.
Proceedings of the 10th international conference on Formal Modeling and Analysis of Timed Systems; 09/2012
ABSTRACT: Wireless sensor networks are typically operating in a dynamic context where events, such as moving sensor nodes and changing external interference, constantly impact the quality-of-service of the network. We present a distributed feedback control mechanism that actively balances multiple conflicting network-wide quality metrics, such as power consumption and end-to-end packet latency, for a heterogeneous wireless sensor network operating in a dynamic context. Nodes constantly decide if and how to adapt controllable parameters of the entire protocol stack, using sufficient information of the current network state. Using experiments with an actual deployment we show that our controller allows the network to maintain the required network-wide quality-of-service with up to 30% less power consumed, compared to the most applicable (re-)configuration approaches.
Proceedings of the 2012 15th Euromicro Conference on Digital System Design; 09/2012
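The flavour of such a feedback mechanism can be sketched with a single-metric proportional controller that trades radio duty cycle (and hence power) against end-to-end latency. The plant model, gains and setpoint below are invented for illustration and far simpler than the paper's distributed multi-metric controller:

```python
def control_step(duty, latency, target, kp=0.01, lo=0.05, hi=1.0):
    """One proportional control step: raise the radio duty cycle when
    latency exceeds its target, lower it (saving power) when there is
    slack. Output is clamped to the feasible duty-cycle range."""
    error = latency - target
    return min(hi, max(lo, duty + kp * error))

def plant(duty):
    # Toy plant: end-to-end latency (ms) shrinks as nodes listen more.
    return 20.0 / duty

duty = 0.2
for _ in range(200):
    duty = control_step(duty, plant(duty), target=50.0)
print(round(duty, 2), round(plant(duty), 1))
```

The loop settles at the lowest duty cycle that still meets the 50 ms latency target, which mirrors the abstract's goal of maintaining quality-of-service while minimizing power.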
ABSTRACT: Multirate digital signal processing (DSP) algorithms are often modeled with synchronous dataflow graphs (SDFGs). A lower iteration period implies a faster execution of a DSP algorithm. Retiming is a simple but efficient graph transformation technique for performance optimization, which can decrease the iteration period without affecting functionality. In this paper, we deal with two problems: feasible retiming—retiming an SDFG to meet a given iteration period constraint, and optimal retiming—retiming an SDFG to achieve the smallest iteration period. We present a novel algorithm for feasible retiming and, based on it, a new algorithm for optimal retiming, and prove their correctness. Both methods work directly on SDFGs, without explicitly converting them to their equivalent homogeneous SDFGs. Experimental results show that our methods give a significant improvement compared to the earlier methods.
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 06/2012; 31(6):831-844.
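Classical retiming moves delays (tokens) across actors without changing functionality: the retimed delay of an edge u→v is w(e) + r(v) − r(u). The homogeneous-graph sketch below illustrates the effect on a tiny invented three-actor cycle; note the paper itself works on SDFGs directly, without this homogeneous form:

```python
def period(nodes, edges, r):
    """Iteration period after retiming r on a homogeneous graph: the
    longest chain of execution times along edges whose retimed delay
    w(e) + r(v) - r(u) is zero."""
    wr = {(u, v): w + r[v] - r[u] for (u, v), w in edges.items()}
    assert all(w >= 0 for w in wr.values()), "retiming must be legal"
    # Longest zero-delay path by Bellman-Ford-style relaxation (graph is tiny).
    best = dict(nodes)  # best path ending at each node, seeded with its own time
    for _ in nodes:
        for (u, v), w in wr.items():
            if w == 0:
                best[v] = max(best[v], best[u] + nodes[v])
    return max(best.values())

nodes = {"a": 1, "b": 1, "c": 1}                      # execution times
edges = {("a", "b"): 0, ("b", "c"): 0, ("c", "a"): 3}  # edge delays
print(period(nodes, edges, {"a": 0, "b": 0, "c": 0}))  # before retiming
print(period(nodes, edges, {"a": 0, "b": 1, "c": 2}))  # after retiming
```

Before retiming the zero-delay chain a→b→c gives a period of 3; the retiming spreads the three delays over the cycle so every actor is decoupled, reaching the cycle's iteration bound of (1+1+1)/3 = 1.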
ABSTRACT: This paper presents an exact method and a heuristic method for static rate-optimal multiprocessor scheduling of real-time multirate DSP algorithms represented by synchronous dataflow graphs (SDFGs). Through exploring the state-space generated by a self-timed execution (STE) of an SDFG, a static rate-optimal schedule via explicit retiming and implicit unfolding can be found by our exact method. By constraining the number of concurrent firings of actors of an STE, the number of processors used in a schedule can be limited. Using this, we present a heuristic method for processor-constrained rate-optimal scheduling of SDFGs. Neither method explicitly converts an SDFG to its equivalent homogeneous SDFG. Our experimental results show that the exact method gives a significant improvement compared to the existing methods, while our heuristic method further reduces the number of processors used.
18th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS 2012), Beijing, China; 04/2012
ABSTRACT: Scenario-aware dataflow graphs (SADFs) efficiently model dynamic applications. The throughput of an application is an important metric to determine the performance of the system. For example, the number of frames per second output by a video decoder should always stay above a threshold that determines the quality of the system. During design-space exploration (DSE) or run-time management (RTM), numerous throughput calculations have to be performed, as fast as possible. For synchronous dataflow graphs (SDFs), a technique exists that extracts throughput expressions from a parameterized SDF in which the execution time of the tasks (actors) is a function of some parameters. Evaluation of these expressions can be done in a negligible amount of time and provides the throughput for a specific set of parameter values. This technique is not applicable to SADFs. In this paper, we present a technique, based on Max-Plus automata, that finds throughput expressions for a parameterized SADF. Experimental evaluation shows that our technique can be applied to realistic applications. These results also show that our technique is more scalable and faster compared to the available parametric throughput analysis technique for SDFs.
Computer Design (ICCD), 2012 IEEE 30th International Conference on; 01/2012
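In max-plus terms, the throughput of a strongly connected homogeneous dataflow graph is the reciprocal of the max-plus eigenvalue λ of its timing matrix, which equals the maximum cycle mean of the corresponding precedence graph. A sketch using Karp's algorithm on a hypothetical three-actor graph (the graph and weights are invented for illustration):

```python
def max_cycle_mean(n, edges):
    """Karp's algorithm: maximum cycle mean of a strongly connected
    weighted digraph with nodes 0..n-1 and edges (u, v, weight).
    For a homogeneous dataflow graph in max-plus algebra this is the
    eigenvalue lambda; the throughput is 1/lambda."""
    NEG = float("-inf")
    # D[k][v] = maximum weight of an edge progression of length k ending in v
    D = [[NEG] * n for _ in range(n + 1)]
    D[0][0] = 0.0   # arbitrary start node (valid since the graph is strongly connected)
    for k in range(1, n + 1):
        for u, v, w in edges:
            if D[k - 1][u] > NEG:
                D[k][v] = max(D[k][v], D[k - 1][u] + w)
    lam = NEG
    for v in range(n):
        if D[v] is not None and D[n][v] > NEG:
            lam = max(lam, min((D[n][v] - D[k][v]) / (n - k)
                               for k in range(n) if D[k][v] > NEG))
    return lam

# Hypothetical 3-actor graph: edge weights model actor execution times.
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 0, 3.0), (1, 0, 4.0)]
lam = max_cycle_mean(3, edges)
print(lam, 1.0 / lam)   # max-plus eigenvalue and the resulting throughput
```

Here the critical cycle 0→1→0 has mean (2+4)/2 = 3, dominating the three-actor cycle of mean 2, so the graph completes one iteration every 3 time units.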
ABSTRACT: Synchronous dataflow graphs (SDFGs) are used extensively to model streaming applications. An SDFG can be extended with scheduling decisions, allowing SDFG analysis to obtain properties like throughput or buffer sizes for the scheduled graphs. Analysis times depend strongly on the size of the SDFG. SDFGs can be statically scheduled using static-order schedules. The only generally applicable technique to model a static-order schedule in an SDFG is to convert it to a homogeneous SDFG (HSDFG). This conversion may lead to an exponential increase in the size of the graph and to suboptimal analysis results (e.g., for buffer sizes in multiprocessors). We present a technique to model periodic static-order schedules directly in an SDFG. Experiments show that our technique produces more compact graphs compared to the technique that relies on a conversion to an HSDFG. This results in reduced analysis times for performance properties and tighter resource requirements.