ACM Transactions on Sensor Networks

Published by Association for Computing Machinery
Online ISSN: 1550-4859
Article
We present a new approach to localization of sensors from noisy measurements of a subset of their Euclidean distances. Our algorithm starts by finding, embedding, and aligning uniquely realizable subsets of neighboring sensors called patches. In the noise-free case, each patch agrees with its global positioning up to an unknown rigid motion of translation, rotation, and possibly reflection. The reflections and rotations are estimated using the recently developed eigenvector synchronization algorithm, while the translations are estimated by solving an overdetermined linear system. The algorithm is scalable as the number of nodes increases and can be implemented in a distributed fashion. Extensive numerical experiments show that it compares favorably to other existing algorithms in terms of robustness to noise, sparse connectivity, and running time. While our approach is applicable to higher dimensions, in the current article, we focus on the two-dimensional case.
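As a concrete illustration of the synchronization step described above, the following minimal sketch recovers patch reflections with the leading-eigenvector method. It assumes the pairwise relative reflections between overlapping patches have already been estimated; the function names and toy data are illustrative, not the authors' implementation.

```python
# Minimal sketch of Z2 (reflection) synchronization via the leading
# eigenvector, assuming pairwise relative reflections z_ij in {+1, -1}
# have already been estimated from overlapping patches.
import numpy as np

def synchronize_reflections(n_patches, pairwise):
    """pairwise: dict mapping (i, j) with i < j to the measured relative
    reflection z_ij = z_i * z_j (corrupted by noise in practice)."""
    Z = np.zeros((n_patches, n_patches))
    for (i, j), z in pairwise.items():
        Z[i, j] = Z[j, i] = z
    # The leading eigenvector of Z is robust to noisy/missing entries;
    # its signs estimate each patch's reflection up to a global flip.
    eigvals, eigvecs = np.linalg.eigh(Z)
    v = eigvecs[:, -1]                 # eigenvector of largest eigenvalue
    return np.where(v >= 0, 1, -1)

# Toy example: 4 patches with true reflections [+1, -1, +1, -1].
truth = np.array([1, -1, 1, -1])
pairs = {(i, j): truth[i] * truth[j] for i in range(4) for j in range(i + 1, 4)}
print(synchronize_reflections(4, pairs))   # recovers truth up to global sign
```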
 
The distance metric between two paths P and P′. In this figure we adopt a uniform parametrization, and the samples are placed uniformly on the paths.
The real path P taken by a packet from s is different from the path P′ it should have taken if it were generated from the claimed location s′.
As the distance between the claimed and true location increases, so does the difference between the resulting hash values.
The beliefs generated by each node along the path, as the adversary's claimed distance increases.
The percentage of packets accepted by the sink node as a function of a node's distance claim, measured in hops from its true location. The beliefs received at the sink must exceed the given threshold value to be accepted.
Conference Paper
Location information is of essential importance in sensor networks deployed for generating location-specific event reports. When such networks operate in hostile environments, it becomes imperative to guarantee the correctness of event location claims. In this article we address the problem of assessing location claims of untrusted (potentially compromised) nodes. The mechanisms introduced here prevent a compromised node from generating illicit event reports for locations other than its own. This is important because, by compromising “easy target” sensors (say, sensors on the perimeter of the field that are easier to access), the adversary should not be able to impact data flows associated with other (“premium target”) regions of the network. To achieve this goal, in a process we call location certification, data routed through the network is “tagged” by participating nodes with “belief” ratings, collaboratively assessing the probability that the claimed source location is indeed correct. The effectiveness of our solution relies on the joint knowledge of participating nodes to assess the truthfulness of claimed locations. By collaboratively generating and propagating a set of “belief” ratings with transmitted data and event reports, the network allows authorized parties (e.g., final data sinks) to evaluate a metric of trust for the claimed location of such reports. Belief ratings are derived from a data model of observed past routing activity. The solution is shown to feature a strong ability to detect false location claims and compromised nodes. For example, incorrect claims as small as 2 hops (from the actual location) are detected with over 90% accuracy. Finally, these new location certification mechanisms can be deployed in tandem with traditional secure localization, yet do not require it, and, in a sense, can serve to minimize the need thereof.
 
Arguing about the covered area.
Conference Paper
Several routing schemes in ad hoc networks first establish a virtual backbone and then route messages via backbone nodes. One common way of constructing such a backbone is based on the construction of a minimum connected dominating set (CDS). In this paper we present a very simple distributed algorithm for computing a small CDS. Our algorithm has an approximation factor of at most 6.91, improving upon the previous best known approximation factor of 8 due to Wan et al. [INFOCOM'02]. The improvement relies on a refined analysis of the relationship between the size of a maximal independent set and a minimum CDS in a unit disk graph. This subresult also implies improved approximation factors for many existing algorithms.
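For readers unfamiliar with the MIS-to-CDS pattern this analysis builds on, here is a minimal, centralized sketch of the generic heuristic: compute a maximal independent set of the unit disk graph, then add connector nodes. This illustrates the general technique only, not the paper's distributed 6.91-approximation algorithm.

```python
# Generic MIS-then-connect sketch for a connected dominating set in a
# unit disk graph (centralized, for illustration only).
import itertools
import math

def unit_disk_graph(points, radius=1.0):
    n = len(points)
    adj = {i: set() for i in range(n)}
    for i, j in itertools.combinations(range(n), 2):
        if math.dist(points[i], points[j]) <= radius:
            adj[i].add(j); adj[j].add(i)
    return adj

def greedy_mis(adj):
    mis, blocked = set(), set()
    for v in sorted(adj):              # any fixed order yields a maximal IS
        if v not in blocked:
            mis.add(v)
            blocked |= adj[v] | {v}
    return mis

def connect_mis(adj, mis):
    # Add "connector" nodes adjacent to two or more MIS nodes
    # (simplified, centralized view of the connection phase).
    cds = set(mis)
    for v in adj:
        if v not in mis and len(adj[v] & mis) >= 2:
            cds.add(v)
    return cds

pts = [(0, 0), (0.8, 0), (1.6, 0), (2.4, 0)]
adj = unit_disk_graph(pts)
print(connect_mis(adj, greedy_mis(adj)))   # e.g. {0, 1, 2}
```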
 
Conference Paper
Low duty cycle operation is critical to conserve energy in wireless sensor networks. Traditional wake-up scheduling approaches either require periodic synchronization messages or incur high packet delivery latency due to the lack of any synchronization. In this paper, we present the design of a new low duty-cycle MAC layer protocol called Convergent MAC (CMAC). CMAC avoids synchronization overhead while supporting low latency. By using zero communication when there is no traffic, CMAC allows operation at very low duty cycles. When carrying traffic, CMAC first uses anycast to wake up forwarding nodes, and then converges from route-suboptimal anycast with unsynchronized duty cycling to route-optimal unicast with synchronized scheduling. To validate our design and provide a usable module for the community, we implement CMAC in TinyOS and evaluate it on the Kansei testbed consisting of 105 XSM nodes. The results show that CMAC at 1% duty cycle significantly outperforms BMAC at 1% in terms of latency, throughput and energy efficiency. We also compare CMAC with other protocols using simulations. The results show for 1% duty cycle, CMAC exhibits similar throughput and latency as CSMA/CA using much less energy, and outperforms SMAC and GeRaF in all aspects.
 
One-dimensional node placement.  
Deriving the R_max constraint in 2D placement. For any of the Voronoi cells, the furthest points are the corners of the cell.  
As the number of nodes per spoke (n_radial) is increased, gains initially improve, but eventually R̄_max becomes too constrained and the gains decrease.  
As the number of nodes placed in the network is increased, the gains increase.  
The power gains at the bottleneck node are very large, with a 50× improvement for large N.  
Conference Paper
We consider the joint optimization of sensor placement and transmission structure for data gathering, where a given number of nodes need to be placed in a field such that the sensed data can be reconstructed at a sink within specified distortion bounds while minimizing the energy consumed for communication. We assume that the nodes use joint entropy coding based on explicit communication between sensor nodes, and consider both maximum and average distortion bounds. The optimization is complex since it involves an interplay between the spaces of possible transmission structures given radio reachability limitations, and feasible placements satisfying distortion bounds. We address this problem by first looking at the simplified problem of optimal placement in the one-dimensional case. An analytical solution is derived for the case when there is a simple aggregation scheme, and numerical results are provided for the cases when joint entropy encoding is used. We use the insight from our 1-D analysis to extend our results to the 2-D case, and show that our algorithm for two-dimensional placement and transmission structure provides significant power benefit over a commonly used combination of uniformly random placement and shortest path trees.
 
The Voronoi cell of a sensor node y is enclosed inside a disk of radius 2 and contains a disk of radius 1/2.
Total communication cost in grid networks of various sizes.
Coverage of phase 4.
Propagation of one piece of data from the node located in the center of the field.
The spiral used to respond to a given query region. Nodes are visited individually in the shaded region at the perimeter. The figure also shows the maximal square B_i(x) for a node x of maximal level i, and the corresponding pollution region G_i(x).  
Conference Paper
In this paper we propose a lightweight algorithm for constructing multi-resolution data representations for sensor networks. We compute, at each sensor node u, O(log n) aggregates about exponentially enlarging neighborhoods centered at u. The ith aggregate is the aggregated data among nodes approximately within 2^i hops of u. We present a scheme, named the hierarchical spatial gossip algorithm, to extract and construct these aggregates, for all sensors simultaneously, with a total communication cost of O(n polylog n). The hierarchical gossip algorithm adopts atomic communication steps with each node choosing to exchange information with a node at distance d away with probability 1/d^3. The attractiveness of the algorithm lies in its simplicity, low communication cost, distributed nature, and robustness to node and link failures. Besides the natural applications of multi-resolution data summaries in data validation and information mining, we also demonstrate the application of the pre-computed spatial multi-resolution data summaries in answering range queries efficiently.
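The atomic gossip step is concrete enough to sketch: a node samples an exchange partner at distance d with probability proportional to 1/d^3. The sketch below assumes global knowledge of positions purely for illustration; in the distributed setting each node would sample from its own neighborhood estimates.

```python
# Sketch of the atomic spatial-gossip step: pick an exchange partner at
# distance d with probability proportional to 1/d^3.
import math
import random

def pick_gossip_partner(me, positions):
    """positions: dict node_id -> (x, y). Returns a partner id sampled
    with weight d(me, v)^-3, the spatial-gossip distribution."""
    others = [v for v in positions if v != me]
    weights = [math.dist(positions[me], positions[v]) ** -3 for v in others]
    return random.choices(others, weights=weights, k=1)[0]

positions = {i: (random.random() * 10, random.random() * 10) for i in range(50)}
partner = pick_gossip_partner(0, positions)
print("node 0 gossips with node", partner)
```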
 
Conference Paper
Conserving the energy for motion is an important yet not well-addressed problem in mobile sensor networks. In this paper, we study the problem of optimizing sensor movement for energy efficiency. We adopt a complete energy model to characterize the entire energy consumption in movement. Based on the model, we propose an optimal velocity schedule for minimizing energy consumption when the road condition is uniform, and a near-optimal velocity schedule for variable road conditions using continuous-state dynamic programming. Considering the variety in motion hardware, we also design one velocity schedule for simple microcontrollers and another for relatively complex microcontrollers. Simulation results show that our velocity planning can have a significant impact on energy conservation.
 
Voronoi-Laguerre polygon partially covered by its generating sensor. 
Iterative reduction of the sensing radius of sensor s_1 to the farthest vertex of its Voronoi-Laguerre polygon. 
Strict (a) and loose (b) farthest vertices 
Reduction of the sensing radius in a situation of loose boundary farthest vertex. 
About Pareto optimality. Initial configuration (a). Selective activation with DLM (b) and SARA (c). The nodes with double circles are awake, while the others are sleeping. 
Article
In order to prolong the lifetime of a wireless sensor network (WSN) devoted to monitoring an area of interest, a useful means is to exploit network redundancy, activating only the sensors that are strictly necessary for coverage and making them work with the minimum necessary sensing radius. In this article, we introduce the first algorithm that reduces sensor coverage redundancy through joint Sensor Activation and sensing Radius Adaptation (SARA) in general application scenarios comprising two classes of devices: sensors with variable sensing radius and sensors with fixed sensing radius. This device heterogeneity is explicitly addressed by modeling the coverage problem through Voronoi-Laguerre diagrams that, differently from Voronoi diagrams, allow for correctly identifying each sensor coverage region depending on the sensor current radius and the radii of its neighboring nodes. SARA executes quickly with guaranteed termination and, given the currently available nodes, it always guarantees maximum coverage. By means of extensive simulations, we show that SARA obtains remarkable improvements with respect to previous solutions, ensuring, in networks with heterogeneous nodes, longer network lifetime and wider coverage.
 
Article
We consider a small extent sensor network for event detection, in which nodes take samples periodically and then contend over a random access network to transmit their measurement packets to the fusion center. We consider two procedures at the fusion center to process the measurements. The Bayesian setting is assumed; i.e., the fusion center has a prior distribution on the change time. In the first procedure, the decision algorithm at the fusion center is network-oblivious and makes a decision only when a complete vector of measurements taken at a sampling instant is available. In the second procedure, the decision algorithm at the fusion center is network-aware and processes measurements as they arrive, but in a time-causal order. In this case, the decision statistic depends on the network delays as well, whereas in the network-oblivious case, the decision statistic does not depend on the network delays. This yields a Bayesian change detection problem with a tradeoff between the random network delay and the decision delay; a higher sampling rate reduces the decision delay but increases the random access delay. Under periodic sampling, in the network-oblivious case, the structure of the optimal stopping rule is the same as that without the network, and the optimal change detection delay decouples into the network delay and the optimal decision delay without the network. In the network-aware case, the optimal stopping problem is analysed as a partially observable Markov decision process, in which the states of the queues and delays in the network need to be maintained. A sufficient statistic for decision is found to be the network state and the posterior probability of change having occurred given the measurements received and the state of the network. The optimal regimes are studied using simulation.
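The Bayesian recursion underlying such change detection can be sketched with the standard Shiryaev posterior update, consistent with the geometric prior on the change time assumed above. The Gaussian pre- and post-change densities and all parameter values are illustrative assumptions.

```python
# Standard Shiryaev-style posterior recursion for Bayesian change
# detection with a geometric prior (parameter rho) on the change time.
# Gaussian mean-shift densities f0/f1 are assumptions for illustration.
import math

def normal_pdf(x, mu, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def shiryaev_posterior(samples, rho=0.05, mu0=0.0, mu1=1.0):
    p = 0.0                             # P(change already happened)
    for x in samples:
        prior = p + (1 - p) * rho       # change may occur before this sample
        num = prior * normal_pdf(x, mu1)
        den = num + (1 - prior) * normal_pdf(x, mu0)
        p = num / den
        yield p

# Declare a change when the posterior crosses a threshold.
data = [0.1, -0.3, 0.2, 1.1, 0.9, 1.4, 1.2, 1.6, 0.8, 1.3]
for k, p in enumerate(shiryaev_posterior(data)):
    if p > 0.95:
        print("change declared at sample", k)
        break
```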
 
Table (stretch 2): Region | Max Radius | Points | Edges Removed | Edges Required | Stretch | Savings 
Table (stretch 3): Region | Max Radius | Points | Edges Removed | Edges Required | Stretch | Savings 
A geometric spanner edge and its possible replacement path
Conference Paper
This paper presents an algorithm for constructing a spanner for ad hoc networks whose nodes have variable transmission range. Almost all previous spanner constructions for ad hoc networks assumed that all nodes in the network have the same transmission range. This allowed a succinct representation of the network as a unit disk graph, serving as the basis for the construction. In contrast, when nodes have variable transmission range, the ad hoc network must be modeled by a general disk graph. Whereas unit disk graphs are undirected, general disk graphs are directed. This complicates the construction of a spanner for the network, since currently there are no efficient constructions of low-stretch spanners for general directed graphs. Nevertheless, in this paper it is shown that the class of disk graphs enjoys (efficiently constructible) spanners of quality similar to that of unit disk graph spanners. Moreover, it is shown that the new construction can be done in a localized fashion.
 
Article
We study the power-aware buffering problem in battery-powered sensor networks, focusing on the fixed-size and fixed-interval buffering schemes. The main motivation is to address the as-yet poorly understood effect of data-size variation on power-aware buffering schemes. Our theoretical analysis elucidates the fundamental differences between the fixed-size and fixed-interval buffering schemes in the presence of data-size variation. It shows that data-size variation has detrimental effects on the power expenditure of fixed-size buffering in general, and reveals that the variation-induced effects can be either mitigated by a positive skewness or promoted by a negative skewness in the size distribution. By contrast, the fixed-interval buffering scheme has the obvious advantage of being eminently immune to data-size variation. Hence the fixed-interval buffering scheme is a risk-averse strategy for its robustness in a variety of operational environments. In addition, based on the fixed-interval buffering scheme, we establish the power consumption relationship between child nodes and the parent node in a static data collection tree, and give an in-depth analysis of the impact of the child bandwidth distribution on the parent's power consumption. This study is of practical significance: it sheds new light on the relationship among the power consumption of buffering schemes, the power parameters of the radio module and memory bank, the data arrival rate, and data-size variation, thereby providing well-informed guidance in determining an optimal buffer size (interval) to maximize the operational lifespan of sensor networks.
 
Conference Paper
Recent work has shown that rising temperatures are increasing failures and reducing integrated circuit reliability. Although such results have prompted development of thermal management policies for stand-alone processors and on distributed power management, there is an overall lack of research on thermal management policies and their tradeoffs in sensor networks where sensors can overheat due to excessive sampling. Our primary focus in this paper is to examine the relationship between sampling, number of sensors, sensor node temperature, and state estimation error. We devise a scheduling algorithm which can achieve a desired real-time performance constraint while maintaining a thermal limit on temperature at all nodes in a network. Analytical results and experimentation are done for estimation with a Kalman filter for simplicity, but our main contributions should easily extend to any form of estimation with measurable error.
 
Article
We consider the problem of fusing measurements from multiple sensors, where the sensing regions overlap and data are non-negative---possibly resulting from a count of indistinguishable discrete entities. Because of overlaps, it is, in general, impossible to fuse this information to arrive at an accurate estimate of the overall amount or count of material present in the union of the sensing regions. Here we study the range of overall values consistent with the data. Posed as a linear programming problem, this leads to interesting questions associated with the geometry of the sensor regions, specifically, the arrangement of their non-empty intersections. We define a computational tool called the fusion polytope and derive a condition for this to be in the positive orthant thus simplifying calculations. We show that, in two dimensions, inflated tiling schemes based on rectangular regions fail to satisfy this condition, whereas inflated tiling schemes based on hexagons do.
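Posed as a linear program, the range of consistent totals can be computed directly. The sketch below uses scipy on an assumed two-sensor arrangement with one overlap cell; minimizing and maximizing the same objective gives the interval of overall amounts consistent with the data.

```python
# Range of overall amounts consistent with overlapping sensor counts,
# posed as two linear programs over the cells of the intersection
# arrangement. The two-sensor example is an illustrative assumption.
import numpy as np
from scipy.optimize import linprog

# Cells of the arrangement for two overlapping sensors: [only-A, A∩B, only-B].
# Each sensor reports the total amount in its region.
A_eq = np.array([[1.0, 1.0, 0.0],     # sensor A covers only-A and A∩B
                 [0.0, 1.0, 1.0]])    # sensor B covers A∩B and only-B
counts = np.array([5.0, 4.0])
total = np.ones(3)                    # objective: x_a + x_ab + x_b

lo = linprog(c=total, A_eq=A_eq, b_eq=counts, bounds=[(0, None)] * 3)
hi = linprog(c=-total, A_eq=A_eq, b_eq=counts, bounds=[(0, None)] * 3)
print("overall amount lies in [%.1f, %.1f]" % (lo.fun, -hi.fun))  # [5.0, 9.0]
```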
 
Article
Participatory sensing is a powerful paradigm that leverages information sent by smartphone users to collect fine-grained information on events of interest. Given that participatory sensing applications rely completely on the users' willingness to submit up-to-date and reliable information, it is paramount to effectively incentivize users' active and reliable participation. In this paper, we survey existing literature on incentive mechanisms for participatory sensing systems. In particular, we present a systematic and comprehensive taxonomy of existing incentive mechanisms for participatory sensing systems, which are subsequently discussed and compared in depth. Finally, we discuss open problems and future research directions.
 
Multiple BANs coexistence probability
BCH coding gain
Simulated SINR vs. Approximated SINR
Article
In this paper, we enable the coexistence of multiple wireless body area networks (BANs) using a finite repeated non-cooperative game for transmit power control. With no coordination amongst these personal sensor networks, the proposed game maximizes each network's packet delivery ratio (PDR) at low transmit power. In this context we provide a novel utility function, which gives reduced benefit to players with higher transmission power, and a subsequent reduction in radio interference to other coexisting BANs. Considering the purpose of inter-BAN interference mitigation, PDR is expressed as a compressed exponential function of inverse signal-to-interference-and-noise ratio (SINR), so it is essentially a function of transmit powers of all coexisting BANs. It is shown that a unique Nash Equilibrium (NE) exists, and hence there is a subgame-perfect equilibrium, considering best-response at each stage independent of history. In addition, the NE is proven to be the socially optimal solution across all action profiles. Realistic and extensive on- and inter-body channel models are employed. Results confirm the effectiveness of the proposed scheme in better interference management, greater reliability and reduced transmit power, when compared with other schemes that can be applied in BANs.
 
Article
Many analytic results for the connectivity, coverage, and capacity of wireless networks have been reported for the case where the number of nodes, n, tends to infinity (large-scale networks). The majority of these results have not been extended to small or moderate values of n, whereas in many practical networks n is not very large. In this paper, we consider finite (small-scale) wireless sensor networks. We first show that previous asymptotic results provide poor approximations for such networks. We provide a set of differences between small-scale and large-scale analysis and propose a methodology for the analysis of finite sensor networks. Furthermore, we consider two models for such networks: unreliable sensor grids, and sensor networks with random node deployment. We provide easily computable expressions for bounds on the coverage and connectivity of these networks. With validation from simulations, we show that the derived analytic expressions give very good estimates of such quantities for finite sensor networks. Our investigation confirms the fact that small-scale networks possess unique characteristics different from their large-scale counterparts, necessitating the development of a new framework for their analysis and design.
 
Percentage of sensors in the largest connected component for all awake WSNs adopting the lowest possible radius for connectivity, i.e., the RGG radius.  
F_δ with each point shown surrounded by a box of the form B_{δ/2}. A_l, A_r, and A_t are also shown.
Article
We investigate the condition on transmission radius needed to achieve connectivity in duty-cycled wireless sensor networks (briefly, DC-WSNs). First, we settle a conjecture of Das et al. (2012) and prove that the connectivity condition on Random Geometric Graphs (RGGs), given by Gupta and Kumar (1998), yields a weak sufficient condition to achieve connectivity in DC-WSNs. To find a stronger result, we define a new vertex-based random connection model which is of independent interest. Following a proof technique of Penrose (1991), we prove that as the density of the nodes approaches infinity, a finite component of size greater than one exists with probability 0 in this model. We use this result to obtain a condition on node transmission radius that is both necessary and sufficient to achieve connectivity, and hence optimal. The optimality of such a radius is also tested via simulation for two specific duty-cycle schemes, called the contiguous and the random selection duty-cycle schemes. Finally, we design a minimum-radius duty-cycling scheme that achieves connectivity with a transmission radius arbitrarily close to the one required in Random Geometric Graphs. The overhead in this case is that some time must be spent computing the schedule.
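The Gupta-Kumar condition referenced above is easy to test empirically. A small simulation sketch (using networkx; parameter values are illustrative) checks how often a random geometric graph is connected at the threshold radius:

```python
# Simulation sketch of the RGG connectivity threshold: n uniform points
# in the unit square are connected w.h.p. once pi * r^2 * n >= log n + c
# with c large (Gupta-Kumar condition). Values are illustrative.
import math
import random
import networkx as nx

def rgg_connected(n, c=2.0):
    r = math.sqrt((math.log(n) + c) / (math.pi * n))
    pts = {i: (random.random(), random.random()) for i in range(n)}
    g = nx.random_geometric_graph(n, r, pos=pts)
    return nx.is_connected(g)

trials = [rgg_connected(500) for _ in range(20)]
print("connected in %d/20 trials" % sum(trials))
```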
 
Conference Paper
We consider the problem of clock synchronization in a wireless setting where processors must minimize the number of times their radios are used, in order to save energy. Energy efficiency is a central goal in wireless networks, especially if energy resources are severely limited, as occurs in sensor and ad-hoc networks, and in many other settings. The problem of clock synchronization is fundamental and intensively studied in the field of distributed algorithms. In the current setting, the problem is to synchronize clocks of m processors that wake up at arbitrary time points, such that the maximum difference between wake-up times is bounded by a positive integer n. (Time intervals are appropriately discretized to allow communication of all processors that are awake in the same discrete time unit.) Currently, the best-known result for synchronization in single-hop networks of m processors is a randomized algorithm due to Bradonjic, Kohler and Ostrovsky [2] with O(√(n/m) · polylog(n)) radio-use times per processor, and a lower bound of Ω(√(n/m)). The main open question left in their work is to close the polylog gap between the upper and the lower bound and to de-randomize their probabilistic construction and eliminate error probability. This is exactly what we do in this paper. That is, we show a deterministic algorithm with radio use of Θ(√(n/m)), which exactly matches the lower bound proven in [2], up to a small multiplicative constant. Therefore, our algorithm is optimal in terms of energy efficiency and completely resolves a long sequence of works in this area [2, 11-14]. Moreover, our algorithm is optimal in terms of running time as well. In order to achieve these results we devise a novel adaptive technique that determines the times when devices power their radios on and off. This technique may be of independent interest.
 
Flow of Calibration Algorithm
Springbrook 2011 Fall Data (a value chosen based on the amount of existing data; ideally this would increase as more data arrives). Starting with the scenario where the model uses environmental variables and no neighbors' information, we add three possibilities to the measured values: (1) recalibrating when the error exceeds a threshold and at least four days of new data exist in the matrix, (2) including the prediction error in the calibration matrix, and (3) including the first derivative of the solar current in the calibration matrix. Because we do not know which other variables correlate most with the solar current, we vary all parameters and run the model over each possibility. To determine which combination provides the best prediction, we evaluate the predicted time series using the root mean square error (RMSE) between the predicted and observed values, as well as the largest absolute error. Table III outlines the combinations with the best results over both metrics. To understand the bounds and convergence of the model, we compute the mean of the error between observed and predicted values (also known as the mean residual) and the 95% confidence interval around this residual. The confidence interval provides a probabilistic bound on how much the residual will vary from the mean, allowing us to provide probabilistic bounds on the error and the convergence of the model. Equation 4 outlines the computation of the confidence interval [Ramsey and Schafer 2002].
Springbrook 2011 Deployment with Cluster Nodes Indicated by Node ID
Results of Spatial Analysis: Note that bars represent RMSE values and the dotted line represents Max Error
Article
Long-term sensor network deployments demand careful power management. While managing power requires understanding the amount of energy harvestable from the local environment, current solar prediction methods rely only on recent local history, which makes them susceptible to high variability. In this paper, we present a model and algorithms for distributed solar current prediction, based on multiple linear regression to predict future solar current from local, in-situ climatic and solar measurements. These algorithms leverage spatial information from neighbors and adapt to changing local conditions not captured by global climatic information. We implement these algorithms on our Fleck platform and run a 7-week-long experiment validating our work. In analyzing the results from this experiment, we determined that computing our model requires an increased energy expenditure of 4.5 mJ over simpler models (on the order of 10^-7% of the harvested energy) to gain a prediction improvement of 39.7%.
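A minimal sketch of such a multiple-linear-regression predictor is below; the feature set (temperature, humidity, own and neighbor solar current) and the synthetic data are assumptions for illustration, not the deployed model.

```python
# Minimal multiple-linear-regression predictor: future solar current
# regressed on local climatic features plus a neighbor's measurement.
import numpy as np

def fit_predictor(X, y):
    """Least-squares fit of y ~ X @ beta + intercept."""
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def predict(beta, x):
    return np.append(x, 1.0) @ beta

# Columns: temperature, humidity, own solar current, neighbor solar current.
rng = np.random.default_rng(0)
X = rng.random((100, 4))
y = 2.0 * X[:, 2] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(100)
beta = fit_predictor(X, y)
print("predicted next solar current:", predict(beta, X[-1]))
```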
 
Article
This paper presents LIPS, a Light Intensity based Positioning System for indoor environments. The system uses off-the-shelf LED lamps as signal sources, and uses light sensors as signal receivers. The design is inspired by the observation that a light sensor has deterministic sensitivity to both distance and incident angle of light signal, an under-utilized feature of photodiodes now widely found on mobile devices. We develop a stable and accurate light intensity model to capture the phenomenon, based on which a new positioning principle, Multi-Face Light Positioning (MFLP), is established that uses three collocated sensors to uniquely determine the receiver's position, assuming merely a single source of light. We have implemented a prototype on both dedicated embedded systems and smartphones. Experimental results show average positioning accuracy within 0.4 meters across different environments, with high stability against interferences from obstacles, ambient lights, temperature variation, etc.
 
Article
The efficacy of data aggregation in sensor networks is a function of the degree of spatial correlation in the sensed phenomenon. While several data aggregation (i.e., routing with data compression) techniques have been proposed in the literature, an understanding of the performance of various data aggregation schemes across the range of spatial correlations is lacking. We analyze the performance of routing with compression in wireless sensor networks using an application-independent measure of data compression (an empirically obtained approximation for the joint entropy of sources as a function of the distance between them) to quantify the size of compressed information, and a bit-hop metric to quantify the total cost of joint routing with compression. Analytical modeling and simulations reveal that while the nature of optimal routing with compression does depend on the correlation level, surprisingly, there exists a practical static clustering scheme which can provide near-optimal performance for a wide range of spatial correlations. This result is of great practical significance as it shows that a simple cluster-based system design can perform as well as sophisticated adaptive schemes for joint routing and compression.
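To make the bit-hop accounting concrete, the sketch below combines an assumed saturating conditional-entropy model with hop-weighted costs. The functional form h_cond(d) = H·d/(d+c) is an illustrative stand-in for the empirically obtained joint-entropy approximation mentioned above.

```python
# Bit-hop cost sketch: compressed bits weighted by the hops they travel.
# The saturating conditional-entropy model is an illustrative assumption.
H = 8.0      # bits per raw reading
c = 10.0     # correlation length parameter (assumed)

def h_cond(d):
    # entropy of a reading given another reading d meters away
    return H * d / (d + c)

def bit_hop_cost(route):
    """route: list of (hop_count_to_sink, distance_to_previous_source).
    The first source sends H bits; each later source sends only its
    conditional entropy given the previous one (explicit communication)."""
    cost, first = 0.0, True
    for hops, d_prev in route:
        bits = H if first else h_cond(d_prev)
        cost += bits * hops
        first = False
    return cost

# Two sources 5 m apart, 3 and 4 hops from the sink respectively.
print("bit-hop cost:", bit_hop_cost([(3, 0.0), (4, 5.0)]))
```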
 
Article
We describe LEAP (Localized Encryption and Authentication Protocol), a key management protocol for sensor networks that is designed to support in-network processing, while at the same time restricting the security impact of a node compromise to the immediate network neighborhood of the compromised node. The design of the protocol is motivated by the observation that different types of messages exchanged between sensor nodes have different security requirements, and that a single keying mechanism is not suitable for meeting these different security requirements. LEAP supports the establishment of four types of keys for each sensor node: an individual key shared with the base station, a pairwise key shared with another sensor node, a cluster key shared with multiple neighboring nodes, and a group key that is shared by all the nodes in the network. The protocol used for establishing and updating these keys is communication- and energy-efficient, and minimizes the involvement of the base station. LEAP also includes an efficient protocol for inter-node traffic authentication based on the use of one-way key chains. A salient feature of the authentication protocol is that it supports source authentication without precluding in-network processing and passive participation. We analyze the performance and the security of our scheme under various attack models and show that our schemes are very efficient in defending against many attacks.
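The one-way key chain used for inter-node authentication is simple to sketch: keys are pre-generated by repeated hashing and disclosed in reverse order, so receivers can verify a disclosed key against an earlier commitment. This is a generic illustration of the primitive, with details such as key lengths and disclosure intervals assumed.

```python
# One-way key chain sketch: chain[i] = H^i(seed); the last element is the
# public commitment, and keys are disclosed in reverse order.
import hashlib

def make_chain(seed: bytes, length: int):
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain            # chain[-1] is the public commitment

def verify(disclosed: bytes, commitment: bytes, max_steps: int) -> bool:
    k = disclosed
    for _ in range(max_steps):
        k = hashlib.sha256(k).digest()
        if k == commitment:
            return True     # disclosed key hashes forward to the commitment
    return False

chain = make_chain(b"node-secret", 100)
commitment = chain[-1]
print(verify(chain[95], commitment, max_steps=100))   # True: 5 hashes away
```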
 
Article
In this article, we address the problem of target detection in Wireless Sensor Networks (WSNs). We formulate the target detection problem as a line-set intersection problem and use integral geometry to analytically characterize the probability of target detection for both stochastic and deterministic deployments. Compared to previous work, we analyze WSNs where sensors have heterogeneous sensing capabilities. For the stochastic case, we evaluate the probability that the target is detected by at least k sensors and compute the free path until the target is first detected. For the deterministic case, we show an analogy between the target detection problem and the problem of minimizing the average symbol error probability in 2-dimensional digital modulation schemes. Motivated by this analogy, we propose a heuristic sensor placement algorithm called DATE, which makes use of well-known signal constellations for determining good WSN constellations. We also propose a heuristic called CDATE for connected WSN constellations, which yields high target detection probability.
 
Article
A formal treatment to the security of Concealed Data Aggregation (CDA) and the more general Private Data Aggregation (PDA) is given. While there exist a handful of constructions, rigorous security models and analyses for CDA or PDA are still lacking. Standard security notions for public key encryption, including semantic security and indistinguishability against chosen ciphertext attacks, are refined to cover the multisender nature and aggregation functionality of CDA and PDA in the security model. The proposed security model is sufficiently general to cover most application scenarios and constructions of privacy-preserving data aggregation. An impossibility result on achieving security against adaptive chosen ciphertext attacks in CDA/PDA is shown. A generic CDA construction based on public key homomorphic encryption is given, along with a proof of its security in the proposed model. The security of a number of existing schemes is analyzed in the proposed model.
 
Conference Paper
This paper presents a Connectivity-based and Anchor-free Three-dimensional Localization (CATL) scheme for large-scale sensor networks with concave regions. It distinguishes itself from previous work with a combination of three features: (1) it works for networks in both 2D and 3D spaces, possibly containing holes or concave regions; (2) it is anchor-free, and uses only connectivity information to faithfully recover the original network topology, up to scaling and rotation; (3) it does not depend on the knowledge of network boundaries, which suits it well to situations where boundaries are difficult to identify. The key idea of CATL is to discover the notch nodes, where shortest paths bend and hop-count-based distance starts to significantly deviate from the true Euclidean distance. An iterative protocol is developed that uses a notch-avoiding multilateration mechanism to localize the network. Simulations show that CATL achieves accurate localization results with a moderate per-node message cost.
 
Article
The recent ratification of the IEEE 802.15.4 PHY-MAC specifications for low-rate wireless personal area networks represents a significant milestone in promoting deployment of wireless sensor networks (WSNs) for a variety of commercial uses. The 15.4 specifications specifically target wireless networking among low-rate, low-power and low-cost devices that is expected to be a key market segment for a large number of WSN applications. In this paper, we first analyze the performance of the contention access period specified in the IEEE 802.15.4 standard, in terms of throughput and energy consumption. This analysis is facilitated by modeling the contention access period as non-persistent CSMA with backoff. We show that in certain applications, in which having an inactive period in the superframe may not be desirable due to delay constraints, shutting down the radio between transmissions provides significant savings in power without significantly compromising the throughput. We also propose and analyze the performance of a modification to the specification which could be used for applications in which MAC-level acknowledgements are not used. Extensive ns-2 simulations are used to verify the analysis.
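As a reference point for this kind of analysis, the classical Kleinrock-Tobagi throughput expression for unslotted non-persistent CSMA can be tabulated as below; a is the normalized propagation delay and G the offered load, with values chosen for illustration.

```python
# Classical unslotted non-persistent CSMA throughput curve
# (Kleinrock-Tobagi): S = G e^{-aG} / (G(1+2a) + e^{-aG}).
import math

def csma_throughput(G, a):
    return G * math.exp(-a * G) / (G * (1 + 2 * a) + math.exp(-a * G))

for G in (0.1, 0.5, 1.0, 5.0, 10.0):
    print(f"G={G:5.1f}  S={csma_throughput(G, a=0.01):.3f}")
```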
 
Article
Answering a user's questions requires identifying the correct set of nodes that can answer the question and enabling coordination between them. In this paper, we propose a query domain abstraction that allows an application to dynamically specify the nodes best suited to answering a particular query. Selecting the ideal set of heterogeneous sensors entails answering two fundamental questions: how are the selected sensors related to one another, and where should the resulting sensor coalition be located. We introduce two abstractions, the proximity function and the reference function, to precisely specify each of these concerns within a query. All nodes in the query domain must satisfy any provided proximity function, a user-defined function that constrains the relative relationship among the group of nodes (e.g., based on a property of the network or physical environment or on logical properties of the nodes). The selected set of nodes must also satisfy any provided reference function, a mechanism to scope the location of the query domain to a specified area of interest (e.g., within a certain distance from a specified reference point). In this paper, we model these abstractions and present a set of protocols that accomplish this task with varying degrees of correctness. We evaluate their performance through simulation and highlight the tradeoffs between protocol overhead and correctness.
 
Conference Paper
Energy efficiency is a fundamental issue for outdoor sensor network systems. This article presents the design and implementation of multidimensional power management strategies in VigilNet, a major recent effort to support long-term surveillance using power-constrained sensor devices. A novel tripwire service is integrated with effective sentry and duty-cycle scheduling to collaboratively increase the system lifetime. The tripwire service partitions a network into distinct, nonoverlapping sections and allows each section to be scheduled independently. Sentry scheduling selects a subset of nodes, the sentries, which are turned on while the remaining nodes save energy. Duty-cycle scheduling allows the active sentries themselves to be turned on and off, further lowering the average power draw. The multidimensional power management strategies proposed in this article were fully implemented within a real sensor network system using the XSM platform. We evaluate key system parameters using a network of 200 XSM nodes in an outdoor environment, and an analytical probabilistic model. We evaluate network lifetime using a simulation of a 10,000-node network that uses measured XSM power values. These evaluations demonstrate the effectiveness of our integrated approach and identify a set of lessons and guidelines useful for the future development of energy-efficient sensor systems. One of the key results indicates that the combination of the three presented power management techniques is able to increase the lifetime of a realistic network from 4 days to 200 days.
 
Article
We present a low-power VLSI wake-up detector for a sensor network that uses acoustic signals to localize ground-based vehicles. The detection criterion is the degree of low-frequency periodicity in the acoustic signal, and the periodicity is computed from the “bumpiness” of the autocorrelation of a one-bit version of the signal. We then describe a CMOS ASIC that implements the periodicity estimation algorithm. The ASIC is fully functional and its core consumes 835 nanowatts. It was integrated into an acoustic enclosure and deployed in field tests with synthesized sounds and ground-based vehicles.
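The detection statistic can be sketched in a few lines: clip the signal to one bit, autocorrelate, and score the result. The exact "bumpiness" measure used by the ASIC is not reproduced here; the height of the strongest low-frequency autocorrelation bump below is an illustrative proxy.

```python
# Sketch of low-frequency periodicity detection from the autocorrelation
# of a one-bit signal. The scoring rule (strongest bump in a lag band)
# is an assumed proxy for the chip's bumpiness measure.
import numpy as np

def periodicity_score(x, min_lag=10, max_lag=100):
    bits = np.sign(x - np.mean(x))                       # one-bit version
    ac = np.correlate(bits, bits, mode="full")[len(bits) - 1:]
    ac = ac / ac[0]                                      # normalize lag 0 to 1
    return np.max(ac[min_lag:max_lag])                   # strongest bump height

fs = 1000
t = np.arange(0, 1, 1 / fs)
engine = np.sign(np.sin(2 * np.pi * 30 * t))             # periodic, low frequency
noise = np.random.randn(t.size)
print("engine:", periodicity_score(engine), " noise:", periodicity_score(noise))
```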
 
Payoff table used in determining decision threshold that maximizes payoff. 
Controlled Experiments: The setup in the anechoic chamber. The left figure shows the transmitter/receiver setup; the right figure shows a reflecting surface whose location was varied in our experiments.
Uncontrolled Experiments: Uncontrolled experiments were performed at two locations. The lab/office location provided for in-air experiments, while the test setup off the docks in Marina del Rey harbor provided for underwater experiments.
Conference Paper
The principles of sensor networks—low-power, wireless, in-situ sensing with many inexpensive sensors—are only recently penetrating into underwater research. Acoustic communication is best suited for underwater communication, since other methods (optical and radio) attenuate very quickly. Yet acoustic propagation is five orders of magnitude slower than RF, so propagation times stretch to hundreds of milliseconds. A new generation of underwater acoustic modems has added low-power wakeup tones that combat the energy acoustic modems would otherwise waste on idle listening. Recently, these tones have been used as an integral part of application-layer and MAC protocols. While all wireless data networks suffer from multipath interference of received data, in this paper we show that, due to the large acoustic propagation delay, tone echoes cause a unique interference, tone self-multipath, for tone-based protocols. To address this interference we introduce Self-Reflection Tone Learning (SRTL), a novel approach where nodes use Bayesian techniques to discriminate self-reflections from noise and from communication from other nodes. We present detailed experiments using an acoustic modem in two physical environments to show that SRTL's knowledge corresponds to physical-world predictions, that it can cope with reasonable levels of noise, and that it can track a changing multipath environment. Simulations confirm that these real-world experiments generalize over a wide range of conditions.
 
Article
We present a statistical method that uses prediction modeling to decrease the temporally redundant data transmitted back to the sink. The major novelties are fourfold: First, a prediction model is fit to the sensor data. Second, prediction error is utilized to adaptively update the model parameters using hypothesis testing. Third, a data transformation is proposed to bring the sensor sample series closer to weak stationarity. Finally, an efficient implementation is presented. We show that our proposed preDiction eRror bASed hypoThesis testInG (DRASTIG) method achieves low energy dissipation while keeping the prediction errors at user-defined tolerable magnitudes based on real data experiments.
 
Article
Wireless sensor network (WSN) applications have been studied extensively in recent years. Such applications involve resource-limited embedded sensor nodes that have small size and low power requirements. Based on the need for extended network lifetimes in WSNs in terms of energy use, the energy efficiency of computation and communication operations in the sensor nodes becomes critical. Digital signal processing (DSP) applications typically require intensive data processing operations and as a result are difficult to implement directly in resource-limited WSNs. In this paper, we present a novel design methodology for modeling and implementing computationally-intensive DSP applications applied to wireless sensor networks. This methodology explores efficient modeling techniques for DSP applications, including data sensing and processing; derives formulations of energy-driven partitioning (EDP) for distributing such applications across wireless sensor networks; and develops efficient heuristic algorithms for finding partitioning results that maximize the network lifetime. To address such an energy-driven partitioning problem, this paper provides a new way of aggregating data and reducing communication traffic among nodes based on application analysis. By considering low data token delivery points and the distribution of computation in the application, our approach finds energy-efficient trade-offs between data communication and computation.
 
Article
We address the problem of optimal node activation in a sensor network, where the optimization objective is represented as a global time-average utility function over the deployment area of the network. Each sensor node is rechargeable, and can hold up to K quanta of energy. When the recharge and/or discharge processes in the network are random, the problem of optimal sensor activation is a complex stochastic decision question. For the case of identical sensor coverages, we show the existence of a simple threshold policy which is asymptotically optimal with respect to the energy bucket size K, i.e., the performance of this threshold policy approaches the optimal performance as K becomes large. We also show that the performance of the optimal threshold policy is robust to the degree of spatial correlation in the discharge and/or recharge processes. We then extend this approach to a general sensor network where coverage areas of different sensors could have complete, partial or no overlap with each other. We demonstrate through simulations that a local information based threshold policy, with an appropriately chosen threshold, achieves a performance which is very close to the global optimum.
 
Conference Paper
We consider a sensor network employing sensor nodes that have been placed in specific locations. An area phenomenon is detected and tracked by the activated sensors. The area phenomenon is modelled to consist of K spatially distributed point phenomena. The activated sensors collect data samples characterizing the parameters of the involved component point phenomena. They compress the observed data readings and transport them to a processing center. The center processes the received data to derive estimates of the component point phenomena's parameters. Our sensing stochastic process models account for distance dependent observation noise perturbations as well as for location dependent observation noise correlations. At the processing center, sample mean calculations are used to derive estimates of the underlying area phenomenon's parameters. We develop a computationally efficient algorithm for determining the specific set of sensors selected for activation under capacity and energy resource constraints, so that a sufficiently low reproduction distortion level is attained. We demonstrate our algorithm to yield distortion levels that are quite close to those characterized by a lower bound function.
 
Crack detection using active sensing in structural health monitoring.  
Detectability of DASNs: (a) cannot be detected; (b) and (c) are not distinguishable  
Detection of obstacles. Obstacles are shown as shaded. Solid dark lines are failed sites. Light lines are non-failed sites. Dashed lines are Voronoi edges. Blue circles give the maximal circles cutting failed sites but no others. Obstacles are generated with average size 0.3 of the exposure.  
Article
Distributed active sensing is a new sensing paradigm, where active sensors (a.k.a. actuators) acting as illuminating sources and passive sensors acting as receivers are distributed in a field and collaboratively detect events of interest. In this paper, we study the fundamental properties of distributed actuator and sensor networks (DASNs) in detecting and localizing obstacles. A novel notion of "exposure" is defined, which quantifies the dimension limitations in detectability. Using simple geometric constructs, we propose polynomial-time algorithms to compute the exposure and bounding regions where the center of the obstacles may lie.
 
Article
We discuss how to automatically obtain the metric calibration of an ad hoc network of cameras with no centralized processor. We model the set of uncalibrated cameras as nodes in a communication network, and propose a distributed algorithm in which each camera performs a local, robust bundle adjustment over the camera parameters and scene points of its neighbors in an overlay “vision graph.” We analyze the performance of the algorithm on both simulated and real data, and show that the distributed algorithm results in a fairer allocation of messages per node while achieving comparable calibration accuracy to centralized bundle adjustment.
 
Article
We present a distributed algorithm for node localization based on the Gauss-Newton method. In this algorithm, each node updates its own location estimate using the pairwise distance measurements and the local information it receives from the neighboring nodes. Once the location estimate is updated, the sensor node broadcasts the updated estimate to all the neighboring nodes. A distributed and scalable local scheduling algorithm for updating nodes in the network is presented to avoid the use of the global coordinator or a routing loop. We analytically show that the proposed distributed algorithm converges under certain practical assumptions of the network. The performance of the algorithm is evaluated using both simulation and experimental results. Quantitative comparisons among different distributed algorithms are also presented.
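A single node's local update in such a scheme can be sketched as an ordinary Gauss-Newton step on the range residuals, assuming the node has received its neighbors' current position estimates. The 2D setup and data below are illustrative.

```python
# Single node's local Gauss-Newton step on range residuals, assuming
# neighbors' current position estimates have been broadcast to it.
import numpy as np

def local_gauss_newton_step(x_i, neighbor_pos, ranges):
    """x_i: (2,) own estimate; neighbor_pos: (k,2); ranges: (k,) measured."""
    diffs = x_i - neighbor_pos                    # (k, 2)
    dists = np.linalg.norm(diffs, axis=1)         # currently predicted ranges
    J = diffs / dists[:, None]                    # Jacobian of ||x_i - x_j||
    r = dists - ranges                            # residuals
    step, *_ = np.linalg.lstsq(J, r, rcond=None)  # solves J^T J step = J^T r
    return x_i - step

true = np.array([1.0, 2.0])
nbrs = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
meas = np.linalg.norm(true - nbrs, axis=1) + 0.01 * np.random.randn(3)
x = np.array([2.5, 2.5])                          # initial guess
for _ in range(10):
    x = local_gauss_newton_step(x, nbrs, meas)
print("estimate:", x)                             # close to (1, 2)
```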
 
Illustration of a simulated sensor network with 15×15 base sensors, and the recognized boundary (with R = 2) using a Gaussian kernel with σ = 1. See text for a detailed description.  
Article
We show that the coarse-grained and fine-grained localization problems for ad hoc sensor networks can be posed and solved as a pattern recognition problem using kernel methods from statistical learning theory. This stems from an observation that the kernel function, which is a similarity measure critical to the effectiveness of a kernel-based learning algorithm, can be naturally defined in terms of the matrix of signal strengths received by the sensors. Thus we work in the natural coordinate system provided by the physical devices. This not only allows us to sidestep the difficult ranging procedure required by many existing localization algorithms in the literature, but also enables us to derive a simple and effective localization algorithm. The algorithm is particularly suitable for networks with densely distributed sensors, most of whose locations are unknown. The computations are initially performed at the base sensors, and the computation cost depends only on the number of base sensors. The localization step for each sensor of unknown location is then performed locally in linear time. We present an analysis of the localization error bounds, and provide an evaluation of our algorithm on both simulated and real sensor networks.
 
Illustration of the synchronous spectral multiplexing algorithm.
Illustration of the synchronization mechanism.
Our Mica2 testbed consists of 30 motes that are placed on the floor. Nodes are roughly separated from each other by 2.5 feet. The sink is located at the bottom of the figure, with a programming board attached to it.  
Packet delivery time series for a 50-second time window under first normal and then jammed network conditions.  
Packet delivery time series for asynchronous spectral multiplexing.  
Article
Radio interference, whether intentional or otherwise, represents a serious threat to assuring the availability of sensor network services. As such, techniques that enhance the reliability of sensor communications in the presence of radio interference are critical. In this article, we propose to cope with this threat through a technique called channel surfing, whereby the sensor nodes in the network adapt their channel assignments to restore network connectivity in the presence of interference. We explore two different approaches to channel surfing: coordinated channel switching, in which the entire sensor network adjusts its channel; and spectral multiplexing, in which nodes in a jammed region switch channels and nodes on the boundary of a jammed region act as radio relays between different spectral zones. For coordinated channel switching, we examine an autonomous strategy where each node detects the loss of its neighbors in order to initiate channel switching. To cope with latency issues in the autonomous strategy, we propose a broadcast-assisted channel switching strategy to more rapidly coordinate channel switching. For spectral multiplexing, we have devised both synchronous and asynchronous strategies to facilitate the scheduling of nodes in order to improve network fidelity when sensor nodes operate on multiple channels. In designing these algorithms, we have taken a system-oriented approach that has focused on exploring actual implementation issues under realistic network settings. We have implemented these proposed methods on a testbed of 30 Mica2 sensor nodes, and the experimental results show that channel surfing, in its various forms, is an effective technique for repairing network connectivity in the presence of radio interference, while not introducing significant performance overhead.
 
Article
The efficient allocation of the limited energy resources of a wireless sensor network in a way that maximizes the information value of the data collected is a significant research challenge. Within this context, this article concentrates on adaptive sampling as a means of focusing a sensor's energy consumption on obtaining the most important data. Specifically, we develop a principled information metric based upon Fisher information and Gaussian process regression that allows the information content of a sensor's observations to be expressed. We then use this metric to derive three novel decentralized control algorithms for information-based adaptive sampling which represent a trade-off in computational cost and optimality. These algorithms are evaluated in the context of a deployed sensor network in the domain of flood monitoring. The most computationally efficient of the three is shown to increase the value of information gathered by approximately 83%, 27%, and 8% per day compared to benchmarks that sample in a naïve nonadaptive manner, in a uniform nonadaptive manner, and using a state-of-the-art adaptive sampling heuristic (USAC), respectively. Moreover, our algorithm collects information whose total value is approximately 75% of the optimal solution (which requires an exponential, and thus impractical, amount of time to compute).
 
Architecture of the GlacsWeb network. The system is composed of sensor nodes embedded in the ice and the subglacial sediment to monitor data and transmit it to the base station positioned on the surface of the ice. The base station in turn accumulates additional information about the weather and sends it to a Reference Station (approximately 2.5km away) that has access to mains electricity and a phone connection. The data is finally uploaded to a Southampton-based server through the Internet to be accessed by glaciologists for analysis.
Example of Bayesian Kernel regression with a simple sinusoidal kernel  
Comparison of real data vs data sampled by the sensing algorithm  
Percentage of good probe packets received from Probe 8 over 16 months (10000 packets)
Article
This article reports on the development of a utility-based mechanism for managing sensing and communication in cooperative multisensor networks. The specific application on which we illustrate our mechanism is that of GlacsWeb. This is a deployed system that uses battery-powered sensors to collect environmental data related to glaciers, which it transmits back to a base station so that it can be made available worldwide to researchers. In this context, we first develop a sensing protocol in which each sensor locally adjusts its sensing rate based on the value of the data it believes it will observe. The sensors employ a Bayesian linear model to decide their sampling rate and exploit the properties of the Kullback-Leibler divergence to place an appropriate value on the data. Then, we detail a communication protocol that finds optimal routes for relaying this data back to the base station based on the cost of communicating it (derived from the opportunity cost of using the battery power for relaying data). Finally, we empirically evaluate our protocol by examining the impact on efficiency of a static network topology, a dynamic network topology, the size of the network, the degree of dynamism of the environment, and the mobility of the nodes. In so doing, we demonstrate that the efficiency gains of our new protocol over the currently implemented method, measured over a 6-month period, are 78%, 133%, 100%, and 93%, respectively. Furthermore, we show that our system performs at 65%, 70%, 63%, and 70% of the theoretical optimal, respectively, despite being a distributed protocol that operates with incomplete knowledge of the environment.
 
Article
In this paper we consider deploying a network of static sensors to help an agent navigate in an area. In particular, the agent uses range measurements to the sensors to localize itself. We wish to place the sensors so as to provide optimal localization accuracy to the agent. We begin by considering the problem of placing sensors in order to optimally localize the agent at a single location. The Position Error Bound (PEB), a lower bound on the localization accuracy, is used to measure the quality of sensor configurations. We then present RELOCATE, an iterative algorithm that places the sensors so as to minimize the PEB at that point. When the range measurements are unbiased and have constant variances, we show that RELOCATE is optimal and efficient. We then apply RELOCATE to the more complex case where the variance of the range measurements depends on the sensors' locations and where those measurements can be biased. We finally apply RELOCATE to the case where the PEB must be minimized not at a single point but at multiple locations. We show that, compared to Simulated Annealing, the algorithm yields better results faster on these more realistic scenarios. We also show that by optimally placing the sensors, significant savings in terms of the number of sensors used can be achieved. Finally, we illustrate that the PEB is not only a convenient theoretical lower bound, but that it can actually be closely approximated by a maximum likelihood estimator.
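For the unbiased, constant-variance case, the PEB follows from the Fisher information matrix, which is a sum of rank-one terms along the agent-to-sensor unit vectors. The sketch below (geometry and sigma are assumptions) shows how a well-spread configuration yields a much smaller bound than a nearly collinear one.

```python
# PEB sketch for unbiased range measurements with constant variance:
# F = (1/sigma^2) * sum of u_i u_i^T over unit directions u_i, and
# PEB = sqrt(trace(F^{-1})) lower-bounds the RMS localization error.
import numpy as np

def peb(agent, sensors, sigma=1.0):
    u = sensors - agent
    u = u / np.linalg.norm(u, axis=1, keepdims=True)   # unit directions
    F = (u.T @ u) / sigma**2                           # 2x2 Fisher information
    return np.sqrt(np.trace(np.linalg.inv(F)))

agent = np.array([0.0, 0.0])
good = np.array([[1.0, 0.0], [-0.5, 0.866], [-0.5, -0.866]])  # spread out
bad = np.array([[1.0, 0.0], [1.0, 0.1], [1.0, -0.1]])         # nearly collinear
print("spread PEB:", peb(agent, good), " collinear PEB:", peb(agent, bad))
```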
 
Article
Wireless sensor networks (WSNs) are composed of tiny devices with limited computation and battery capacities. For such resource-constrained devices, data transmission is a very energy-consuming operation. To maximize WSN lifetime, it is essential to minimize the number of bits sent and received by each device. One natural approach is to aggregate sensor data along the path from sensors to the sink. Aggregation is especially challenging if end-to-end privacy between sensors and the sink (or aggregate integrity) is required. In this article, we propose a simple and provably secure encryption scheme that allows efficient additive aggregation of encrypted data. Only one modular addition is necessary for ciphertext aggregation. The security of the scheme is based on the indistinguishability property of a pseudorandom function (PRF), a standard cryptographic primitive. We show that aggregation based on this scheme can be used to efficiently compute statistical values, such as mean, variance, and standard deviation of sensed data, while achieving significant bandwidth savings. To protect the integrity of the aggregated data, we construct an end-to-end aggregate authentication scheme that is secure against outsider-only attacks, also based on the indistinguishability property of PRFs.
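A toy sketch of such an additively homomorphic keystream scheme, with HMAC standing in for the PRF and an arbitrary modulus (this is our illustration of the idea, not the article's exact construction):

    import hmac, hashlib

    M = 2 ** 32  # public modulus, larger than any possible plaintext sum

    def keystream(key: bytes, nonce: bytes) -> int:
        # Stand-in PRF shared between each sensor and the sink.
        digest = hmac.new(key, nonce, hashlib.sha256).digest()
        return int.from_bytes(digest[:4], "big")

    def encrypt(reading: int, key: bytes, nonce: bytes) -> int:
        return (reading + keystream(key, nonce)) % M

    def aggregate(c1: int, c2: int) -> int:
        return (c1 + c2) % M  # one modular addition per aggregation step

    def decrypt_sum(agg: int, keys, nonce: bytes) -> int:
        # The sink strips every sensor's keystream in one pass.
        return (agg - sum(keystream(k, nonce) for k in keys)) % M

    k1, k2, nonce = b"key-1", b"key-2", b"epoch-42"
    agg = aggregate(encrypt(20, k1, nonce), encrypt(22, k2, nonce))
    assert decrypt_sum(agg, [k1, k2], nonce) == 42

Because ciphertexts are just residues mod M, intermediate nodes aggregate without learning individual readings, and sums of squares can be carried the same way so the sink can recover variance and standard deviation.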
 
Article
Providing efficient data aggregation while preserving data privacy is a challenging problem in wireless sensor networks research. In this article, we present two privacy-preserving data aggregation schemes for additive aggregation functions, which can be extended to approximate MAX/MIN aggregation functions. The first scheme---Cluster-based Private Data Aggregation (CPDA)---leverages a clustering protocol and algebraic properties of polynomials. It has the advantage of incurring less communication overhead. The second scheme---Slice-Mix-AggRegaTe (SMART)---builds on slicing techniques and the associative property of addition. It has the advantage of incurring less computation overhead. The goal of our work is to bridge the gap between collaborative data collection by wireless sensor networks and data privacy. We assess the two schemes by privacy-preservation efficacy, communication overhead, and data aggregation accuracy. We present simulation results of our schemes and compare their performance to a typical data aggregation scheme (TAG), where no data privacy protection is provided. Results show the efficacy and efficiency of our schemes.
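A toy round of SMART-style slicing, in which the associativity of addition makes the mixed shares recombine to the exact sum (the random neighbour choice and share range are illustrative assumptions; real slices would travel encrypted):

    import random

    def smart_round(readings, num_slices=3):
        n = len(readings)
        inbox = [0] * n
        for i, value in enumerate(readings):
            # Split the reading into additive shares that sum to it.
            shares = [random.randint(-10**6, 10**6)
                      for _ in range(num_slices - 1)]
            shares.append(value - sum(shares))
            inbox[i] += shares[0]  # keep one share locally
            for share in shares[1:]:
                # Send each remaining share to some other node
                # (standing in for an encrypted hop to a neighbour).
                j = random.choice([k for k in range(n) if k != i])
                inbox[j] += share
        # Each node's mixed sum reveals nothing individually,
        # yet the totals add up to the true aggregate.
        return sum(inbox)

    assert smart_round([17, 23, 5, 42]) == 87

CPDA achieves the same end with polynomial algebra inside clusters, trading the slicing scheme's extra messages for extra computation.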
 
Article
Sensed data in Wireless Sensor Networks (WSN) reflect the spatial and temporal correlations of physical attributes existing intrinsically in the environment. In this article, we present the Clustered AGgregation (CAG) algorithm that forms clusters of nodes sensing similar values within a given threshold (spatial correlation); these clusters remain unchanged as long as the sensor values stay within the threshold over time (temporal correlation). With CAG, only one sensor reading per cluster is transmitted, whereas with Tiny AGgregation (TAG) all the nodes in the network transmit their sensor readings. Thus, CAG provides energy-efficient, approximate aggregation results with small, bounded, and often negligible error. In this article we extend our initial work on CAG in five directions: First, we investigate the effectiveness of CAG in exploiting the temporal as well as spatial correlations, using both measured and modeled data. Second, we design CAG for two modes of operation (interactive and streaming) to enable CAG to be used in different environments and for different purposes. Interactive mode provides mechanisms for one-shot queries, whereas streaming mode provides those for continuous queries. Third, we propose a fixed-range clustering method, which makes the performance of our system independent of the magnitude of the sensor readings and the network topology. Fourth, using mica2 motes, we perform a large-scale measurement of real environmental data (temperature and light, both indoor and outdoor) and of wireless radio reliability, which were used for both analytical modeling and simulation experiments. Fifth, we model the spatially correlated data using the properties of our real-world measurements. Our experimental results show that when we compute the average of sensor readings in the network using the CAG interactive mode with a user-provided error threshold of 20%, we can save 68.25% of transmissions over TAG with only 2.46% inaccuracy in the result. The streaming mode of CAG can save even more transmissions (up to 70.24% in our experiments) over TAG when data shows high spatial and temporal correlations. We expect these results to hold in reality, because we used the mica2 radio profile and empirical datasets for our simulation study. CAG is the first system that leverages spatial and temporal correlations to improve the energy efficiency of in-network aggregation. This study analytically and empirically validates CAG's effectiveness.
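A centralized toy of the spatial half of CAG's clustering rule, where a node joins an adjacent cluster whose head reads within the user-provided threshold (the real protocol is distributed and additionally dissolves clusters when the temporal condition is violated):

    def form_clusters(readings, neighbours, tau):
        # readings: {node: value}; neighbours: {node: iterable of nodes}.
        head_of = {}
        for node in sorted(readings):  # deterministic visiting order
            for nb in neighbours.get(node, ()):
                head = head_of.get(nb)
                if head is not None and abs(readings[node] - readings[head]) <= tau:
                    head_of[node] = head  # join the neighbour's cluster
                    break
            else:
                head_of[node] = node      # become a cluster head
        return head_of

    readings = {1: 20.1, 2: 20.4, 3: 25.0}
    neighbours = {1: [2], 2: [1, 3], 3: [2]}
    assert form_clusters(readings, neighbours, tau=1.0) == {1: 1, 2: 1, 3: 3}

Only the two heads transmit during aggregation, which is where the reported savings over TAG come from.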
 
Article
Information aggregation is an important operation in wireless sensor networks (WSNs), executed for the purpose of monitoring and reporting environmental data. Due to the performance constraints of sensor nodes, the in-network form of aggregation is especially attractive, since it saves expensive resources during frequent network queries. At the same time, the easy accessibility of networks and nodes, combined with almost no physical protection against corruption, raises serious security challenges; protection against attacks aiming to falsify the aggregated result is considered to be of prime importance. In this article we design the first general framework for secure information aggregation in WSNs focusing on scenarios where aggregation is performed by one of the network's nodes. The framework achieves security against node corruptions and is based solely on symmetric cryptographic primitives, which are more suitable for WSNs in terms of efficiency. We analyze the performance of the framework and, unlike many previous approaches, increase confidence in it through a rigorous proof of security within a specially designed formal security model.
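One standard symmetric building block such a framework can rest on is an aggregate MAC in the style of Katz and Lindell: per-sensor HMAC tags XOR into a single constant-size tag that the sink, holding all shared keys, verifies in one shot. A sketch of the building block only, not the article's actual framework:

    import hmac, hashlib

    def tag(key: bytes, message: bytes) -> bytes:
        return hmac.new(key, message, hashlib.sha256).digest()

    def aggregate_tags(tags):
        # XOR keeps the aggregate tag at a fixed 32 bytes
        # no matter how many reports are folded in.
        out = bytes(32)
        for t in tags:
            out = bytes(a ^ b for a, b in zip(out, t))
        return out

    def sink_verify(agg_tag: bytes, keyed_messages) -> bool:
        # The sink recomputes each sensor's tag from its shared key.
        return agg_tag == aggregate_tags(tag(k, m) for k, m in keyed_messages)

    msgs = [(b"k1", b"t=21.3"), (b"k2", b"t=21.9")]
    agg = aggregate_tags(tag(k, m) for k, m in msgs)
    assert sink_verify(agg, msgs)

The appeal for WSNs is exactly the one the abstract names: no public-key operations anywhere on the sensor side.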
 
Article
We consider two related data gathering problems for wireless sensor networks (WSNs). The MLDA problem is concerned with maximizing the system lifetime T so that we can perform T rounds of data gathering with in-network aggregation, given the initial available energy of the sensors. The M²EDA problem is concerned with minimizing the maximum energy consumed by any one sensor when performing T rounds of data gathering with in-network aggregation, for a given T. We provide an effective algorithm for finding an everywhere sparse integral solution to the M²EDA problem that is within a factor of α = 1 + 4n/T of the optimum, where n is the number of nodes. A solution is everywhere sparse if the number of communication links for any subset X of nodes is O(|X|), in our case at most 4|X|. Since often T = ω(n), we obtain the first everywhere sparse, asymptotically optimal integral solutions to the M²EDA problem. Everywhere sparse solutions are desirable since then almost all sensors have a small number of incident communication links and small overhead for maintaining state. We also show that the MLDA and M²EDA problems are essentially equivalent, in the sense that we can obtain an optimal fractional solution to an instance of the MLDA problem by scaling an optimal fractional solution to a suitable instance of the M²EDA problem. As a result, our algorithm is effective at finding everywhere sparse, asymptotically optimal, integral solutions to the MLDA problem, when the initial available energy of the sensors is sufficient for supporting an optimal system lifetime that is ω(n).
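The claimed equivalence rests on the linearity of fractional schedules: scaling every flow by a constant scales both the rounds performed and every node's energy use by the same constant. In hypothetical notation, if an optimal fractional M²EDA solution performs T rounds with minimax per-node energy E*(T), then under a per-node energy budget B the maximum achievable lifetime is

    T_{\mathrm{MLDA}}(B) \;=\; T \cdot \frac{B}{E^{\ast}(T)},

so an optimal fractional solution to either problem can be rescaled into one for the other.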
 
Topology of an 8 × 8 grid network with identical end-to-end hop counts between leaves and the sink.
Topology of an 8 × 8 grid network with different end-to-end hop counts between leaves and the sink.
Information collection ratio versus end-to-end delay bounds for the aggregation tree in Figure 4.
Article
We investigate the problem of delay-constrained maximal information collection in CSMA-based wireless sensor networks. We study how to allocate the maximal allowable transmission delay at each node such that the amount of information collected at the sink is maximized while the total delay for the data aggregation stays within the given bound. We formulate the problem using dynamic programming and propose an optimal algorithm for assigning transmission attempts. Based on the analysis of the optimal solution, we also propose a distributed greedy algorithm, which is shown to perform comparably to the optimal one.
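A toy version of the allocation for a single chain of CSMA hops, where a budget of transmission attempts stands in for the delay bound and each attempt independently succeeds with probability q (the independence assumption and the chain topology are our simplifications, not the article's formulation):

    from functools import lru_cache

    def best_delivery(hops: int, budget: int, q: float = 0.7) -> float:
        # dp(h, b): best end-to-end success probability over the last
        # h hops when b transmission attempts remain.
        @lru_cache(maxsize=None)
        def dp(h, b):
            if h == 0:
                return 1.0
            if b < h:           # at least one attempt needed per hop
                return 0.0
            return max((1 - (1 - q) ** a) * dp(h - 1, b - a)
                       for a in range(1, b - h + 2))
        return dp(hops, budget)

    # Four hops, ten attempts: well above the ~0.24 of one attempt per hop.
    print(round(best_delivery(4, 10), 3))

The distributed greedy variant in the article forgoes this global recursion and lets each node pick its allowance from local information.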
 
Advantage of using correlation information (b) instead of transmitting raw data (a)  
Upper bounds for time and message transmission when a node is added 
Estimating the cardinality of the maximum maximal independent set generated by DOSA  
Timing diagram for addition of a new node, n (Node v is adjacent to n and node w is 2 hops from n)
Article
Wireless sensor networks (WSNs) are increasingly being used to measure various parameters in a wide range of environmental monitoring applications. In many instances, environmental scientists are interested in collecting raw data using long-running queries injected into a WSN for analysis at a later stage, rather than injecting snapshot queries containing data-reducing operators (e.g., MIN, MAX, AVG) that aggregate data. Collection of raw data poses a challenge to WSNs, as very large amounts of data need to be transported through the network. This not only leads to high levels of energy consumption, and thus diminished network lifetime, but also results in poor data quality, as much of the data may be lost due to the limited bandwidth of present-day sensor nodes. We alleviate this problem by allowing certain nodes in the network to aggregate data by taking advantage of spatial and temporal correlations of various physical parameters, thus eliminating the transmission of redundant data. In this article we present a distributed scheduling algorithm that decides when a particular node should perform this novel type of aggregation. The scheduling algorithm autonomously reassigns schedules when changes in network topology, due to failing or newly added nodes, are detected. Such changes in topology are detected using cross-layer information from the underlying MAC layer. We first present the theoretical performance bounds of our algorithm. We then present simulation results, which indicate a reduction in message transmissions of up to 85% and an increase in network lifetime of up to 92% when compared to collecting raw data. Our algorithm is also capable of completely eliminating dropped messages caused by buffer overflow.
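The schedule assignment builds on a maximal independent set of nodes (the structure whose size is being estimated in the caption above). A centralized greedy sketch of that set, ignoring the distributed construction and the cross-layer topology detection:

    def choose_aggregators(neighbours):
        # neighbours: {node: iterable of adjacent nodes}.
        # Greedy maximal independent set: chosen nodes are pairwise
        # non-adjacent, and every other node is adjacent to a chosen one,
        # so every node either aggregates or has an aggregator next to it.
        aggregators, covered = set(), set()
        for node in sorted(neighbours):  # e.g. by node identifier
            if node not in covered:
                aggregators.add(node)
                covered.add(node)
                covered.update(neighbours[node])
        return aggregators

    ring = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
    assert choose_aggregators(ring) == {0, 2}

In the article the set is maintained distributively, and nodes reassign roles when MAC-layer information signals a failed or newly added neighbour.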
 
Synopsis diffusion over the Rings topology. Crossed arrows and circles represent failed links and nodes.
The impact of sensor density on accuracy.
Graph used in the proof.
Article
Previous approaches for computing duplicate-sensitive aggregates in wireless sensor networks have used a tree topology, in order to conserve energy and to avoid double-counting sensor readings. However, a tree topology is not robust against node and communication failures, which are common in sensor networks. In this article, we present synopsis diffusion, a general framework for achieving significantly more accurate and reliable answers by combining energy-efficient multipath routing schemes with techniques that avoid double-counting. Synopsis diffusion avoids double-counting through the use of order- and duplicate-insensitive (ODI) synopses that compactly summarize intermediate results during in-network aggregation. We provide a surprisingly simple test that makes it easy to check the correctness of an ODI synopsis. We show that the properties of ODI synopses and synopsis diffusion create implicit acknowledgments of packet delivery. Such acknowledgments enable energy-efficient adaptation of message routes to dynamic message loss conditions, even in the presence of asymmetric links. Finally, we use extensive simulations to illustrate the significant robustness, accuracy, and energy-efficiency improvements of synopsis diffusion over previous approaches.
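The canonical ODI synopsis is a Flajolet-Martin sketch for COUNT: every reading contributes one hashed bit, and because bitwise OR is commutative, associative, and idempotent, the fused synopsis is indifferent to the order and multiplicity with which multipath routing delivers contributions. A sketch of the mechanics (the FM details follow the standard construction; names and constants are ours):

    import hashlib

    def fm_contribution(item_id: str, bits: int = 32) -> int:
        # Position of the lowest set bit of a hash is geometrically
        # distributed, which is what the FM estimator needs.
        h = int.from_bytes(hashlib.sha256(item_id.encode()).digest()[:8], "big")
        pos = (h & -h).bit_length() - 1 if h else bits - 1
        return 1 << min(pos, bits - 1)

    def merge(a: int, b: int) -> int:
        # OR is order- and duplicate-insensitive: the ODI property.
        return a | b

    def estimate(synopsis: int) -> float:
        i = 0
        while synopsis & (1 << i):  # lowest zero bit ~ log2(count)
            i += 1
        return (2 ** i) / 0.77351   # standard FM correction factor

    s = 0
    for node in ["n1", "n2", "n3", "n2"]:  # n2 arrives twice via two paths
        s = merge(s, fm_contribution(node))
    assert s == merge(s, fm_contribution("n2"))  # duplicates change nothing

The idempotence check on the last line is a one-line illustration of the duplicate-insensitivity that the article's correctness test formalizes.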
 
Top-cited authors
Sushil Jajodia
  • George Mason University
Sanjeev Setia
  • George Mason University
Tian He
  • Beihang University (BUAA)
Gang Zhou
  • College of William and Mary
John Stankovic
  • University of Virginia