ABSTRACT: Rate control for congestion mitigation and avoidance has received significant attention in the sensor networks literature. Existing rate control schemes dynamically assign rates in a distributed manner. In this paper, we take a step back and ask: is a near-optimal quasi-static centralized rate allocation even feasible for wireless sensor networks? Intuition would suggest otherwise, since wireless conditions vary dynamically, and optimal centralized rate allocation is known to be computationally intractable. Surprisingly, however, we find that quasi-static centralized rate allocation performs well at time-scales of tens of minutes on a 40-node testbed. Our approach relies on a relatively simple, lightweight rate allocation heuristic that uses topology and loss rate information, and adapts at relatively long time-scales to channel variability. Extensive experiments on a 40-node wireless testbed show that sensor nodes achieve a goodput very close to their allocated rate, even in harsh wireless conditions. Furthermore, this achieved goodput is nearly 50% higher than that achieved by IFRC, a recently-proposed distributed rate control scheme, and within 13% of an empirically-determined optimal rate. We also evaluate extensions to our heuristic to support weighted fairness and networks with multiple base stations.
Sensor, Mesh and Ad Hoc Communications and Networks, 2007. SECON '07. 4th Annual IEEE Communications Society Conference on; 07/2007
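The flavor of a centralized, tree-aware allocation can be sketched as follows. This is a hypothetical illustration in the spirit of the heuristic described in the abstract (it uses topology and per-link loss rates), not the paper's actual algorithm; the function name, the equal-share policy, and the loss model are all assumptions.

```python
# Hypothetical sketch of a centralized, quasi-static rate allocation over a
# routing tree. Not the paper's heuristic: the equal-share policy and the
# loss-cost model below are illustrative assumptions.

def allocate_rates(children, loss, root, capacity):
    """Assign each data source an equal share of the sink's effective capacity.

    children: dict mapping node -> list of child nodes in the routing tree
    loss:     dict mapping node -> loss rate on its link toward the parent (0..1)
    root:     the sink node (generates no data itself)
    capacity: usable channel capacity at the sink, in packets/sec
    """
    # Count data sources (every node except the sink).
    def subtree_sources(node):
        return (0 if node == root else 1) + sum(
            subtree_sources(c) for c in children.get(node, []))

    n_sources = subtree_sources(root)
    if n_sources == 0:
        return {}

    # A lossy link needs ~1/(1 - loss) transmissions per delivered packet;
    # charge each source for the worst link on its path to the sink, so the
    # allocation remains sustainable end to end.
    rates = {}
    def visit(node, worst_link_cost):
        for c in children.get(node, []):
            cost = max(worst_link_cost, 1.0 / (1.0 - loss.get(c, 0.0)))
            rates[c] = capacity / (n_sources * cost)
            visit(c, cost)
    visit(root, 1.0)
    return rates
```

Because the allocation depends only on the tree and measured loss rates, it can be recomputed infrequently (the quasi-static regime the paper argues for) rather than on every channel fluctuation.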
ABSTRACT: Until practical ad-hoc localisation systems are developed, early deployments of wireless sensor networks will manually configure location information in network nodes in order to assign spatial context to sensor readings. In this paper, we argue that such deployments will use hierarchical location names (for example, a node in a habitat monitoring network might be said to be node number N in cluster C of region R), rather than positions in a two- or three-dimensional coordinate system. We show that these hierarchical location names can be used to design a scalable routing system called HLR. HLR provides a variety of primitives including unicast, scoped anycast and broadcast, as well as various forms of scalable rendezvous. These primitives can be used to implement most of the data-centric routing and storage schemes proposed in the literature; these schemes currently need precise position information and geographic routing in order to scale well. We evaluate HLR using simulations as well as an implementation on the Mica-2 motes.
International Journal of Ad Hoc and Ubiquitous Computing 01/2006; 1(4):179-193. DOI:10.1504/IJAHUC.2006.010499 · 0.55 Impact Factor
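The core idea of routing on hierarchical names rather than coordinates can be illustrated with a longest-shared-prefix forwarding rule. This is a simplified sketch, not HLR's actual protocol; the tuple encoding of names and the `next_hop` policy are assumptions for illustration.

```python
# Illustrative sketch (not HLR's actual mechanism): forwarding on hierarchical
# location names such as ('regionR', 'clusterC', 'nodeN') by moving toward the
# neighbor whose name shares the longest prefix with the destination.

def common_prefix_len(a, b):
    """Length of the shared hierarchical prefix of two names."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(self_name, dest_name, neighbors):
    """Pick the neighbor whose name is closest to dest in the name hierarchy.

    Returns None when no neighbor is strictly closer than we are, at which
    point a real system would deliver locally or fall back to scoped search.
    """
    best = max(neighbors, key=lambda n: common_prefix_len(n, dest_name))
    if common_prefix_len(best, dest_name) > common_prefix_len(self_name, dest_name):
        return best
    return None
```

A broadcast or anycast scoped to "cluster C of region R" then falls out naturally: the scope is just a name prefix, with no coordinate system required.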
ABSTRACT: Detecting coordinated attacks on Internet resources requires a distributed network monitoring infrastructure. Such an infrastructure will have two logically distinct elements: distributed monitors that continuously collect traffic information, and a distributed query system that allows network operators to efficiently correlate information from different monitors in order to detect anomalous traffic patterns. In this paper, we discuss the design and implementation of MIND, a distributed index management system that supports the creation and querying of multiple distributed indices. We validate MIND using traffic traces from two large backbone networks, then examine the performance of a MIND prototype on more than 100 PlanetLab machines. Our experiments show that MIND can detect and report network anomalies in about one second on an inter-continental backbone. We also analyze the efficiency of our load balancing mechanism and evaluate the robustness of MIND to node failure.
INFOCOM 2006. 25th IEEE International Conference on Computer Communications, Joint Conference of the IEEE Computer and Communications Societies, 23-29 April 2006, Barcelona, Catalunya, Spain; 01/2006
ABSTRACT: Network anomaly detection using dimensionality reduction techniques has received much recent attention in the literature. For example, previous work has aggregated netflow records into origin-destination (OD) flows, yielding a much smaller set of dimensions which can then be mined to uncover anomalies. However, this approach can only identify which OD flow is anomalous, not the particular IP flow(s) responsible for the anomaly. In this paper we show how one can use random aggregations of IP flows (i.e., sketches) to enable more precise identification of the underlying causes of anomalies. We show how to combine traffic sketches with a subspace method to (1) detect anomalies with high accuracy and (2) identify the IP flow(s) that are responsible for the anomaly. Our method has detection rates comparable to previous methods and detects many more anomalies than prior work, taking us a step closer towards a robust on-line system for anomaly detection and identification.
Proceedings of the 6th ACM SIGCOMM Conference on Internet Measurement 2006, Rio de Janeriro, Brazil, October 25-27, 2006; 01/2006
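The "random aggregation" step can be illustrated with a small sketch structure: each flow key is hashed into one bucket per hash function, so a volume anomaly in a single flow perturbs a predictable set of aggregate time series, and intersecting the flagged buckets narrows down the responsible flow(s). This is a hedged illustration; the bucket counts, hash construction, and flow-key format are arbitrary choices, not those of the paper, and the subspace detection step itself is omitted.

```python
# Illustrative random aggregation of per-flow volumes into sketch tables.
# Assumptions (not from the paper): SHA-256-based bucketing, 4 hash functions,
# 8 buckets each, and string flow keys.

import hashlib

def bucket(flow_key, seed, n_buckets):
    """Deterministically map a flow key to a bucket under hash function `seed`."""
    h = hashlib.sha256(f"{seed}:{flow_key}".encode()).digest()
    return int.from_bytes(h[:4], "big") % n_buckets

def sketch_traffic(flow_volumes, n_hashes=4, n_buckets=8):
    """Aggregate per-flow volumes into n_hashes independent bucket arrays."""
    tables = [[0] * n_buckets for _ in range(n_hashes)]
    for key, vol in flow_volumes.items():
        for seed in range(n_hashes):
            tables[seed][bucket(key, seed, n_buckets)] += vol
    return tables

def candidate_flows(all_keys, anomalous_buckets, n_buckets=8):
    """Flows that hash into every flagged (seed, bucket) pair are suspects."""
    return [k for k in all_keys
            if all(bucket(k, s, n_buckets) == b for s, b in anomalous_buckets)]
```

An anomaly detector (the subspace method, in the paper) runs per bucket time series; a flow implicated by all flagged buckets is, with high probability, the actual cause, which is the identification step OD-flow aggregation cannot provide.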
ABSTRACT: Sensor networks consist of many small sensing devices that monitor an environment and communicate using wireless links. The lifetime of these networks is severely curtailed by the limited battery power of the sensors. One line of research in sensor network lifetime management has examined sensor selection techniques, in which applications judiciously choose which sensors' data should be retrieved and are worth the expended energy. In the past, many ad-hoc approaches for sensor selection have been proposed. In this paper, we argue that sensor selection should be based upon a tradeoff between application-perceived benefit and energy consumption of the selected sensor set. We propose a framework wherein the application can specify the utility of measuring data (nearly) concurrently at each set of sensors. The goal is then to select a sequence of sets to measure whose total utility is maximized, while not exceeding the available energy. Alternatively, we may look for the most cost-effective sensor set, maximizing the product of utility and system lifetime. This approach is very generic, and permits us to model many applications of sensor networks. We proceed to study two important classes of utility functions: submodular and supermodular functions. We show that the optimum solution for submodular functions can be found in polynomial time, while optimizing the cost-effectiveness of supermodular functions is NP-hard. For a practically important subclass of supermodular functions, we present an LP-based solution if nodes can send for different amounts of time, and show that we can achieve an O(log n) approximation ratio if each node has to send for the same amount of time. Finally, we study scenarios in which the quality of measurements is naturally expressed in terms of distances from targets. We show that the utility-based approach is analogous to a penalty-based approach in those scenarios, and present preliminary results on some practically important special cases.
Proceedings of the Fifth International Conference on Information Processing in Sensor Networks, IPSN 2006, Nashville, Tennessee, USA, April 19-21, 2006; 01/2006
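For intuition on the submodular case, here is the classic greedy heuristic for budgeted selection under a submodular utility: repeatedly add the sensor with the best marginal-utility-per-energy ratio that still fits the budget. Note this is a standard textbook technique shown for illustration, not the paper's polynomial-time optimal algorithm; the function signature and tie-breaking rule are assumptions.

```python
# Classic greedy selection under an energy budget for a submodular utility.
# Illustrative only; not the paper's algorithm.

def greedy_select(sensors, energy, utility, budget):
    """sensors: iterable of ids; energy: id -> cost; utility: set -> value."""
    chosen = set()
    remaining = set(sensors)
    spent = 0.0
    while True:
        best, best_ratio = None, 0.0
        for s in sorted(remaining):   # sorted for deterministic tie-breaking
            if spent + energy[s] > budget:
                continue
            # Marginal gain of adding s to the current set.
            gain = utility(chosen | {s}) - utility(chosen)
            ratio = gain / energy[s]
            if ratio > best_ratio:
                best, best_ratio = s, ratio
        if best is None:
            break
        chosen.add(best)
        spent += energy[best]
        remaining.discard(best)
    return chosen
```

Submodularity (diminishing marginal returns) is exactly the property that makes greedy choices like this well-behaved; for supermodular utilities, where sensors reinforce each other, the abstract notes the cost-effectiveness problem becomes NP-hard.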
ABSTRACT: Detecting and unraveling incipient coordinated attacks on Internet resources requires a distributed network monitoring infrastructure. Such an infrastructure will have two logically distinct elements: distributed monitors that continuously collect packet and flow-level information, and a distributed query system that allows network operators to efficiently and rapidly access this information. We argue that, in addition to supporting other types of queries, the network monitoring query system must support multi-dimensional range queries on traffic records (flows, or aggregated flow records). We discuss the design of MIND, a distributed indexing system which supports the creation of multiple distributed indices that use proximal hashing to scalably respond to range queries.
Data Engineering Workshops, 2005. 21st International Conference on; 05/2005
ABSTRACT: In this paper, we develop lower bounds and an algorithm for minimizing energy cost for broadcasting from any source to all other nodes in the network. Most prior work has used a simpler model for energy cost for wireless communications by accounting only for the analog radiation cost for transmission and ignoring the fixed energy cost for electronics in transmission and reception circuitry in nodes.
ABSTRACT: Energy-efficient communication is critical for increasing the lifetime of power-limited wireless ad hoc networks. There has been considerable interest in minimum energy broadcast operations. In this paper, we develop bounds and an algorithm for minimizing energy cost for broadcasting from any source to all other nodes in the network. Most prior work has used a simpler model for the energy cost of wireless communications, accounting only for the analog radiation cost of transmission and ignoring the fixed cost of the electronics in the transmission and reception circuitry of nodes. Furthermore, in a network it is possible for some node pairs to be unable to communicate directly, even though they are within radio range of each other, due to obstacles present in the terrain of the network.
Journal of Interconnection Networks 09/2002; 3(03n04):149-166. DOI:10.1142/S0219265902000604
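The richer energy model the abstract argues for can be written out concretely: the per-bit cost of a broadcast counts both the distance-dependent radiated energy and a fixed electronics cost at every transmitter and receiver. The constants and the path-loss exponent below are illustrative assumptions (typical of first-order radio models), not values from the paper.

```python
# Hedged sketch of a broadcast energy model with both radiated and fixed
# electronics costs. All constants are illustrative, not from the paper.

E_TX_ELEC = 50e-9   # fixed transmit electronics cost, J/bit (assumed)
E_RX_ELEC = 50e-9   # fixed receive electronics cost, J/bit (assumed)
E_AMP = 100e-12     # amplifier coefficient, J/bit/m^ALPHA (assumed)
ALPHA = 2           # path-loss exponent (assumed)

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def broadcast_cost(tree, pos):
    """Energy per bit to broadcast along `tree` (parent -> list of children).

    A parent makes one wireless transmission sized to reach its farthest
    child (the wireless multicast advantage); every child pays the fixed
    reception cost.
    """
    cost = 0.0
    for parent, kids in tree.items():
        if not kids:
            continue
        d_max = max(dist(pos[parent], pos[c]) for c in kids)
        cost += E_TX_ELEC + E_AMP * d_max ** ALPHA   # one tx covers all kids
        cost += E_RX_ELEC * len(kids)                # each child receives
    return cost
```

Under the radiation-only model, relaying through many short hops always looks cheap; once the fixed per-transmission and per-reception terms are included, as above, each extra hop carries a floor cost, which changes which broadcast trees are optimal.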
ABSTRACT: We describe a software-engineering strategy called the 'View-Primitive Data Model framework' (or 'VPDMf') derived from the design of leading commercial software engineering tools. We describe a prototypical implementation of the strategy and its use within neuroinformatics. We present the argument that the only way to fulfill demands for reliable, easy-to-use software by non-computational communities of neuroscientists is for developers within neuroinformatics to adopt and contribute to approaches such as the VPDMf under the open-source paradigm. We present the VPDMf as one such development opportunity.
ABSTRACT: Future large-scale sensor network deployments will be tiered, with the motes providing dense sensing and a higher tier of 32-bit master nodes with more powerful radios providing increased overall network capacity. In this paper, we describe a functional architecture for wireless sensor networks that leverages this structure to simplify the overall system. Our Tenet architecture has the nice property that the mote-layer software is generic and reusable, and all application functionality resides in masters.
ABSTRACT: Sensor networks are an emerging class of systems with significant potential. Recent work has proposed a distributed data structure called DIM for efficient support of multi-dimensional range queries in sensor networks. The original DIM design works well with uniform data distributions. However, real world data distributions are often skewed. Skewed data distributions can result in storage and traffic hotspots in the original DIM design. In this paper, we present a novel distributed algorithm that alleviates hotspots in DIM caused by skewed data distributions. Our technique adjusts DIM's locality-preserving hash functions as the overall data distribution changes significantly, a feature that is crucial to a distributed data structure like DIM. We describe a distributed algorithm for adjusting DIM's locality-preserving hash functions that trade off some locality for a more even data distribution, and so a more even energy consumption, among nodes. We show, using extensive simulations, that hotspots can be reduced by a factor of 4 or more with our scheme, with little overhead incurred for data migration and no penalty placed on overall energy consumption and average query costs. Finally, we show preliminary results based on a real implementation of our mechanism on the Berkeley motes.
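The locality-versus-balance trade-off can be seen in a one-dimensional analogue: recompute the boundaries of a locality-preserving hash so that each of k nodes stores roughly the same number of data points, while nearby values still land on the same or adjacent nodes. This is an illustrative simplification (an equi-depth partition of sampled data), not DIM's actual distributed mechanism.

```python
# Illustrative 1-D analogue of rebalancing a locality-preserving hash under
# skew: equi-depth boundaries computed from a sample. Not DIM's algorithm.

def rebalance_boundaries(samples, k):
    """Return k-1 split points giving each node ~len(samples)/k points."""
    ordered = sorted(samples)
    n = len(ordered)
    return [ordered[(i * n) // k] for i in range(1, k)]

def node_for(value, boundaries):
    """Locality-preserving placement: first node whose boundary exceeds value."""
    for i, b in enumerate(boundaries):
        if value < b:
            return i
    return len(boundaries)
```

With uniform data, evenly spaced boundaries already balance load; under skew, shifting the boundaries as above evens out storage at the cost of shrinking some nodes' value ranges, which is the locality-for-balance trade-off the paper manages in the distributed setting.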