Computer Communications

Published by Elsevier BV

Online ISSN: 0140-3664

Articles


Fig. 1. Basic cluster head strategies for sensor networks with a single static sink.  
Fig. 2. Basic approaches for data collection with random sink mobility.  
Fig. 3. Basic approaches for tracking the position of a sink with random mobility.  
Fig. 4. Basic approaches for data collection with fixed sink mobility.  
Fig. 5. Basic approaches for controlled sink mobility.

Static vs. Mobile sink: The influence of basic parameters on energy efficiency in wireless sensor networks

May 2013 · 419 Reads · Guenter Haring
Over the last decade a large number of routing protocols have been designed for achieving energy efficiency in data-collecting wireless sensor networks. The drawbacks of using a static sink are well known, and it has been argued in the literature that a mobile sink may reduce energy dissipation compared to a static one. Some authors focus on minimizing Emax, the maximum energy dissipation of any single node in the network, while others aim at minimizing Ebar, the average energy dissipation over all nodes. In our paper we take a more holistic view, considering both Emax and Ebar.
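
A minimal sketch of the two metrics contrasted above, assuming only a list of hypothetical per-node energy dissipation values (the data and the function name are illustrative, not from the paper):

    # Illustrative only: compute Emax and Ebar from per-node energy dissipation values.
    def energy_metrics(dissipation):
        """Return (Emax, Ebar): maximum and average per-node energy dissipation."""
        e_max = max(dissipation)                     # worst-case node, a lifetime proxy
        e_bar = sum(dissipation) / len(dissipation)  # average over all nodes
        return e_max, e_bar

    # Hypothetical per-node energy use (joules) for one data-collection scenario.
    static_sink = [0.9, 1.4, 3.2, 0.7, 2.8]
    print(energy_metrics(static_sink))   # (3.2, 1.8)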

A stochastic model for beaconless IEEE 802.15.4 MAC operation

August 2009 · 174 Reads · H. Hosseini · [...] · Y. Bashir
IEEE 802.15.4 is a popular choice for MAC/PHY protocols in low-power and low-data-rate wireless sensor networks. In this paper, we develop a stochastic model for the beaconless operation of the IEEE 802.15.4 MAC protocol. Given the number of nodes competing for channel access and their packet generation rates, the model predicts the packet loss probability and the packet latency. We compared the model predictions with NS2 simulation results and found an excellent match between the two over a wide range of packet generation rates and numbers of competing nodes in the network.

Configurable software-based edge router architecture

September 2004 · 92 Reads

This paper explores and proposes the use of open programmable router technologies to achieve dynamic configuration, adaptation and management of network entities. The objective is to provide highly dynamic and flexible edge router technology to support the interconnection between emerging short-range wireless technologies such as WPANs and IP-based networks. To improve current software-based edge router designs, the paper merges the Click framework (an open source software architecture for the forwarding plane) with the Forwarding and Control Element Separation (ForCES) principle. The proposed architecture is implemented on a real testbed based on standard PCs and open source operating systems. Results on the achievable performance of these software-based solutions and Click are reported, and the impact on traffic flows and applications in terms of packet losses and delays is evaluated.

On-the-fly TCP path selection algorithm in access link load balancing

January 2004 · 65 Reads

Many enterprises install multiple access links for fault tolerance or bandwidth enlargement. Dispatching connections through good links is the ultimate goal in utilizing multiple access links. The traditional dispatching method is based only on the condition of the access links to the ISPs; it may achieve fair utilization of the access links but poor per-connection throughput. In this work, we propose a new approach to maximize the per-connection end-to-end throughput using an on-the-fly round-trip time (RTT) probing mechanism. The RTTs through all possible links are probed by duplicating the SYN packet during the three-way handshake of a TCP connection. Combined with the statistical packet loss ratio and the passively collected link metrics, our algorithm can select, in real time, the link that provides the maximum throughput for the TCP connection. The experimental results show that the accuracy of choosing the best outgoing access link is over 71%. When the second-best link is chosen, it is usually very close to the best, so over 89% of the maximum possible throughput is still achieved. The average per-connection throughput is 94% for our algorithm versus 69% for the traditional round-robin algorithm.
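
A rough sketch of the selection step, not the authors' exact algorithm: each candidate link is scored from a probed RTT and a measured loss ratio, and the link with the highest estimated throughput is chosen. The Mathis-style throughput approximation and all link figures below are assumptions for illustration.

    import math

    # Hypothetical per-link measurements: RTT in seconds (from duplicated-SYN probing)
    # and packet loss ratio (collected passively).
    links = {
        "isp_a": {"rtt": 0.040, "loss": 0.010},
        "isp_b": {"rtt": 0.090, "loss": 0.002},
    }

    def estimated_throughput(rtt, loss, mss=1460):
        # Rough TCP throughput estimate (bytes/s); stands in for the paper's metric.
        loss = max(loss, 1e-6)           # avoid division by zero on lossless probes
        return (mss / rtt) * math.sqrt(1.5 / loss)

    best = max(links, key=lambda name: estimated_throughput(**links[name]))
    print("dispatch connection via", best)   # isp_a in this example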

Improving delivery ratios for application layer multicast in mobile ad-hoc networks

September 2004 · 37 Reads

Delivering multicast data using application layer approaches offers several advantages, as group members communicate over so-called overlay networks, which consist of a multicast group's members connected by unicast tunnels. Since existing approaches for application layer delivery of multicast data in mobile ad hoc networks (MANETs) deal only with routing and not with error recovery, this paper evaluates tailored mechanisms for handling packet losses and congested networks. Although illustrated with the example of a specific protocol, the mechanisms may be applied to arbitrary overlays. The paper also investigates how application layer functionality based on overlay networks can turn existing multicast routing protocols (like ODMRP, M-AODV, …) into (almost) reliable transport protocols.

Token-tray/weighted queuing-time (TT/WQT): An adaptive batching policy for near video-on-demand system

February 2001 · 26 Reads

In near video-on-demand (near-VoD), requests for a video title are grouped together (i.e. batched) and served with a single multicast stream, thereby increasing the number of concurrent users that can be supported by the system. Since users may not tolerate the delay incurred by batching and hence cancel their requests, a batching policy should be designed to achieve low user loss and high revenue (given by the total pay-per-view collected over a long period of time across all movies). We propose an adaptive batching policy which offers users low delay at low arrival rates and gates the allocation of channels at high rates. Such adaptivity is achieved by a simple “token-tray” (TT) scheme which governs when a stream may be allocated to a movie. In assigning a movie to a stream, we propose a weight function which depends on the user queuing time and the movie's pay-per-view price (hence the term “weighted queuing-time”, WQT). Comparing our batching policy (TT/WQT) with a number of traditional ones (FCFS, forced-wait, batch-size-based schemes, etc.), our scheme is shown to achieve the highest revenue and lowest loss rate even when the arrival rate changes, with the user loss rate across movies being fairly uniform and the user delay remaining fairly low even at high arrival rates.
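
A sketch of the assignment step under stated assumptions: when a stream becomes free, the movie with the largest weight is served, where the weight grows with the queuing time of its oldest request and with its pay-per-view price. The exact weight function and the token-tray gating of the paper are not reproduced here.

    # Illustrative only: choose which movie batch gets the next free stream.
    # Assumed weight = (waiting time of oldest request) * (pay-per-view price).
    def pick_movie(queues, prices, now):
        """queues: movie -> request arrival times; prices: movie -> pay-per-view fee."""
        best, best_weight = None, -1.0
        for movie, arrivals in queues.items():
            if not arrivals:
                continue
            weight = (now - min(arrivals)) * prices[movie]
            if weight > best_weight:
                best, best_weight = movie, weight
        return best

    queues = {"movie_a": [100.0, 130.0], "movie_b": [90.0]}
    prices = {"movie_a": 3.0, "movie_b": 5.0}
    print(pick_movie(queues, prices, now=160.0))   # movie_b: 70 * 5 = 350 beats 60 * 3 = 180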

A measurement-based admission control algorithm using variable-sized window in ATM networks

October 1997 · 5 Reads

Connection admission decisions in ATM networks should be made in real time using a fast algorithm. Since it is difficult to construct an accurate model of the multiplexed traffic, the multiplexed load must be approximated. We focus on a dynamic CAC (connection admission control) algorithm as a different approach, in which admission control decisions are based on network measurements. The algorithm observes the traffic through a moving window, and the window size is recomputed from the measured cell loss. This approach also makes it possible to reallocate network resources (bandwidth and buffers) among multiple traffic classes. The performance of the proposed method is analyzed by means of simulated tests.
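
A minimal sketch of the window adaptation idea, with an assumed update rule: the measurement window shrinks when the measured cell loss exceeds a target (so the estimate reacts to recent traffic) and grows when loss is low (so the estimate is smoother). The thresholds, factors, and bounds are illustrative, not the paper's formulas.

    # Illustrative only: recompute the moving-window size from measured cell loss.
    def next_window_size(current, measured_cell_loss,
                         target_loss=1e-6, min_size=100, max_size=100_000):
        if measured_cell_loss > target_loss:
            current = int(current * 0.5)    # loss too high: weight recent cells more
        else:
            current = int(current * 1.25)   # loss acceptable: smooth over more cells
        return max(min_size, min(max_size, current))

    w = 10_000
    for loss in (0.0, 0.0, 5e-6, 0.0):
        w = next_window_size(w, loss)       # 12500 -> 15625 -> 7812 -> 9765
    print(w)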

An admission control scheme for the real-time VBR traffic in the ATM network: deterministic bandwidth allocation

July 1998 · 17 Reads

We present an admission control scheme for real-time VBR traffic. A deterministic guarantee on the end-to-end delay bound is adopted as the measure of quality-of-service (QoS). The network environment of our admission control algorithm is a connection-oriented network in which every node uses the rate-controlled service (RCS) discipline based on the earliest deadline first (EDF) scheduling policy. Unlike previous studies, this paper focuses on designing an efficient bandwidth allocation scheme, in which the service curve and the spare capacity curve are the two key concepts employed. Experiments using parameters derived from real video traces show that our algorithm performs better than previous ones in terms of both network bandwidth utilization and the computational time needed.

Near-optimal data allocation over multiple broadcast channels

December 2004 · 18 Reads

Data broadcast has become a promising solution for information dissemination in wireless environments. In such a system, the average expected delay (aed) depends on the broadcast schedule, because different data items may have different popularities and multiple broadcast channels are available for concurrent delivery. In this paper, a restricted dynamic programming approach is proposed to partition data items over multiple channels near-optimally so as to minimize the aed. To reduce the cost of the dynamic programming, for each partition we predict a cut location that is likely to be very close to the optimal one by using a lower bound on the aed for the given items over the given channels. The search space of the dynamic programming can then be restricted to an interval around this cut point, taking only O(N log K) time. Moreover, a valley approximation algorithm is applied to further improve the aed. Simulation results show that the hit rate obtained by our algorithm is higher than 95% and that it outperforms the existing algorithm by 200%.
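
As a baseline for the restricted search described above, the sketch below runs the plain O(K N^2) dynamic program that splits popularity-sorted items into K contiguous channel groups. The flat-broadcast delay model (half the group length weighted by its total access probability) is a common simplification assumed here, not necessarily the paper's exact cost.

    # Illustrative only: exact DP partition of sorted items over K broadcast channels.
    def min_aed(probs, k):
        """probs: item access probabilities, sorted by popularity. Returns minimum aed."""
        n = len(probs)
        prefix = [0.0]
        for p in probs:
            prefix.append(prefix[-1] + p)

        def cost(i, j):                    # expected delay if items i..j-1 share a channel
            return (j - i) / 2.0 * (prefix[j] - prefix[i])

        dp = [[float("inf")] * (k + 1) for _ in range(n + 1)]
        dp[0][0] = 0.0
        for j in range(1, n + 1):
            for c in range(1, min(k, j) + 1):
                dp[j][c] = min(dp[i][c - 1] + cost(i, j) for i in range(c - 1, j))
        return dp[n][k]

    print(min_aed([0.4, 0.3, 0.2, 0.1], 2))   # 1.0, from the split {0.4, 0.3} | {0.2, 0.1}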

BlueBot: Asset tracking via robotic location crawling

August 2005 · 185 Reads

From manufacturers, distributors, and retailers of consumer goods to government departments, enterprises of all kinds are gearing up to use RFID technology to increase the visibility of goods and assets within their supply chains and on their premises. However, RFID technology alone cannot track the location of items once they are moved within a facility. We present a prototype automatic location sensing system that combines RFID technology with off-the-shelf Wi-Fi-based continuous positioning technology for tracking RFID-tagged assets. Our prototype employs a robot with an attached RFID reader, which periodically surveys the space, associating the items it detects with its own location as determined by the Wi-Fi positioning system. We propose four algorithms that combine a detected tag's reading with previous samples to refine its location. Our experiments show that our positioning algorithms yield a two- to three-fold improvement over the positional accuracy limitations of both the RFID reader and the positioning technology.
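
A simple fusion idea in the spirit of the system above, though not one of the paper's four algorithms: estimate a tag's position as the centroid of the robot's own Wi-Fi-derived positions at the moments its reader detected the tag. The sample coordinates are made up.

    # Illustrative only: centroid of the robot positions where the tag was read.
    def estimate_tag_position(detections):
        """detections: list of (x, y) robot positions at detection time."""
        if not detections:
            return None
        xs, ys = zip(*detections)
        return sum(xs) / len(xs), sum(ys) / len(ys)

    reads = [(2.0, 5.5), (2.4, 6.1), (1.8, 5.9)]   # hypothetical survey samples (metres)
    print(estimate_tag_position(reads))            # roughly (2.07, 5.83)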

An OVSF Code assignment scheme utilizing multiple Rake combiners for W-CDMA

June 2003 · 51 Reads

Orthogonal variable spreading factor (OVSF) codes have been proposed as the channelization codes used in the wideband CDMA access technology of IMT-2000. OVSF codes have the advantage of supporting variable bit rate services, which is important to emerging multimedia applications. The objective of an OVSF code assignment algorithm is to minimize the probability of code request denial due to inappropriate resource allocation. In this paper, we propose an efficient OVSF code assignment scheme that utilizes multiple Rake combiners in user equipment. Our approach finds, in constant time, all feasible codewords for any particular request while trying to minimize both rate wastage and code fragmentation. Simulation results show that our scheme outperforms previous work in the probability of request denial, and its code management overhead is minimal.

Optimal channel assignment in wireless communication networks with distance and frequency interferences

October 2004 · 33 Reads

Fixed channel assignment in wireless communication networks is an important combinatorial optimization problem that must be solved for practical applications. Since it is NP-hard, many different heuristics have been proposed for its solution. We consider two types of interference conditions for channel assignment: co-channel interference within a distance of two cells, and adjacent-channel interference within the same and adjacent cells. Our goal is to minimize or disallow these two types of interference in order to achieve optimal channel assignment. We first present a recursive search algorithm together with a neighborhood improvement structure, and then suggest a general approach combining several important heuristics. Our experimental results show that our algorithm improves over known approaches.

Source assisted partial destination slot release in slotted networks

July 1994 · 7 Reads

This paper presents a new scheme that enables destination nodes in a slotted shared-medium network to release some received slots, making them available for reuse by other downstream nodes. The scheme yields a significant increase in the effective capacity of the network without any noticeable degradation in node processing delays or increase in node complexity.

IPv6 over ATM flow-handling

July 1998 · 17 Reads

This paper introduces a new feature of IPv6 called the extension header. An extension header is located between the IPv6 header and the TCP header and offers new possibilities for IP-over-ATM signalling and high-capacity allocation in backbone networks. In IPv6 it is possible to insert an arbitrary number of extension headers between the Internet header and the payload.

Attacks on ID-based signature scheme based upon Rabin's public key cryptosystem

November 1993 · 14 Reads

Two attacks are given to show that the identity-based signature scheme proposed by C. C. Chang and C. H. Lin (1991), based upon Rabin's public key cryptosystem, is not secure enough. One of the attacks is based on the conspiracy of two users in the system, while the other can be performed by anyone acting alone. It is shown that, in the second attack, the scheme can be broken by anyone (not necessarily a user in the system) who has the ability to observe the communications between the signer and the receiver.

Dimensioning of play-out buffers for real time services in a B-ISDN

March 1998 · 6 Reads

A large fraction of the traffic that the future B-ISDN will probably transport is made up of real-time services with stringent delay and delay jitter requirements. Since ATM networks do not provide time-transparent links, delay equalisation has to be provided in the adaptation layer or in user equipment. In this paper we first propose an analytical model that allows evaluation of the end-to-end delay and of the relevant jitter (the delay we consider is the one suffered in the network queueing systems plus the time needed to transmit a cell, but not the propagation delay, which can be trivially added); we then focus on the dimensioning of play-out buffers. The proposed model is validated with simulation results, and we find good agreement between the analytical and simulation results. To make the study analytically tractable we use rather simple traffic source models and make suitable simplifying assumptions. These assumptions are fairly general, but the source models are still far from accurately representing certain kinds of real traffic. For this reason we also carry out an additional simulation study (supported by some analytical arguments), using real experimental MPEG, LAN and Internet traffic traces, to assess the system performance in realistic scenarios.
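
A common, simplified way to turn such delay and jitter figures into a play-out buffer setting, offered here only as a sketch and not as the paper's analytical model: choose the play-out delay as a high percentile of the measured network delay, so that most cells arrive before their play-out instant.

    # Illustrative only: percentile-based play-out (de-jitter) delay selection.
    def playout_delay(delays_ms, quantile=0.999):
        ordered = sorted(delays_ms)
        index = min(len(ordered) - 1, int(quantile * len(ordered)))
        return ordered[index]

    measured = [4.8, 5.1, 5.0, 7.9, 5.2, 6.4, 5.0, 12.3, 5.3, 5.1]   # hypothetical (ms)
    print(playout_delay(measured, quantile=0.9))   # 12.3 ms covers the worst observed delay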

On the Performance of Shared-Channel Multihop Lightwave Networks
The authors study the effect of channel sharing on the performance of multihop lightwave networks when channel sharing is achieved using a time-division multiple-access (TDMA) technique. They present a result that gives an upper bound on the throughput achievable with any virtual topology established over N stations, assuming that the traffic distribution is uniform and that all virtual links have the same capacity. Using this result they determine the optimal degree of channel sharing that maximizes throughput. They also determine the optimal degree of channel sharing when the criterion is to maximize the network power, defined as the ratio of throughput to delay.
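
A tiny numeric illustration of the network power criterion mentioned above, with made-up throughput and delay figures for a few hypothetical degrees of channel sharing:

    # Illustrative only: network power = throughput / delay, used to compare options.
    def network_power(throughput, delay):
        return throughput / delay

    candidates = {1: (0.55, 2.0), 2: (0.70, 2.2), 4: (0.80, 3.1)}   # degree -> (S, D)
    best = max(candidates, key=lambda d: network_power(*candidates[d]))
    print(best)   # 2, since 0.318 beats 0.275 and 0.258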

Collaborative Data Gathering in Wireless Sensor Networks Using Measurement Co-Occurrence

November 2007 · 38 Reads

Data gathering is a basic activity of many wireless sensor network applications. We propose a novel collaborative data gathering approach utilizing data co-occurrence, which is different from data correlation. Our approach offers a trade-off between the communication costs of data gathering and the errors in estimating sensor measurements at the base station: sensors with co-occurring measurements alternate in transmitting such measurements to the base station, and the base station makes inferences about sensor measurements using only the transmitted data. We describe two effective methods for in-network detection of measurement co-occurrence among sensors, an efficient protocol for scheduling the transmission of measurements, and a simple algorithm for measurement inference. Our simulation results on synthetic and real datasets show a substantial (up to 65%) reduction in the communication costs of data gathering with few inference errors at the base station.
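
A deliberately naive sketch of the alternation idea, assuming both the round-robin schedule and the copy-based inference below, which stand in for the paper's scheduling protocol and inference algorithm:

    # Illustrative only: sensors with co-occurring measurements take turns transmitting;
    # the base station copies the reported value for the sensors that stayed silent.
    def gather(rounds, group):
        """rounds: list of dicts sensor -> measurement; group: co-occurring sensor ids."""
        estimates = []
        for r, readings in enumerate(rounds):
            reporter = group[r % len(group)]                # whose turn it is to transmit
            estimates.append({s: readings[reporter] for s in group})
        return estimates

    rounds = [{"s1": 20.1, "s2": 20.3}, {"s1": 20.4, "s2": 20.6}]
    print(gather(rounds, ["s1", "s2"]))
    # round 0: s1 reports 20.1 (s2 inferred as 20.1); round 1: s2 reports 20.6 (s1 inferred as 20.6)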

Low-Complexity Detection by Exploiting Suboptimal Detection Order and Subcarrier Grouping for Multi-Layer MIMO-OFDM

June 2008 · 18 Reads

By combining transmit diversity and spatial multiplexing, the multi-layer multiple-input multiple-output orthogonal frequency division multiplexing (multi-layer MIMO-OFDM) system can exploit the potential of both techniques. The transmit antennas are divided into several layers and each layer is encoded with a certain multi-antenna coding. The antennas transmit OFDM signals in order to deal with frequency-selective fading. In addition to exhaustive detection, which applies the same detection at each subcarrier independently, we exploit the subcarrier correlation to develop a subcarrier-grouping based low-complexity detection scheme. The subcarriers bundled into the same group share the same layer detection order, which is obtained by post-nulling signal power comparison at the center subcarrier of that group. The simulation results show that, compared with exhaustive detection, the proposed low-complexity detection scheme reduces complexity by 50.7% with only 0.8 dB performance degradation for a 6x6 system, and by 60.7% with about 1.0 dB performance degradation for an 8x8 system.

An efficient traffic engineering approach based on flow distribution and splitting in MPLS networks

December 2004 · 54 Reads

This paper develops an efficient method based on traffic flow distribution and splitting for traffic engineering in MPLS networks. We define flow distribution as selecting one of the available label switched paths (LSPs) to carry one aggregated traffic flow, whereas flow splitting is the mechanism by which multiple parallel LSPs share one single aggregated flow. Our studies show that the flow distribution and flow splitting approaches readily solve routing problems such as the bottleneck and mismatch problems. An algorithm based on network bandwidth utilization is also proposed to integrate both approaches. The simulation results at the end of the paper demonstrate the effectiveness of the proposed approaches.

PoX: Protecting users from malicious Facebook applications

April 2011 · 31 Reads

Online social networks such as Facebook, MySpace, and Orkut store large amounts of sensitive user data. While a user can legitimately assume that a social network provider adheres to strict privacy standards, we argue that it is unwise to trust third-party applications on these platforms in the same way. Although the social network provider would be in the best position to implement fine-grained access control for third party applications directly into the platform, existing mechanisms are not convincing. Therefore, we introduce PoX, an extension for Facebook that makes all requests for private data explicit to the user and allows her to exert fine-grained access control over what profile data can be accessed by individual applications. By leveraging a client-side proxy that executes in the user's web browser, data requests can be relayed to Facebook without forcing the user to trust additional third parties. Of course, the presented system is backwards compatible and transparently falls back to the original behavior if a client does not support our system. Thus, we consider PoX to be a readily available alternative for privacy-aware users that do not want to wait for privacy-relevant improvements to be implemented by Facebook itself.

A method to improve the robustness of MPEG video applications over wireless networks

February 2000 · 17 Reads

It is important that applications deployed in wireless networks be robust enough to accommodate different kinds of data losses due to link errors, connection re-routing, and network congestion. A scheme to improve the robustness of MPEG-based video applications is proposed. Under this scheme, the packet size at the sender is optimized to match the MPEG system layer unit size, enabling fast recovery of the time stamps. The receiver buffers the data packets for re-playing during packet losses, preserves useful time stamps for synchronisation, and re-synchronises the decoder when packet loss occurs. This involves no feedback and results in faster recovery of critical timing information for video decoding. The proposed method has been implemented in a video application based on MPEG-I over an ATM network. The experiments show that the proposed technique is very effective when there are deterministic or random packet losses with burst sizes of less than 21. The method is equally applicable to video applications based on the MPEG-II program stream and to the wireless Internet.

Self-optimizing window flow control in high speed data networks

January 1993 · 13 Reads

The critical parameter in a window flow control scheme is the window size, which represents the maximum number of packets that can be in transit at a time. In this paper, we consider the problem of selecting the optimum window size in high-speed data networks. A self-optimizing method is proposed to adapt the window size to network conditions. The scheme employs a cross-correlation technique for process identification where the perturbation signal is random. Simulation results are presented for the isarithmic flow control to show the convergence rate and stability characteristics of the self-optimizing scheme.
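
A minimal sketch of the self-optimizing idea under stated assumptions: a random dither is applied to the window size, its effect on an observed performance metric is estimated with a two-sided probe, and the window is stepped in the direction the result suggests. The metric, gains, and bounds are assumptions, and the two-sided probe is a simplified stand-in for the paper's cross-correlation identification.

    import random

    # Illustrative only: dithered search for the window size that maximizes a metric.
    def self_optimizing_window(measure, w=8, rounds=200, step=1, seed=1):
        rng = random.Random(seed)
        for _ in range(rounds):
            d = rng.choice((-1, 1))                      # random perturbation signal
            gain = d * (measure(w + d) - measure(w - d)) # correlate response with dither
            if gain > 0:
                w = max(1, w + step)
            elif gain < 0:
                w = max(1, w - step)
        return w

    # Hypothetical network response: a power-like metric peaking at a window of 12.
    metric = lambda win: -(win - 12) ** 2
    print(self_optimizing_window(metric))   # settles at 12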

The performance comparison between TCP Reno and TCP Vegas

November 2000 · 291 Reads

Since the development of TCP Vegas, the performance comparison between TCP Reno and TCP Vegas has not been discussed thoroughly, and the existing discussion of the revised version remains insufficient to decide whether or not to adopt it. This paper compares the performance, in terms of throughput and fairness, of Reno and Vegas in network environments with homogeneous and heterogeneous TCP versions. The results indicate that while TCP Vegas obtains better throughput and fairness in the homogeneous cases, it fails to outperform Reno in the heterogeneous cases. This phenomenon prevents users from adopting TCP Vegas despite its better performance.

The need for network management

March 1991 · 24 Reads

As organizations have become increasingly dependent on their networked computing environments, effective network management has become a key element in the success of those networks. A flurry of activity in the network management field has resulted in a number of different standards efforts taking place concurrently, with different manufacturers adopting different standards for current products. The standards activities of most importance to one company, 3Com UK Ltd, are outlined, and the possibilities for future product development are discussed, particularly with reference to 3Com/IBM's recent Heterogeneous LAN Management (HLM) specification announcement.
