Computer Networks

Published by Elsevier BV

Print ISSN: 1389-1286

Articles


Figure 1. Node-disjoint paths.
Figure 4. Pseudocode for the destination node in IZM-DSR.
Figure 5. An example of the proposed algorithm.
Figure 6. The destination receives RREQs in IZM-DSR (node 1 receives RREQs from nodes 2 and 3, so the counter field in its RREQ_Seen table is set to two).
Figure 7. The source node receives RREPs in IZM-DSR.

IZM-DSR: A New Zone-Disjoint Multi-path Routing Algorithm for Mobile Ad-Hoc Networks

December 2009

·

353 Reads

Some multi-path routing algorithms in MANETs use multiple paths simultaneously. These algorithms can attempt to find node-disjoint paths to achieve higher fault tolerance. By using node-disjoint paths, it is expected that the end-to-end delays of the paths are independent of each other. However, because of the natural properties and medium access mechanisms of ad hoc networks, such as CSMA/CA, the end-to-end delay between a source and a destination depends on the pattern of communication in the neighborhood region: some intermediate nodes must stay silent in deference to their transmitting neighbors, which increases the end-to-end delay. To avoid this problem, multi-path routing algorithms can use zone-disjoint paths instead of node-disjoint paths. In this paper we propose a new multi-path routing algorithm that selects zone-disjoint paths using omni-directional antennas. We evaluate our algorithm in several different scenarios. The simulation results show that our approach is very effective in decreasing routing overhead and end-to-end delay.
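
To make the zone-disjointness notion concrete, here is a minimal Python sketch, not the authors' IZM-DSR code: it treats a path's interference zone as its intermediate nodes plus their one-hop neighbors; the neighbor table and node names are invented for the example.

def zone_disjoint(path1, path2, neighbors):
    # Zone-disjoint: no intermediate node of one path lies inside the
    # interference zone (nodes plus one-hop neighbors) of the other's
    # intermediate nodes; the shared source/destination are excluded.
    inner1, inner2 = set(path1[1:-1]), set(path2[1:-1])
    zone2 = set(inner2)
    for n in inner2:
        zone2 |= neighbors.get(n, set())
    return inner1.isdisjoint(zone2)

# Node-disjoint but NOT zone-disjoint: intermediate nodes a and x are
# one-hop neighbors, so they contend for the medium under CSMA/CA.
neighbors = {"a": {"s", "b", "x"}, "b": {"a", "d"},
             "x": {"s", "y", "a"}, "y": {"x", "d"}}
print(zone_disjoint(["s", "a", "b", "d"], ["s", "x", "y", "d"], neighbors))  # False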

Distributed Routing, Relay Selection, and Spectrum Allocation in Cognitive and Cooperative Ad Hoc Networks

July 2010

·

115 Reads

Cooperative relaying and dynamic-spectrum-access/cognitive techniques are promising solutions to increase the capacity and reliability of wireless links by exploiting the spatial and frequency diversity of the wireless channel. Yet, the combined use of cooperative relaying and dynamic spectrum access in multi-hop networks with decentralized control is far from being well understood. We study the problem of network throughput maximization in cognitive and cooperative ad hoc networks through joint optimization of routing, relay assignment and spectrum allocation. We derive a decentralized algorithm that solves the power and spectrum allocation problem for two common cooperative transmission schemes, decode-and-forward (DF) and amplify-and-forward (AF), based on convex optimization and arithmetic–geometric mean approximation techniques. We then propose and design a practical medium access control protocol in which the probability of accessing the channel for a given node depends on a local utility function determined as the solution of the joint routing, relay selection, and dynamic spectrum allocation problem. Therefore, the algorithm aims at maximizing the network throughput through local control actions and with localized information only. Through discrete-event network simulations, we finally demonstrate that the protocol provides significant throughput gains with respect to baseline solutions.
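
As a worked illustration of the DF scheme the abstract mentions, the sketch below uses the textbook half-duplex decode-and-forward rate (the relay must decode, and the destination combines both copies); this is a hedged sketch with invented SNR values, not the paper's joint optimization.

import math

def df_rate(snr_sd, snr_sr, snr_rd):
    # Half-duplex DF: the rate is limited by the source-relay decode step
    # and by what the destination collects from the source and relay copies.
    return 0.5 * min(math.log2(1 + snr_sr), math.log2(1 + snr_sd + snr_rd))

def pick_relay(snr_sd, relays):
    # relays: {relay_id: (snr_sr, snr_rd)}; fall back to direct transmission.
    best_id, best_rate = None, math.log2(1 + snr_sd)
    for rid, (sr, rd) in relays.items():
        r = df_rate(snr_sd, sr, rd)
        if r > best_rate:
            best_id, best_rate = rid, r
    return best_id, best_rate

print(pick_relay(0.5, {"r1": (8.0, 6.0), "r2": (2.0, 9.0)}))  # relay r1 wins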

Securing distributed adaptation

February 2001

·

33 Reads

Open architecture networks provide applications with fine-grained control over network elements. With this control comes the risk of misuse and new challenges to security beyond those present in conventional networks. One particular security requirement is the ability of applications to protect the secrecy and integrity of transmitted data while still allowing trusted active elements within the network to operate on that data. This paper describes mechanisms for identifying trusted nodes within a network and securely deploying adaptation instructions to those nodes while protecting application data from unauthorized access and modification. Promising experimental results of our implementation within the Conductor adaptation framework are also presented, suggesting that such features can be incorporated into real networks.

AMRST: adaptive multicast routing protocol for satellite-terrestrial networks

February 2001

·

94 Reads

Multicast provides an efficient way of distributing multimedia information to a set of destinations simultaneously with the highest possible data rate. To enhance multicasting performance, an adaptive multicast routing (AMR) approach for the satellite-terrestrial network (a hybrid network interconnected by VSAT systems) is proposed. It can dynamically adjust the routing path to obtain a minimal routing cost through a re-routing operation, and it supports dynamic membership (joining and leaving). The simulation results demonstrate that AMR achieves better performance and lower routing costs than the other Internet multicast algorithms considered.

Smooth and adaptive forward erasure correcting
This paper presents schemes for transmitting (high level) messages by transferring cells from a sender to a receiver and back. Forward erasure correcting codes are used to cope with cell losses. Smooth transmission ensures that the delay of all transmitted cells is the same. Adaptive forward erasure correcting protocols change the number of redundant cells used to cope with cell losses automatically on the fly, in response to the performance of the connecting link/path. We present schemes that ensure smooth transmission when no losses occur and adaptive erasure correction when cells are frequently lost.

Proactive caching of DNS records: addressing a performance bottleneck

February 2001

·

141 Reads

The resolution of a host name to an IP-address is a necessary predecessor to connection establishment and HTTP exchanges. Nonetheless, DNS resolutions often involve multiple remote name-servers and prolong Web response times. To alleviate this problem, name servers and Web browsers cache query results. Name servers currently incorporate passive cache management where records are brought into the cache only as a result of clients' requests and are used for the TTL duration (a TTL value is provided with each record). We propose and evaluate different enhancements to passive caching that reduce the fraction of HTTP connection establishments that are delayed by long DNS resolutions. Renewal policies refresh selected expired cached entries by issuing unsolicited queries. Trace-based simulations using Web proxy logs demonstrated that a significant fraction of cache misses can be eliminated with a moderate overhead. Simultaneous-validation (SV) transparently uses expired records. A DNS query is issued if the respective cached entry is no longer fresh, but concurrently, the expired entry is used to connect to the Web server and fetch the requested content. The content is served only if the expired records used turn out to be in agreement with the query response.
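
A minimal sketch of the renewal idea, with invented structure: popular entries are re-resolved when their TTL lapses, so later lookups are not delayed. The resolve callable stands in for a real upstream DNS query.

class RenewingCache:
    def __init__(self, resolve, top_n=100):
        self.resolve = resolve        # callable: name -> (address, ttl)
        self.top_n = top_n            # how many popular names to renew
        self.entries = {}             # name -> (address, expiry time)
        self.hits = {}                # name -> lookup count (popularity)

    def lookup(self, name, now):
        self.hits[name] = self.hits.get(name, 0) + 1
        entry = self.entries.get(name)
        if entry and entry[1] > now:
            return entry[0]           # fresh cache hit, no resolution delay
        addr, ttl = self.resolve(name)
        self.entries[name] = (addr, now + ttl)
        return addr

    def renew(self, now):
        # Renewal policy: proactively refresh only popular expired entries
        # by issuing unsolicited queries, eliminating future misses.
        for name in sorted(self.hits, key=self.hits.get, reverse=True)[:self.top_n]:
            entry = self.entries.get(name)
            if entry and entry[1] <= now:
                addr, ttl = self.resolve(name)
                self.entries[name] = (addr, now + ttl)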

Optimal admission control algorithms for scheduling burst data in CDMA multimedia systems

December 2001

·

20 Reads

3rd generation mobile systems are mostly based on the wideband CDMA platform to support high bit rate packet data services. One important component to offer packet data service in CDMA is a burst admission control algorithm. In this paper, we propose and study a novel jointly adaptive burst admission algorithm, namely the jointly adaptive burst admission-spatial dimension algorithm (JABA-SD), to effectively allocate valuable resources in wideband CDMA systems to burst requests. In the physical layer, we have a variable rate channel-adaptive modulation and coding system which offers variable throughput depending on the instantaneous channel condition. In the MAC layer, we have an optimal multiple-burst admission algorithm. We demonstrate that synergy could be attained by interactions between the adaptive physical layer and the burst admission layer. We formulate the problem as an integer programming problem and derive an optimal scheduling policy for the jointly adaptive design. Both the forward link and the reverse link burst requests are considered, and the system is evaluated by dynamic simulations which take into account user mobility, power control and soft-handoff. We found that significant performance improvement, in terms of average packet delay, data user capacity and coverage, could be achieved by our scheme compared to the existing burst assignment algorithms.

The priority token bank: Integrated scheduling and admission control for an integrated-services network

June 1993

·

19 Reads

The author proposes a mechanism called the priority token bank for admission control, scheduling, and policing in integrated-services networks, where arrival processes and performance objectives can vary greatly from one packet stream to another. There are two principal components to the priority token bank: accepting or rejecting requests to admit entire packet streams, where acceptance means guaranteeing that the packet stream's performance objectives will be met, and scheduling the transmission of packets such that performance objectives are met, even under heavy loads. To the extent possible, the performance of traffic is also optimized beyond the requirements. The performance achieved with the priority token bank is compared to that of other typical algorithms. It is shown that, when operating under the constraint that the performance objectives of applications such as packet voice, HDTV (high-definition television) and bulk data transfer must be met in an ATM (asynchronous transfer mode) network, the mean delay experienced by other traffic is much better with the priority token bank. Equivalently, to achieve the same performance with other algorithms, it would be necessary to greatly reduce the network load.

Delay bounds for a network of guaranteed rate servers with FIFO aggregation

February 2002

·

74 Reads

To support quality of service guarantees in a scalable manner, aggregate scheduling has attracted a lot of attention in the networking community. However, while there are a large number of results available for flow-based scheduling algorithms, few such results are available for aggregate-based scheduling. We study a network implementing guaranteed rate (GR) scheduling with first-in-first-out (FIFO) aggregation. We derive an upper bound on the worst case end-to-end delay for the network. We show that, for a specific network configuration, the derived delay bound is not restricted by the utilization level of the guaranteed rate, whereas for a general network configuration it is.

Scheduling algorithms for multicast traffic in TDM/WDM networks with arbitrary tuning latencies

February 2001

·

23 Reads

We consider all-optical TDM/WDM broadcast and select networks. We assume that each network node is equipped with one fixed transmitter and one tunable receiver; tuning times are assumed to be not negligible with respect to the slot time. We discuss efficient scheduling algorithms to assign TDM/WDM slots to multicast traffic in such networks. Given the problem complexity, heuristic algorithms based on the Tabu Search methodology are proposed, and their performance is assessed using randomly created request matrices based on two types of multicast traffic patterns: a video-conference pattern and a server-distribution pattern. The considered performance index is the frame length required to schedule a given traffic request matrix.
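
The abstract does not give the heuristics' details; the skeleton below is only the generic Tabu Search loop such heuristics build on, with the problem-specific pieces (neighborhood, cost) left as parameters. For this problem, a state could be a slot-assignment matrix and the cost its frame length.

def tabu_search(initial, neighborhood, cost, iters=500, tenure=20):
    # Generic Tabu Search: always move to the best non-tabu neighbor,
    # even uphill, to escape local minima; a move stays tabu for `tenure`
    # iterations unless it beats the best cost seen so far (aspiration).
    current, best, best_cost = initial, initial, cost(initial)
    tabu = {}
    for step in range(iters):
        candidates = [(cost(s), move, s) for move, s in neighborhood(current)
                      if tabu.get(move, -1) < step or cost(s) < best_cost]
        if not candidates:
            break
        c, move, current = min(candidates, key=lambda t: t[0])
        tabu[move] = step + tenure
        if c < best_cost:
            best, best_cost = current, c
    return best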

New models and algorithms for programmable networks

February 2001

·

34 Reads

In today's IP networks most of the network control and management tasks are performed at the end points. As a result, many important network functions cannot be optimized due to lack of sufficient support from the network. The growing need for quality-guaranteed services has prompted suggestions to add more computational power to the network elements. This paper studies the algorithmic power of networks whose routers are capable of performing complex tasks. It presents a new model that captures the hop-by-hop datagram forwarding mechanism deployed in today's IP networks, as well as the ability to perform complex computations in network elements as proposed in the active networks paradigm. Using our framework, we present and analyze distributed algorithms for basic problems that arise in the control and management of IP networks. These problems include: route discovery, message dissemination, topology discovery, and bottleneck detection. Our results prove that, although adding computation power to the routers increases the message delay, it shortens the completion time for many tasks. The suggested model can be used to evaluate the contribution of added features to a router, and allows the formal comparison of different proposed architectures.

Optimal replication algorithms for hierarchical mobility management in PCS networks

April 2002

·

11 Reads

In the context of heterogeneous networks and diverse communication devices, real person-to-person communication can be achieved with a universal personal identification (UPI) that uniquely identifies an individual and is independent of the access networks and communication devices. Hierarchical mobility management techniques (MMTs) have been proposed to support UPI. Traditional replication methods for improving the performance of such MMTs are threshold-based. We present optimal replication algorithms that minimize the network messaging cost based on network structure, communication link cost, and user calling and mobility statistics. We develop our algorithms for both unicast and multicast replica update strategies. The performance of these replication algorithms is studied via large scale network simulations and compared to results obtained from the threshold-based method.

Broadcast anti-jamming systems

January 2001

·

78 Reads

In a traditional anti-jamming system a transmitter who wants to send a signal to a single receiver spreads the signal power over a wide frequency spectrum with the aim of stopping a jammer from blocking the transmission. In this paper we consider the case where there are multiple receivers and the sender wants to broadcast a message to all receivers such that colluding groups of receivers cannot jam the reception of any other receiver. We propose efficient coding methods that achieve this goal and link this problem to well-known problems in combinatorics. We also link a generalisation of this problem to the key distribution pattern problem studied in combinatorial cryptography.

QoSockets: A New Extension to the Sockets API for End-to-End Application QoS Management

February 1999

·

17 Reads

Distributed multimedia applications are sensitive to the quality of service (QoS) delivered by underlying communication networks. The main question this work addresses is how to adapt multimedia applications to the QoS delivered by the network and vice versa. We introduce QoSockets, an extension to the sockets mechanism to enable QoS reservation and management. QoSockets automatically generates the instrumentation to monitor QoS. It scrutinizes interactions among applications and transport protocols and collects statistics on the delivered QoS in QoS management information bases (MIBs). The main advantages of QoSockets are the following: (1) support of a single API for transport-layer QoS negotiation, connection establishment, and data transmission, and of a single API for OS QoS negotiation; (2) support of a single QoS negotiation protocol; (3) generality across application QoS needs; (4) automatic management of application QoS needs. QoSockets are available for Solaris and Linux and support RSVP, ATM adaptation, ST-II, TCP/UDP, and Unix native protocols.
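
QoSockets itself is a C-level sockets extension; the hypothetical Python wrapper below only illustrates the single-API idea, coupling a QoS specification and monitoring instrumentation with ordinary connection setup. The QoSSpec fields and the stats dictionary are invented stand-ins for the negotiated parameters and the QoS MIB.

import socket

class QoSSpec:
    def __init__(self, bandwidth_kbps, max_delay_ms):
        self.bandwidth_kbps = bandwidth_kbps    # hypothetical QoS parameters
        self.max_delay_ms = max_delay_ms

class QoSocket:
    def __init__(self, spec):
        self.spec = spec
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.stats = {"bytes_sent": 0}          # stand-in for the QoS MIB

    def connect(self, addr):
        # A real implementation would negotiate the network reservation
        # (e.g. via RSVP) here, alongside transport connection setup,
        # so the application sees one API for both steps.
        self.sock.connect(addr)

    def send(self, data):
        sent = self.sock.send(data)
        self.stats["bytes_sent"] += sent        # automatic instrumentation
        return sent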

VERA: An extensible router architecture

February 2001

·

56 Reads

We recognize two trends in router design: increasing pressure to extend the set of services provided by the router and increasing diversity in the hardware components used to construct the router. The consequence of these two trends is that it is becoming increasingly difficult to map the services onto the underlying hardware. Our response to this situation is to define a virtual router architecture, called VERA, that hides the hardware details from the forwarding functions. This paper presents the details of VERA and reports preliminary experiences implementing various aspects of the architecture.

A Coverage-Aware Clustering Protocol for Wireless Sensor Networks

January 2011

·

75 Reads

In energy-limited wireless sensor networks, network clustering and sensor scheduling are two efficient techniques for minimizing node energy consumption and maximizing network coverage lifetime. When integrating the two techniques, the challenges are how to select cluster heads and active nodes. In this paper, we propose a coverage-aware clustering protocol. In the proposed protocol, we define a cost metric that favors nodes whose sensing areas are more redundantly covered by energy-rich neighbors as better candidates for cluster heads, and we select active nodes in a way that tries to emulate the most efficient tessellation for area coverage. Our simulation results show that the network coverage lifetime can be significantly improved compared with an existing protocol.
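
One plausible reading of such a cost metric, hedged because the paper's exact formula is not given in the abstract: a node is a cheap cluster-head candidate when energy-rich neighbors redundantly cover its sensing area, since they can take over its sensing duties. All names and numbers below are illustrative.

def cluster_head_cost(node, coverage, energy):
    # coverage[n]: the set of nodes whose sensing disks overlap n's disk.
    # More redundant coverage by high-energy neighbors -> lower cost.
    redundancy = sum(energy[m] for m in coverage[node] if m != node)
    return 1.0 / (energy[node] * (1.0 + redundancy))

def pick_cluster_heads(coverage, energy, k):
    return sorted(energy, key=lambda n: cluster_head_cost(n, coverage, energy))[:k]

energy = {"n1": 1.0, "n2": 0.8, "n3": 0.9}
coverage = {"n1": {"n1", "n2", "n3"}, "n2": {"n1", "n2"}, "n3": {"n1", "n3"}}
print(pick_cluster_heads(coverage, energy, 1))  # ['n1']: most redundantly covered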

Dynamic Internet overlay deployment and management using the X-Bone

February 2000

·

72 Reads

The X-Bone dynamically deploys and manages Internet overlays to reduce their configuration effort and increase network component sharing. The X-Bone discovers, configures, and monitors network resources to create overlays over existing IP networks. Overlays are useful for deploying overlapping virtual networks on a shared infrastructure and for simplifying topology. The X-Bone extends current overlay management by adding dynamic resource discovery, deployment, and monitoring, and allows simultaneous participation in multiple overlays. Its two-layer IP in IP tunneled overlays support existing applications and unmodified routing, multicast, and DNS services in unmodified operating systems. This two-layer scheme uniquely supports recursive overlays, useful for fault tolerance and dynamic relocation. The X-Bone uses multicast to simplify resource discovery, and provides secure deployment as well as secure overlays. This paper presents the X-Bone architecture and discusses its components, its features, and their performance impact.

Schedule Burst Proactively for Optical Burst Switching Networks

April 2004

·

29 Reads

Optical burst switching (OBS) is a promising paradigm for the next-generation Internet infrastructure. In OBS, a key problem is to schedule bursts on wavelength channels with both fast and bandwidth-efficient algorithms so as to reduce burst loss. To date, most scheduling algorithms avoid burst contention locally (or reactively). In this paper, we propose several novel algorithms for scheduling bursts in OBS networks with and without wavelength conversion capability. Our algorithms try to proactively avoid burst contention likely to occur at downstream nodes. The basic idea is to serialize the bursts on an outgoing link to reduce the number of bursts that may arrive at downstream nodes simultaneously (thus reducing burst contention and the burst loss probability at downstream nodes). This can be accomplished by judiciously delaying locally assembled bursts beyond a pre-determined offset time at an ingress node using electronic memory. Compared with the existing algorithms, our proposed algorithms can significantly reduce the loss rate while ensuring that the maximum delay of a burst does not exceed its prescribed limit.
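
A toy sketch of the serialization idea, under the assumption that the ingress may delay each burst by at most extra_budget beyond its assembled start time; the actual scheduling algorithms are richer.

def serialize_bursts(bursts, extra_budget):
    # bursts: list of (start_time, duration) on one outgoing link. Delay
    # each burst, within its budget, until the link is free, so bursts
    # leave one after another instead of in a bunch that would collide
    # at downstream nodes.
    out, link_free = [], 0.0
    for start, duration in sorted(bursts):
        t = min(max(start, link_free), start + extra_budget)
        out.append((t, duration))                  # if the budget runs out,
        link_free = max(link_free, t + duration)   # the burst may still contend
    return out

print(serialize_bursts([(0.0, 2.0), (0.5, 1.0)], 3.0))  # [(0.0, 2.0), (2.0, 1.0)]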

ECN Verbose Mode: A Statistical Method for Network Path Congestion Estimation

April 2010

·

34 Reads

This article introduces a simple and effective methodology to determine the level of congestion in a network with an ECN-like marking scheme. The purpose of the ECN bit is to notify TCP sources of imminent congestion so that they can react before losses occur. However, ECN is a binary indicator which does not reflect the congestion level (i.e. the percentage of queued packets) of the bottleneck, thus preventing any adapted reaction. In this study, we use a counter in place of the traditional ECN marking scheme to record the number of times a packet has crossed a congested router. Based on this simple counter, we conduct a statistical analysis to accurately estimate the congestion level of each router on a network path. We detail in this paper an analytical method validated by preliminary simulations which demonstrate the feasibility and the accuracy of the proposed concept. We conclude with possible applications and expected future work.
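
The core of the counter idea fits in a few lines. This sketch assumes each congested router increments a per-packet field and the receiver normalizes the observed counts by the path length; the paper's statistical machinery for per-router estimates goes further.

def forward(counter, router_is_congested):
    # Router behaviour: increment the counter instead of setting one bit.
    return counter + 1 if router_is_congested else counter

def path_congestion_level(observed_counters, path_hops):
    # Receiver estimate: the mean count divided by the hop count
    # approximates the fraction of congested routers crossed.
    return sum(observed_counters) / (len(observed_counters) * path_hops)

print(path_congestion_level([2, 1, 2, 2], path_hops=5))  # 0.35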

Constructing end-to-end paths for playing media objects

February 2001

·

16 Reads

This paper describes a framework for constructing network services for accessing media objects. The framework, called end-to-end media paths, provides a new approach for building multimedia applications from component pieces. Based on input from the user and resource requirements from the media object, the system first discovers the sequence of nodes (end-to-end path) that both connect the source device to the sink device and possess sufficient resources to play the object. It then configures the individual nodes along this path with the modules (path segments) that implement the service.

Dynamic Hardware Plugins (DHP): Exploiting reconfigurable hardware for high-performance programmable routers

February 2001

·

83 Reads

This paper presents the dynamic hardware plugins (DHP) architecture for implementing multiple networking applications in hardware at programmable routers. By enabling multiple applications to be dynamically loaded into a single hardware device, the DHP architecture provides a scalable mechanism for implementing high-performance programmable routers. The DHP architecture is presented within the context of a programmable router architecture which processes flows in both software and hardware. Possible implementations are described as well as the prototype testbed at Washington University in Saint Louis.

A simple FIFO-based scheme for differentiated loss guarantees

July 2004

·

40 Reads

Today's Internet carries an ever broadening range of application traffic with different requirements. This has stressed its original, one-class, best-effort model, and has been one of the main drivers behind the many efforts aimed at introducing QoS. Those efforts have, however, experienced only limited success because their added complexity often conflicts with the scalability requirements of the Internet. This has motivated many proposals that try to offer service differentiation while keeping complexity low. This paper shares similar goals and proposes a simple scheme, BoundedRandomDrop (BRD), that supports multiple service classes. BRD focuses on loss differentiation, as although both losses and delay are important performance parameters, the steadily rising speed of Internet links is progressively limiting the impact of delay differentiation. BRD offers strong loss differentiation capabilities with minimal added cost. BRD does not require traffic profiles or admission controls. It guarantees each class losses that, when feasible, are no worse than a specified bound, and enforces differentiation only when required to meet those bounds. In addition, BRD is implemented using a single FIFO queue and a simple random dropping mechanism. The performance of BRD is investigated for a broad range of traffic mixes and shown to consistently achieve its design goals.
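
A hedged sketch of bound-driven loss differentiation over one FIFO. BRD's actual rule is randomized; the deterministic victim choice and the bookkeeping below are invented to show how per-class loss bounds can drive drops only when needed.

class DifferentiatedFIFO:
    def __init__(self, capacity, loss_bounds):
        self.q = []                              # the single shared FIFO
        self.capacity = capacity
        self.bounds = loss_bounds                # class -> loss-rate bound
        self.arrived = {c: 0 for c in loss_bounds}
        self.dropped = {c: 0 for c in loss_bounds}

    def loss(self, c):
        return self.dropped[c] / max(1, self.arrived[c])

    def enqueue(self, pkt, cls):
        self.arrived[cls] += 1
        if len(self.q) < self.capacity:
            self.q.append((pkt, cls))
            return True
        # Overflow: push the loss onto the class with the most headroom
        # below its bound, so differentiation appears only when required.
        victim = max(self.bounds, key=lambda c: self.bounds[c] - self.loss(c))
        if victim != cls:
            for i in range(len(self.q) - 1, -1, -1):
                if self.q[i][1] == victim:
                    del self.q[i]
                    self.dropped[victim] += 1
                    self.q.append((pkt, cls))
                    return True
        self.dropped[cls] += 1                   # drop the arriving packet
        return False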

Improving the performance of interactive TCP applications using service differentiation

February 2002

·

68 Reads

Interactive TCP applications, such as Telnet and the Web, are particularly sensitive to network congestion. Indeed, congestion-induced queuing and packet loss can be a significant cause of large delays and variability, thereby decreasing user-perceived quality. We consider addressing these effects using service differentiation, by giving priority to interactive applications' traffic in the network. We study different packet marking schemes and handling mechanisms (packet dropping and scheduling) in the network. For marking packets, two approaches are considered. First, we look into application-based marking, and show how the protection of Telnet traffic against loss can eliminate large echo delays caused by retransmit timeouts, and how, by limiting packet loss for Web page downloads, their delays can be significantly reduced, resulting in enhanced interactivity. Second, we consider differentiation based on TCP state, where we present a marking algorithm that prioritizes packets at the source, based on each connection's window size. In addition, we describe the shaping mechanisms required for conformance to agreements with the network. We show how this marking results in good response times for short transfers, which are characteristic of interactive applications, without significantly affecting longer ones.
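
A small sketch of the TCP-state-based marking plus conformance shaping described above; the window threshold and token-bucket parameters are invented, and a real deployment would set the DS field rather than return a string.

class WindowBasedMarker:
    def __init__(self, cwnd_threshold=8, rate=100.0, burst=50.0):
        self.threshold = cwnd_threshold      # packets; small window suggests an
        self.rate, self.burst = rate, burst  # interactive or short transfer;
        self.tokens, self.last = burst, 0.0  # the bucket enforces conformance

    def classify(self, cwnd_packets, now):
        # Refill the token bucket, then mark high priority only while the
        # connection's window is small and tokens remain (conformance with
        # the agreed high-priority traffic profile).
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if cwnd_packets < self.threshold and self.tokens >= 1.0:
            self.tokens -= 1.0
            return "high"
        return "normal"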

Rerouting Schemes for Dynamic Traffic Grooming in Optical WDM Mesh Networks

July 2008

·

70 Reads

Traffic grooming in optical WDM mesh networks is a two-layer routing problem to pack low-rate connections effectively onto high-rate lightpaths, which, in turn, are established on wavelength links. We employ the rerouting approach to improve the network throughput under the dynamic traffic model. We propose two rerouting schemes, rerouting at lightpath level (RRAL) and rerouting at connection level (RRAC). A qualitative comparison is made between RRAL and RRAC. We also propose the critical-wavelength-avoiding one-lightpath-limited (CWA-1L) and critical-lightpath-avoiding one-connection-limited (CLA-1C) rerouting heuristics, which are based on the respective rerouting schemes. Simulation results show that rerouting reduces the connection blocking probability significantly.

A Structural Approach for PoP Geo-Location

April 2010

·

125 Reads

Inferring PoP level maps is gaining interest due to its importance to many areas, e.g., for tracking the Internet evolution and studying its properties. In this paper we introduce a novel structural approach to automatically generate large scale PoP level maps using traceroute measurements from multiple locations. The PoPs are first identified based on their structure, and then are assigned a location using collaborated information from several geo-location databases. Using this approach, we could evaluate the accuracy of these databases and suggest means to improve it. The PoP-PoP edges, which are extracted from the traceroutes, present a fairly rich AS-AS connectivity map.

Optimizing the quality of scalable video streams on P2P networks

October 2006

·

68 Reads

The volume of multimedia data, including video, served through peer-to-peer (P2P) networks is growing rapidly. Unfortunately, high bandwidth transfer rates are rarely available to P2P clients on a consistent basis, making it difficult to use P2P networks to stream video for on-line viewing. In this paper, we develop and evaluate on-line algorithms that coordinate the pre-fetching of scalably-coded variable bitrate video. These algorithms are ideal for P2P environments in that they require no knowledge of the future variability or availability of bandwidth, yet produce a playback whose average rate and variability are comparable to the best off-line prefetching algorithms that have total future knowledge. To show this, we develop an off-line algorithm that provably optimizes quality and variability metrics. Using simulations based on actual P2P traces, we compare our on-line algorithms to the optimal off-line algorithm and find that our novel on-line algorithms exhibit near-optimal performance and significantly outperform more traditional pre-fetching methods.

Performance analysis of the General Packet Radio Service

May 2001

·

75 Reads

This paper presents an efficient and accurate analytical model for the radio interface of the General Packet Radio Service (GPRS) in a GSM network. The model is utilized for investigating how many packet data channels should be allocated for GPRS under a given amount of traffic in order to guarantee appropriate quality of service. The presented model constitutes a continuous-time Markov chain. The Markov model represents the sharing of radio channels by circuit-switched GSM connections and packet-switched GPRS sessions under a dynamic channel allocation scheme. In contrast to previous work, the Markov model explicitly represents the mobility of users by taking into account arrivals of new GSM and GPRS users as well as handovers from neighboring cells. To validate the simplifications necessary for making the Markov model amenable to numerical solution, we provide a comparison of the results of the Markov model with a detailed simulator on the IP level.
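
The paper's chain is multidimensional (GSM circuits plus GPRS sessions plus mobility); as a hedged, one-dimensional stand-in, the sketch below only shows the mechanics of solving a birth-death Markov chain for channel occupancy, with illustrative rates.

def birth_death_stationary(up, down):
    # up[i]: rate from state i to i+1; down[i]: rate from state i+1 to i.
    # Detailed balance gives pi[i+1] = pi[i] * up[i] / down[i].
    pi = [1.0]
    for u, d in zip(up, down):
        pi.append(pi[-1] * u / d)
    total = sum(pi)
    return [p / total for p in pi]

# Example: 8 traffic channels, Poisson call arrivals at 4/s, exponential
# holding times at 1/s per busy channel; the last state's probability is
# the blocking probability (Erlang-B for these rates).
c, lam, mu = 8, 4.0, 1.0
pi = birth_death_stationary([lam] * c, [mu * (i + 1) for i in range(c)])
print(f"blocking probability: {pi[-1]:.4f}")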

RRR: recursive round robin scheduler

February 1998

·

71 Reads

Scheduling has been an interesting problem since its inception. In the context of real-time networks, a scheduling algorithm is concerned with dispatching streams of packets sharing the same bandwidth such that certain guaranteed performance for each stream, such as rate and delay bounds, is provided. This function has a wide range of applications in network elements such as host adaptors, routers and switches. This paper proposes and describes a new scheduling algorithm, named the recursive round robin (RRR) scheduler, for scheduling fixed-sized packets. It is based on the construction of a scheduling tree in which distinct cell streams are scheduled recursively. Special emphasis is placed on the design and analysis of the scheduler. A delay bound is analytically derived for the scheduler and verified using simulation. It is shown that the work-conserving version of the scheduler is fair, and fairness indexes for it are analytically derived. The simple nature of this algorithm makes it possible to implement it at very high speeds, while considerably reducing the implementation cost.
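
The exact tree construction is in the paper; the sketch below is only a guess at the flavor of recursive scheduling over a tree: each internal node alternates between its two subtrees, which interleaves cell streams smoothly according to their position in the tree.

def rrr_schedule(tree, slots):
    # tree: a leaf is a flow name (str); an internal node is [left, right].
    # Each internal node remembers whose turn it is and flips it on every
    # visit, so descending the tree picks one flow per slot.
    turn = {}
    def visit(node):
        if isinstance(node, str):
            return node
        t = turn.get(id(node), 0)
        turn[id(node)] = t ^ 1
        return visit(node[t])
    return [visit(tree) for _ in range(slots)]

# Flow A gets every other slot; B and C evenly share the remaining half.
print(rrr_schedule(["A", ["B", "C"]], 8))  # A B A C A B A C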

The Active Process Interaction with Its Environment
Adding programmability to the interior of the network provides an infrastructure for distributed applications. Specifically, network management and control applications require access to and control of network device state. For example, a routing load balancing application may require access to the routing table, and a congestion avoidance application may require interface congestion information. There are fundamental problems associated with this interaction. In this paper, we study the basic tradeoffs associated with the interaction between an active process and its environment and present ABLE++ as an example architecture. Most notably, we explore two design trade-offs: efficiency vs. abstraction and application flexibility vs. security. We demonstrate the advantages of the architecture by implementing a congestion avoidance algorithm.

Key differences between HTTP/1.0 and HTTP/1.1

May 1999

·

486 Reads

The HTTP/1.1 protocol is the result of four years of discussion and debate among a broad group of Web researchers and developers. It improves upon its phenomenally successful predecessor, HTTP/1.0, in numerous ways. We discuss the differences between HTTP/1.0 and HTTP/1.1, as well as some of the rationale behind these changes.

The design and implementation of the NCTUns 1.0 network simulator

June 2003

·

128 Reads

This paper presents the design and implementation of the NCTUns 1.0 network simulator, which is a high-fidelity and extensible network simulator capable of simulating both wired and wireless IP networks. By using an enhanced simulation methodology, a new simulation engine architecture, and a distributed and open-system architecture, the NCTUns 1.0 network simulator is much more powerful than its predecessor, the Harvard network simulator, which was released to the public in 1999. The NCTUns 1.0 network simulator consists of many components. In this paper, we will present the design and implementation of these components and their interactions in detail.

The 1999 DARPA off-line intrusion detection evaluation

October 2000

·

217 Reads

Eight sites participated in the second Defense Advanced Research Projects Agency (DARPA) off-line intrusion detection evaluation in 1999. A test bed generated live background traffic similar to that on a government site containing hundreds of users on thousands of hosts. More than 200 instances of 58 attack types were launched against victim UNIX and Windows NT hosts in three weeks of training data and two weeks of test data. False-alarm rates were low (less than 10 per day). The best detection was provided by network-based systems for old probe and old denial-of-service (DoS) attacks and by host-based systems for Solaris user-to-root (U2R) attacks. The best overall performance would have been provided by a combined system that used both host- and network-based intrusion detection. Detection accuracy was poor for previously unseen, new, stealthy and Windows NT attacks. Ten of the 58 attack types were completely missed by all systems. Systems missed attacks because signatures for old attacks did not generalize to new attacks, auditing was not available on all hosts, and protocols and TCP services were not analyzed at all or to the depth required. Promising capabilities were demonstrated by host-based systems, anomaly detection systems and a system that performs forensic analysis on file system data.

Anticipating SMIL 2.0: The developing cooperative infrastructure for multimedia on the Web

May 1999

·

23 Reads

SMIL is the W3C recommendation for bringing synchronized multimedia to the Web. Version 1.0 of SMIL was accepted as a recommendation in June 1998. Work is expected to be underway soon on preparing the next version of SMIL, version 2.0. Issues that will need to be addressed in developing version 2.0 include not just adding new features but also establishing SMIL's relationship with various related existing and developing W3C efforts. In this paper we offer some suggestions for how to address these issues. Potential new constructs with additional features for SMIL 2.0 are presented. Other W3C efforts and their potential relationship with SMIL 2.0 are discussed. To provide a context for discussing these issues, this paper explores various approaches for integrating multimedia information with the World Wide Web. It focuses on the modeling issues on the document level and the consequences of the basic differences between text-oriented Web-pages and networked multimedia presentations.

Notes on SDL-2000 for the new millennium

May 2001

·

15 Reads

Specification and description language (SDL) is the premier language for specification, design and development of real time systems, and in particular for telecommunications software. SDL-2000 became the international standard in force in November 1999, replacing the previous version. This paper gives an overview of SDL development in ITU-T up to the end of 1999. It covers a short history of SDL including details of the updates to SDL for SDL-2000. The paper fills a gap between previously published tutorials and the current SDL standard by providing notes on the updates. Plans for the further development of SDL in the 21st Century and the role of the SDL Forum Society are briefly considered.

The formal semantics of SDL-2000: Status and perspectives

June 2003

·

45 Reads

In November 1999, the current version of specification and description language (SDL), commonly referred to as SDL-2000, passed through ITU-T. In November 2000, the formal semantics of SDL-2000 was officially approved to become part of the SDL language definition. It covers both the static and the dynamic semantics, and is based on the formalism of abstract state machines (ASMs). To support executability, the formal semantics defines, for each SDL specification, reference ASM code, which enables an SDL-to-ASM compiler. In this paper, we briefly survey and compare existing approaches to define the semantics of SDL formally. The ITU-T approach is then outlined in more detail, addressing the following steps: (1) mapping of non-basic language constructs to the core language, (2) checking of static semantics conditions, (3) definition of the SDL abstract machine (SAM), and (4) definition of the SDL virtual machine (SVM). The paper concludes with experiences from the SDL-to-ASM-compiler project. It is proposed that the SDL-2000 semantics can be adapted and extended to formally define the meaning of UML 2.0 class, composite structure, and statechart diagrams.

Bluetooth based wireless Internet applications for indoor hot spots: Experience of a successful experiment during CeBIT 2001

February 2003

·

41 Reads

Wireless Internet access based on wireless LAN or Bluetooth networks will become popular within the next few years. New services will be offered using location and personal profiles to filter Internet access. Moreover, “Push” services will allow unsolicited event-based information transfer depending on user configurations. We have introduced such services as “The Mobile Fairguide” during CeBIT 2001 in a Bluetooth network that covered a full hall of 25,000 m2 with 130 base stations. We were able to show that Bluetooth is a usable technology for such applications, especially with PDAs as terminal devices. Moreover, we tested our architecture in a live scenario with respect to scalability and mobility.

A space-based end-to-end prototype geographic information network for lunar and planetary exploration and emergency response (2002 and 2003 field experiments)

April 2005

·

27 Reads

Communications and imaging experiments conducted in the Arizona desert during July 2002 with the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS) helped to identify a fundamental suite of scientific instruments, focused on surface composition and temperature determination, for calibrating and validating NASA and USGS spaceborne and airborne sensors, and to integrate those instruments with a hybrid mobile wireless and satellite network for lunar and planetary exploration and emergency response. The 2002 experiment focused on the exchange of remotely sensed and ground-truth geographic information between analysts and field scientists. It revealed several modifications that would enhance the performance and effectiveness of geographic information networks (GIN) for lunar and planetary exploration and emergency response. Phase 2 experiments conducted during June 2003 at the USGS Earth Resources and Observation Systems (EROS) Data Center’s geologic imaging test site near Dinosaur National Monument in the NE Utah desert incorporated several of the lessons learned from the 2002 experiment and successfully added five major new components: (1) near-real-time hyperspectral and multispectral satellite image acquisition, (2) remotely controlled and coordinated mobile real-time ground sensor measurements during the imaging satellite overpass, (3) long-delay-optimized Transmission Control Protocol/Internet Protocol (TCP/IP) protocols to improve network performance over geosynchronous communications satellite circuits, (4) distributed, multinode parallel computing on NASA’s Internet Power GRID (IPG), and (5) near-real-time validation of satellite imagery as part of a successful test of the NASA–USGS National Emergency Mapping Information System.

Telephony in the year 2005

February 1999

·

10 Reads

The growth of packet-based voice services is leading to integration of voice and other services over packet switched data networks. This paper explores a possible path that the telephone service industry may follow as this integration is accelerated by technological advances that improve the capabilities of packet-based services while reducing their costs.

Secure data aggregation in WSNs: A comprehensive overview

August 2009

·

4,058 Reads

Wireless sensor networks often consist of a large number of low-cost sensor nodes that have strictly limited sensing, computation, and communication capabilities. Because sensor nodes are resource-restricted, it is important to minimize the amount of data transmission so that the average sensor lifetime and the overall bandwidth utilization are improved. Data aggregation is the process of summarizing and combining sensor data in order to reduce the amount of data transmission in the network. As wireless sensor networks are usually deployed in remote and hostile environments to transmit sensitive information, sensor nodes are prone to node compromise attacks, and security issues such as data confidentiality and integrity are extremely important. Hence, wireless sensor network protocols, e.g., data aggregation protocols, must be designed with security in mind. This paper investigates the relationship between security and the data aggregation process in wireless sensor networks. A taxonomy of secure data aggregation protocols is given by surveying the current “state-of-the-art” work in this area. In addition, based on the existing research, open research areas and future research directions in secure data aggregation are provided.

Improving TCP performance in integrated wireless communications networks

February 2005

·

148 Reads

Many analytical and simulation-based studies of TCP performance in wireless environments assume an error-free and congestion-free reverse channel that has the same capacity as the forward channel. Such an assumption does not hold in many real-world scenarios, particularly in the hybrid networks consisting of various wireless LAN (WLAN) and cellular technologies. In this paper, we first study, through extensive simulations, the performance characteristics of four representative TCP schemes, namely TCP New Reno, SACK, Veno, and Westwood, under the network conditions of asymmetric end-to-end link capacities, correlated wireless errors, and link congestion in both forward and reverse directions. We then propose a new TCP scheme, called TCP New Jersey, which is capable of distinguishing wireless packet losses from congestion packet losses, and reacting accordingly. TCP New Jersey consists of two key components, the timestamp-based available bandwidth estimation (TABE) algorithm and the congestion warning (CW) router configuration. TABE is a TCP-sender-side algorithm that continuously estimates the bandwidth available to the connection and guides the sender to adjust its transmission rate when the network becomes congested. TABE is immune to ACK drops as well as ACK compression. CW is a configuration of network routers such that routers alert end stations by marking all packets when there is a sign of incipient congestion. The marking of packets by the CW-configured routers helps the sender of the TCP connection to effectively differentiate packet losses caused by network congestion from those caused by wireless link errors. Our simulation results show that TCP New Jersey is able to accurately estimate the available bandwidth of the bottleneck link of an end-to-end path, and that the TABE estimator is immune to link asymmetry, bi-directional congestion, and the relative position of the bottleneck link in the multi-hop end-to-end path. The proactive congestion avoidance control mechanism proposed in our scheme minimizes the network congestion, reduces the network volatility, and stabilizes the queue lengths while achieving more throughput than other TCP schemes.
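
A hedged sketch in the spirit of TABE, not its published filter: each ACK yields a bandwidth sample from the bytes it acknowledges and the interval between echoed timestamps, smoothed by an EWMA. Using sender timestamps echoed by the receiver, rather than ACK arrival times, is what makes such estimates robust to ACK compression.

class BandwidthEstimator:
    def __init__(self, alpha=0.9):
        self.alpha = alpha       # EWMA smoothing factor (invented value)
        self.estimate = 0.0      # smoothed available bandwidth, bytes/sec
        self.last_ts = None

    def on_ack(self, acked_bytes, echoed_ts):
        # echoed_ts: the timestamp carried back in the ACK, reflecting
        # when the acknowledged data was sent, not when the ACK arrived.
        if self.last_ts is not None and echoed_ts > self.last_ts:
            sample = acked_bytes / (echoed_ts - self.last_ts)
            self.estimate = self.alpha * self.estimate + (1 - self.alpha) * sample
        self.last_ts = echoed_ts
        return self.estimate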

Bro: A system for detecting network intruders in real-time

December 1999

·

358 Reads

We describe Bro, a stand-alone system for detecting network intruders in real-time by passively monitoring a network link over which the intruder's traffic transits. We give an overview of the system's design, which emphasizes high-speed (FDDI-rate) monitoring, real-time notification, clear separation between mechanism and policy, and extensibility. To achieve these ends, Bro is divided into an `event engine' that reduces a kernel-filtered network traffic stream into a series of higher-level events, and a `policy script interpreter' that interprets event handlers written in a specialized language used to express a site's security policy. Event handlers can update state information, synthesize new events, record information to disk, and generate real-time notifications via syslog. We also discuss a number of attacks that attempt to subvert passive monitoring systems and defenses against these, and give particulars of how Bro analyzes the six applications integrated into it so far: Finger, FTP, Portmapper, Ident, Telnet and Rlogin. The system is publicly available in source code form.
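
A toy sketch of the mechanism/policy split described above: a minimal event engine dispatches named events to handlers supplied by a policy layer. Bro's real event engine and policy language are far richer; the event name and handler here are invented.

class EventEngine:
    def __init__(self):
        self.handlers = {}              # event name -> policy handlers

    def on(self, name, handler):
        self.handlers.setdefault(name, []).append(handler)

    def emit(self, name, **fields):
        # The mechanism stops here: what an event *means* is decided
        # entirely by the policy handlers registered for it.
        for handler in self.handlers.get(name, []):
            handler(**fields)

engine = EventEngine()
engine.on("login_attempt",
          lambda user, ok: None if ok else print(f"ALERT: failed login for {user}"))
engine.emit("login_attempt", user="root", ok=False)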

Joint loss pattern characterization and unequal interleaved FEC protection for robust H.264 video distribution over wireless LAN

December 2005

·

52 Reads

The recently adopted H.264 standard achieves efficient video encoding and bandwidth savings. Thus, designing communication protocols and QoS control mechanisms for H.264 video distribution over wireless IP networks is a topic of intense research interest. Delivering video streams to terminals via a wireless last hop is indeed a challenging task due to the varying nature of the wireless link. While a common approach suggests exploiting the variations of the wireless channel, an alternative is to exploit characteristics of the video stream to improve the transmission. In this paper, we combine both approaches through an efficient wireless loss characterization and a low-delay unequal interleaved FEC protection. Besides deriving new QoS metrics for FEC block allocation, the wireless loss characterization is also used to adjust the interleaving level depending on the loss correlation exhibited by the wireless channel. This novel unequal interleaved FEC (UI-FEC) protocol allows graceful video quality degradation over error-prone wireless links while minimizing the overall bandwidth consumption and the end-to-end latency.
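
A minimal block interleaver sketch to make the "interleaving level" knob concrete: with depth d, a burst of up to d consecutive channel losses hits d different FEC blocks once each instead of one block d times. The adaptation described above amounts to choosing d from the measured loss correlation; the padding convention here is invented.

def interleave(cells, depth):
    # Write row by row into `depth` rows, read column by column.
    width = -(-len(cells) // depth)                  # ceiling division
    padded = cells + [None] * (width * depth - len(cells))
    rows = [padded[r * width:(r + 1) * width] for r in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

print(interleave(list(range(6)), depth=2))  # [0, 3, 1, 4, 2, 5]
# A burst losing the consecutively transmitted cells 0 and 3 costs each
# FEC block one cell, which single-erasure-correcting codes can repair.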

Network monitoring and performance evaluation in a 3.5G network

October 2007

·

125 Reads

Monitoring network performance and status is a fundamental task for network operators as it directly impacts the quality of the offered services and hence user satisfaction. For this purpose a consolidated approach, which is largely adopted by network operators, is based on the so-called KPIs (key performance indicators). In this paper, we propose and discuss a set of KPIs to monitor network performance of the new HSDPA enhanced UMTS infrastructure. KPI statistics are collected and analysed from the novel HSDPA network of H3G, one of the major Italian mobile network operators.

Potential and limitations of a teleteaching environment based on H.323 audio-visual communication systems

December 2000

·

29 Reads

The effective support of teleteaching services requires the development of multimedia collaboration systems that are capable of providing real-time and high quality audio-visual communication among distributed instructors and students. In the absence of such specialised systems, technologies tailored to other services are being considered for teleteaching services as well. Such a technology is the H.323 audio-visual communication technology developed to support video communication over IP. Although teleteaching and videoconferencing have similar QoS requirements, teleteaching functional requirements are a superset of those of videoconferencing. In this paper, the suitability of H.323 technology and currently available products to support teleteaching services is investigated, based on experience gained during a related deployment at the University of Athens.

ITU-T standardization activities for interactive multimedia communications on packet-based networks: H.323 and related recommendations

February 1999

·

8 Reads

The Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T) has developed a series of recommendations together comprising the H.323 system, which provides for multimedia communications in packet-based (inter)networks. This series of recommendations describes the types and functions of H.323 terminals and other H.323 devices as well as their interactions. The H.323 series of recommendations includes audio, video and data streams, but an H.323 system minimally requires only an audio stream to be supported. Motivated by straightforward interoperability with the ISDN and PSTN networks and a variety of other protocols, recommendation H.323 has been accepted as the standard for IP telephony; developed by the ITU-T and broadly backed by the industry, it has also been adopted by both the Voice over IP (VoIP) forum and the European Telecommunication Standards Institute (ETSI). This paper presents an overview of the H.323 system architecture with all its functional components and protocols and points out all the related specifications.

The MainStreetXpress 36190: A scalable and highly reliable ATM core services switch

March 1999

·

41 Reads

This paper describes the architecture and some of the main features of the MainStreetXpress 36190 ATM core services switch. This switch has been designed from the start as a fully functional, central-office-class ATM switch which can already be used today to create the backbone layer for commercial ATM-based multi-service networks. Due to its modular hardware and software architecture, it provides the required flexibility with respect to interface requirements, signalling/control capabilities and service support. The architecture, together with the ATM chip set specifically developed for this switch, allows the data throughput of the MainStreetXpress 36190 to be extended from 5 Gbit/s into the Tbit/s range by simply scaling the central switch fabric. The call processing capability can be scaled well into the MBHCA (million busy hour call attempts) range by taking advantage of the specifically developed multi-processor control platform. Full redundancy for all central hardware components and optional redundancy for interface boards and external transmission lines is provided. In combination with a comprehensive hardware/software maintenance and recovery concept based on the vast experience with the EWSD line of digital switches, these capabilities provide the reliability and availability required for large ATM carrier networks. Together with the full line of MainStreetXpress components – ranging from network terminations and LAN service units via flexible access multiplexers/switches to scalable ATM multi-service switches and a variety of specific server types, all controlled by a common network and service management – the MainStreetXpress 36190 provides carriers with the means to realize fully functional, cost-effective ATM-based multi-service platforms today.

Virtual reality movies: real-time streaming of 3D objects

November 1999

·

79 Reads

Powerful servers for computation and storage, high-speed networking resources, and high-performance 3D graphics workstations, which are typically available in scientific research environments, potentially allow the development and productive application of advanced distributed high-quality multimedia concepts. Several bottlenecks, often caused by the inefficient design and software implementation of current systems, prevent utilization of the offered performance of existing hardware and networking resources. We present a system architecture, which supports streamed online presentation of series of 3D objects. In the case of expensive simulations on a supercomputer, results are increasingly represented as 3D geometry to support immersive exploration, presentation, and collaboration techniques. Accurate representation and high-quality display are fundamental requirements to avoid misinterpretation of the data. Our system consists of the following parts: a preprocessor to create a special 3D representation – optimized under transmission and streamed presentation issues in a high-performance working environment, an efficiently implemented streaming server, and a client. The client was implemented as a web browser plugin, integrating a viewer with high-quality virtual reality presentation (stereoscopic displays), interaction (tracking devices), and hyperlinking capabilities.

Group registration with distributed databases for location tracking in 3G wireless networks

June 2008

·

31 Reads

The increase of subscribers in wireless networks has led to the need for efficient location tracking strategies. Location tracking is used to keep track of a Mobile Terminal (MT). The network retains the Registration Area (RA), where the MT last updated its location, so when an incoming call arrives for the MT, the network with the help of a location tracking strategy can find the area where the MT resides and then deliver the call. In this paper, we introduce a 2-level distributed database architecture combined with the Group Registration (GR) location tracking strategy to be used in 3G wireless networks. The GR strategy reduces the location management total cost, by updating the location of MTs in an RA with a single route response message to the HSS (Home Subscriber Server). More specifically, the IDs of the MTs newly moving into an RA are buffered and sent to the HSS for location updating in the route response message of the next incoming call to any MT in the RA. An analytical model is developed and numerical results are presented. It is shown that the GR strategy integrated with a 2-level distributed databases architecture in 3G networks can achieve location management cost reduction compared to costs of the distributed databases without the GR strategy and the GR strategy without distributed databases. Moreover, the proposed strategy results in small call delivery latency.
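
A minimal sketch of the GR buffering idea, with an invented HSS interface: location updates for MTs entering a registration area are batched and piggybacked on the route response of the next incoming call, so one message updates the whole group.

class RegistrationArea:
    def __init__(self, ra_id, hss):
        self.ra_id = ra_id
        self.hss = hss          # assumed to expose update_locations(ra, ids)
        self.pending = []       # MTs whose moves the HSS does not know yet

    def mt_enters(self, mt_id):
        self.pending.append(mt_id)   # buffer instead of updating immediately

    def on_incoming_call(self, callee_id):
        # Piggyback every buffered ID on this call's route response:
        # a single message replaces one update message per mobile.
        self.hss.update_locations(self.ra_id, [callee_id] + self.pending)
        self.pending.clear()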

Diagnosis of capacity bottlenecks via passive monitoring in 3G networks: An empirical analysis

March 2007

·

155 Reads

In this work we address the problem of inferring the presence of a capacity bottleneck from passive measurements in a 3G mobile network. The study is based on one month of packet traces collected in the UMTS core network of mobilkom austria AG & Co KG, the leading mobile telecommunications provider in Austria, EU. During the measurement period a bottleneck link in the UMTS core network was revealed and removed, therefore the traces enable the accurate analysis and comparison of the traffic behavior in the two network conditions: with and without a capacity bottleneck. Two approaches to bottleneck detection are investigated. The first one is based on the signal analysis of the marginal rate distribution of the traffic aggregate along one day cycle. Since TCP-controlled traffic dominates the overall traffic mix, the presence of a bottleneck strains the aggregate rate distribution and compresses it against the capacity limit during the peak hour. The second approach is based on the analysis of several TCP performance parameters, e.g. estimated frequency of retransmissions. Such statistics are unstable due to the presence of few top users, but this effect can be counteracted with simple filtering methods. Both approaches are validated via simulations. Our results show that both approaches can be used to provide early warning about future occurrences of capacity bottlenecks, and can complement other existing monitoring tools in the operation of a production network.
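
The first approach above reduces to checking whether the upper tail of the peak-hour rate distribution piles up against the link capacity; a hedged sketch, with an invented percentile and tolerance:

def bottleneck_suspected(peak_hour_rates, capacity, tol=0.05):
    # If the 95th percentile of the aggregate rate sits within `tol` of
    # the link capacity, TCP is likely being clipped by a bottleneck:
    # the rate distribution is compressed against the capacity limit.
    rates = sorted(peak_hour_rates)
    p95 = rates[int(0.95 * (len(rates) - 1))]
    return p95 >= (1.0 - tol) * capacity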

MBMS Handover control: A new approach for efficient handover in MBMS enabled 3G cellular networks

December 2007

·

205 Reads

With the introduction of the Multimedia Broadcast Multicast Service (MBMS) system in 3rd Generation (3G) mobile cellular networks, the Radio Network Controller (RNC) can, for radio resource efficiency, use either Point-to-Point (one Dedicated Channel per User Equipment (UE) in the Cell) or Point-to-Multipoint (one Common Channel shared by all the UEs in the Cell) transmission mode to distribute the same MBMS content in a Cell. Thus, mobile users that are on the move and receive an MBMS service may have to deal with dynamic changes of network resources when crossing the Cell’s edge, introducing new types of handovers (MBMS Handovers): from a Point-to-Point to a Point-to-Multipoint transmission mode Cell and vice versa. Executing an MBMS Handover using the existing handover control approach, as described in the 3GPP TR 25.922 specifications, results in inefficiencies. In this paper, we highlight these inefficiencies and propose and evaluate a new handover control approach to address them. As illustrated in our performance evaluation, compared to the existing approach, the proposed approach improves the overall system capacity and link performance and avoids any QoS degradation (seamless handover) during an MBMS Handover.
