Computer Networks

Published by Elsevier
Online ISSN: 1389-1286
Publications
Article
Some multi-path routing algorithms in MANETs use multiple paths simultaneously, and may attempt to find node-disjoint paths to achieve higher fault tolerance. With node-disjoint paths, the end-to-end delay along each path is expected to be independent of the others. However, because of the natural properties of ad hoc networks and their medium access mechanisms, such as CSMA/CA, the end-to-end delay between a source and a destination depends on the pattern of communication in the neighborhood region: some intermediate nodes must remain silent in deference to their neighbors, which increases the end-to-end delay. To avoid this problem, multi-path routing algorithms can use zone-disjoint paths instead of node-disjoint paths. In this paper we propose a new multi-path routing algorithm that selects zone-disjoint paths using omni-directional antennas. We evaluate our algorithm in several different scenarios, and the simulation results show that our approach is very effective in decreasing routing overhead and end-to-end delay.
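The zone-disjointness criterion can be made concrete with a small sketch. The neighbor model, names, and coordinates below are illustrative assumptions, not taken from the paper: two paths sharing only their endpoints are treated as zone-disjoint when no intermediate node of one path lies within radio range of any intermediate node of the other.

```python
# Hypothetical sketch of a zone-disjointness test. The unit-disk radio
# model and all names/coordinates are assumptions for illustration.

def neighbors(node, positions, radio_range):
    """All nodes within radio range of `node` (the node's 'zone')."""
    x, y = positions[node]
    return {
        other for other, (ox, oy) in positions.items()
        if other != node and (ox - x) ** 2 + (oy - y) ** 2 <= radio_range ** 2
    }

def zone_disjoint(path_a, path_b, positions, radio_range):
    """Paths share source/destination; compare intermediate nodes only."""
    inter_a = set(path_a[1:-1])
    inter_b = set(path_b[1:-1])
    if inter_a & inter_b:
        return False  # not even node-disjoint
    zone_a = set().union(*(neighbors(n, positions, radio_range) for n in inter_a))
    return not (zone_a & inter_b)

# Toy topology: two candidate paths from S to D.
positions = {
    "S": (0, 0), "A": (1, 2), "B": (2, 2),
    "C": (1, -2), "E": (2, -2), "D": (3, 0),
}
print(zone_disjoint(["S", "A", "B", "D"], ["S", "C", "E", "D"], positions, 1.5))  # True
```

Under CSMA/CA, nodes in range of one another contend for the medium, which is why this stronger test matters more than plain node-disjointness.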
 
Conference Paper
Cooperative relaying and dynamic-spectrum-access/cognitive techniques are promising solutions to increase the capacity and reliability of wireless links by exploiting the spatial and frequency diversity of the wireless channel. Yet, the combined use of cooperative relaying and dynamic spectrum access in multi-hop networks with decentralized control is far from being well understood. We study the problem of network throughput maximization in cognitive and cooperative ad hoc networks through joint optimization of routing, relay assignment and spectrum allocation. We derive a decentralized algorithm that solves the power and spectrum allocation problem for two common cooperative transmission schemes, decode-and-forward (DF) and amplify-and-forward (AF), based on convex optimization and arithmetic–geometric mean approximation techniques. We then propose and design a practical medium access control protocol in which the probability of accessing the channel for a given node depends on a local utility function determined as the solution of the joint routing, relay selection, and dynamic spectrum allocation problem. Therefore, the algorithm aims at maximizing the network throughput through local control actions and with localized information only. Through discrete-event network simulations, we finally demonstrate that the protocol provides significant throughput gains with respect to baseline solutions.
 
Conference Paper
Open architecture networks provide applications with fine-grained control over network elements. With this control comes the risk of misuse and new challenges to security beyond those present in conventional networks. One particular security requirement is the ability of applications to protect the secrecy and integrity of transmitted data while still allowing trusted active elements within the network to operate on that data. This paper describes mechanisms for identifying trusted nodes within a network and securely deploying adaptation instructions to those nodes while protecting application data from unauthorized access and modification. Promising experimental results of our implementation within the Conductor adaptation framework are also presented, suggesting that such features can be incorporated into real networks.
 
Conference Paper
Multicast provides an efficient way of distributing multimedia information to a set of destinations simultaneously with the highest possible data rate. To enhance multicasting performance, an adaptive multicast routing (AMR) approach for the satellite-terrestrial network (a hybrid network interconnected by VSAT systems) is proposed. It can dynamically adjust the routing path to obtain a minimal routing cost through a re-routing operation, and it supports dynamic membership (joining and leaving). The simulation results demonstrate that AMR achieves better performance and lower routing costs than other Internet multicast algorithms.
 
Conference Paper
This paper presents schemes for transmitting (high-level) messages by transferring cells from a sender to a receiver and back. Forward erasure correcting codes are used to cope with cell losses. Smooth transmission ensures that the delay of all transmitted cells is the same. Adaptive forward erasure correcting protocols automatically change the number of redundant cells used to cope with cell losses on the fly, in response to the performance of the connecting link/path. We present schemes that ensure smooth transmission when no losses occur and adaptive erasure correction when cells are frequently lost.
 
Conference Paper
The resolution of a host name to an IP address is a necessary predecessor to connection establishment and HTTP exchanges. Nonetheless, DNS resolutions often involve multiple remote name servers and prolong Web response times. To alleviate this problem, name servers and Web browsers cache query results. Name servers currently incorporate passive cache management, where records are brought into the cache only as a result of clients' requests and are used for the TTL duration (a TTL value is provided with each record). We propose and evaluate different enhancements to passive caching that reduce the fraction of HTTP connection establishments that are delayed by long DNS resolutions. Renewal policies refresh selected expired cached entries by issuing unsolicited queries. Trace-based simulations using Web proxy logs demonstrate that a significant fraction of cache misses can be eliminated with moderate overhead. Simultaneous-validation (SV) transparently uses expired records: a DNS query is issued if the respective cached entry is no longer fresh, but concurrently, the expired entry is used to connect to the Web server and fetch the requested content. The content is served only if the expired records used turn out to be in agreement with the query response.
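The renewal idea can be sketched as a small TTL cache that proactively re-resolves popular entries once they expire. The class name, the hit threshold, and the `resolve` callback are assumptions for illustration; the paper evaluates such policies on real proxy traces.

```python
# Illustrative sketch of a renewal policy for a DNS-style TTL cache:
# entries used at least `min_hits` times are refreshed when they expire,
# instead of waiting for the next client miss. All names are assumptions.
import time

class RenewalCache:
    def __init__(self, resolve, min_hits=2, now=time.monotonic):
        self.resolve = resolve      # resolve(name) -> (address, ttl)
        self.min_hits = min_hits
        self.now = now
        self.entries = {}           # name -> [address, expiry, hits]

    def lookup(self, name):
        t = self.now()
        entry = self.entries.get(name)
        if entry and entry[1] > t:
            entry[2] += 1
            return entry[0]         # fresh hit, no remote query
        address, ttl = self.resolve(name)
        self.entries[name] = [address, t + ttl, 1]
        return address

    def renew_expired(self):
        """Run periodically: refresh popular entries that have expired."""
        t = self.now()
        for name, (addr, expiry, hits) in list(self.entries.items()):
            if expiry <= t and hits >= self.min_hits:
                address, ttl = self.resolve(name)
                self.entries[name] = [address, t + ttl, hits]
```

A simultaneous-validation variant would instead return the stale address immediately while the fresh query proceeds in the background.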
 
Conference Paper
The author proposes a mechanism called the priority token bank for admission control, scheduling, and policing in integrated-services networks, where arrival processes and performance objectives can vary greatly from one packet stream to another. There are two principal components to the priority token bank: accepting or rejecting requests to admit entire packet streams, where acceptance means guaranteeing that the packet stream's performance objectives will be met, and scheduling the transmission of packets such that performance objectives are met, even under heavy loads. To the extent possible, the performance of traffic is also optimized beyond the requirements. The performance achieved with the priority token bank is compared to that of other typical algorithms. It is shown that, when operating under the constraint that the performance objectives of applications such as packet voice, HDTV (high-definition television) and bulk data transfer must be met in an ATM (asynchronous transfer mode) network, the mean delay experienced by other traffic is much better with the priority token bank. Equivalently, to achieve the same performance with other algorithms, it would be necessary to greatly reduce the network load.
 
Conference Paper
Third-generation mobile systems are mostly based on the wideband CDMA platform to support high-bit-rate packet data services. One important component for offering packet data service in CDMA is a burst admission control algorithm. In this paper, we propose and study a novel jointly adaptive burst admission algorithm, namely the jointly adaptive burst admission-spatial dimension (JABA-SD) algorithm, to effectively allocate valuable resources in wideband CDMA systems to burst requests. In the physical layer, we have a variable-rate channel-adaptive modulation and coding system which offers variable throughput depending on the instantaneous channel condition. In the MAC layer, we have an optimal multiple-burst admission algorithm. We demonstrate that synergy can be attained by interactions between the adaptive physical layer and the burst admission layer. We formulate the problem as an integer programming problem and derive an optimal scheduling policy for the jointly adaptive design. Both forward-link and reverse-link burst requests are considered, and the system is evaluated by dynamic simulations which take into account user mobility, power control and soft handoff. We found that significant performance improvement, in terms of average packet delay, data user capacity and coverage, can be achieved by our scheme compared to existing burst assignment algorithms.
 
Conference Paper
To support quality of service guarantees in a scalable manner, aggregate scheduling has attracted a lot of attention in the networking community. However, while a large number of results are available for flow-based scheduling algorithms, few such results exist for aggregate-based scheduling. We study a network implementing guaranteed rate (GR) scheduling with first-in-first-out (FIFO) aggregation and derive an upper bound on the worst-case end-to-end delay for the network. We show that, while the derived delay bound is not restricted by the utilization level of the guaranteed rate for a specific network configuration, it is so restricted for a general network configuration.
 
Conference Paper
We consider all-optical TDM/WDM broadcast-and-select networks. We assume that each network node is equipped with one fixed transmitter and one tunable receiver; tuning times are assumed to be non-negligible with respect to the slot time. We discuss efficient scheduling algorithms to assign TDM/WDM slots to multicast traffic in such networks. Given the problem complexity, heuristic algorithms based on the Tabu Search methodology are proposed, and their performance is assessed using randomly created request matrices based on two types of multicast traffic patterns: a video-conference pattern and a server-distribution pattern. The performance index considered is the frame length required to schedule a given traffic request matrix.
 
Conference Paper
In today's IP networks most of the network control and management tasks are performed at the end points. As a result, many important network functions cannot be optimized due to lack of sufficient support from the network. The growing need for quality-guaranteed services has prompted suggestions to add more computational power to the network elements. This paper studies the algorithmic power of networks whose routers are capable of performing complex tasks. It presents a new model that captures the hop-by-hop datagram forwarding mechanism deployed in today's IP networks, as well as the ability to perform complex computations in network elements as proposed in the active networks paradigm. Using our framework, we present and analyze distributed algorithms for basic problems that arise in the control and management of IP networks. These problems include: route discovery, message dissemination, topology discovery, and bottleneck detection. Our results prove that, although adding computation power to the routers increases the message delay, it shortens the completion time for many tasks. The suggested model can be used to evaluate the contribution of added features to a router, and allows the formal comparison of different proposed architectures.
 
Conference Paper
In the context of heterogeneous networks and diverse communication devices, real person-to-person communication can be achieved with a universal personal identification (UPI) that uniquely identifies an individual and is independent of the access networks and communication devices. Hierarchical mobility management techniques (MMTs) have been proposed to support UPI. Traditional replication methods for improving the performance of such MMTs are threshold-based. We present optimal replication algorithms that minimize the network messaging cost based on network structure, communication link cost, and user calling and mobility statistics. We develop our algorithms for both unicast and multicast replica update strategies. The performance of these replication algorithms is studied via large scale network simulations and compared to results obtained from the threshold-based method.
 
Conference Paper
In a traditional anti-jamming system, a transmitter who wants to send a signal to a single receiver spreads the signal power over a wide frequency spectrum with the aim of stopping a jammer from blocking the transmission. In this paper we consider the case where there are multiple receivers and the sender wants to broadcast a message to all receivers such that colluding groups of receivers cannot jam the reception of any other receiver. We propose efficient coding methods that achieve this goal and link this problem to well-known problems in combinatorics. We also link a generalisation of this problem to the key distribution pattern problem studied in combinatorial cryptography.
 
Conference Paper
Distributed multimedia applications are sensitive to the quality of service (QoS) delivered by underlying communication networks. The main question this work addresses is how to adapt multimedia applications to the QoS delivered by the network and vice versa. We introduce QoSockets, an extension to the sockets mechanism to enable QoS reservation and management. QoSockets automatically generates the instrumentation to monitor QoS. It scrutinizes interactions among applications and transport protocols and collects, in QoS management information bases (MIBs), statistics on the QoS delivered. The main advantages of QoSockets are the following: (1) support of a single API for transport-layer QoS negotiation, connection establishment, and data transmission, and of a single API for OS QoS negotiation; (2) support of a single QoS negotiation protocol; (3) generality across application QoS needs; (4) automatic management of application QoS needs. QoSockets are available for Solaris and Linux and support RSVP, ATM adaptation, ST-II, TCP/UDP, and Unix native protocols.
 
Conference Paper
We recognize two trends in router design: increasing pressure to extend the set of services provided by the router, and increasing diversity in the hardware components used to construct it. The consequence of these two trends is that it is becoming increasingly difficult to map the services onto the underlying hardware. Our response to this situation is to define a virtual router architecture, called VERA, that hides the hardware details from the forwarding functions. This paper presents the details of VERA and reports preliminary experiences implementing various aspects of the architecture.
 
Conference Paper
In energy-limited wireless sensor networks, network clustering and sensor scheduling are two efficient techniques for minimizing node energy consumption and maximizing network coverage lifetime. When integrating the two techniques, the challenges are how to select cluster heads and active nodes. In this paper, we propose a coverage-aware clustering protocol. In the proposed protocol, we define a cost metric that favors nodes that are more redundantly covered in terms of energy as better candidates for cluster heads, and we select active nodes in a way that tries to emulate the most efficient tessellation for area coverage. Our simulation results show that the network coverage lifetime can be significantly improved compared with an existing protocol.
 
Conference Paper
The X-Bone dynamically deploys and manages Internet overlays to reduce their configuration effort and increase network component sharing. The X-Bone discovers, configures, and monitors network resources to create overlays over existing IP networks. Overlays are useful for deploying overlapping virtual networks on a shared infrastructure and for simplifying topology. The X-Bone extends current overlay management by adding dynamic resource discovery, deployment, and monitoring, and it allows simultaneous participation in multiple overlays. Its two-layer IP-in-IP tunneled overlays support existing applications and unmodified routing, multicast, and DNS services in unmodified operating systems. This two-layer scheme uniquely supports recursive overlays, useful for fault tolerance and dynamic relocation. The X-Bone uses multicast to simplify resource discovery, and provides secure deployment as well as secure overlays. This paper presents the X-Bone architecture, discusses its components and features, and considers their performance impact.
 
Conference Paper
Optical burst switching (OBS) is a promising paradigm for the next-generation Internet infrastructure. In OBS, a key problem is to schedule bursts on wavelength channels with algorithms that are both fast and bandwidth-efficient, so as to reduce burst loss. To date, most scheduling algorithms avoid burst contention locally (or reactively). In this paper, we propose several novel algorithms for scheduling bursts in OBS networks with and without wavelength conversion capability. Our algorithms try to proactively avoid burst contention likely to occur at downstream nodes. The basic idea is to serialize the bursts on an outgoing link to reduce the number of bursts that may arrive at downstream nodes simultaneously (thus reducing the burst contention and burst loss probability at downstream nodes). This can be accomplished by judiciously delaying locally assembled bursts beyond a pre-determined offset time at an ingress node, using electronic memory. Compared with existing algorithms, our proposed algorithms can significantly reduce the loss rate while ensuring that the maximum delay of a burst does not exceed its prescribed limit.
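The serialization idea can be sketched on a single outgoing link: each burst's departure is pushed past the previous burst's finish time, as long as the added delay stays within a budget. The single-link model, the delay cap, and all names are assumptions for illustration, not the paper's algorithms.

```python
# Minimal sketch of burst serialization at an ingress node (illustrative
# assumptions: one outgoing link, a simple per-burst extra-delay budget).

def serialize_bursts(bursts, max_extra_delay):
    """bursts: list of (ready_time, duration); returns scheduled start times."""
    starts = []
    link_free_at = 0
    for ready, duration in sorted(bursts):
        start = max(ready, link_free_at)
        if start - ready > max_extra_delay:
            start = ready + max_extra_delay  # cap the delay; bursts may overlap again
        starts.append(start)
        link_free_at = start + duration
    return starts

# Three bursts ready almost simultaneously are spread out on the link.
print(serialize_bursts([(0.0, 2.0), (0.1, 2.0), (0.2, 2.0)], max_extra_delay=5.0))
```

Spacing departures this way means fewer bursts arrive at a downstream node in the same instant, which is the proactive contention avoidance the abstract describes.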
 
Conference Paper
The provision of advanced computational services within networks is rapidly becoming both feasible and economical. We present a general approach to the problem of configuring application sessions that require intermediate processing, by showing how the session configuration problem can be transformed into a conventional shortest-path problem for unicast sessions or a conventional Steiner tree problem for multicast sessions. We show, through a series of examples, that the method can be applied to a wide variety of different situations.
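For the unicast case, one common form of such a transformation is a layered-graph construction; the sketch below is an illustrative guess at it, not the paper's exact reduction. State (v, i) means "at router v with the first i processing steps applied": link edges keep i, and performing step i+1 at v moves to (v, i+1) at that node's processing cost, so an ordinary shortest path gives a cheapest placement of the steps.

```python
# Hedged sketch: session configuration as shortest path on a layered
# graph. Graph, costs, and names are made-up illustrative data.
import heapq

def cheapest_session(links, proc_cost, src, dst, n_steps):
    """links: {u: [(v, w), ...]}; proc_cost[(v, step)] = cost of step at v."""
    dist = {(src, 0): 0}
    heap = [(0, src, 0)]
    while heap:
        d, v, i = heapq.heappop(heap)
        if (v, i) == (dst, n_steps):
            return d
        if d > dist.get((v, i), float("inf")):
            continue
        moves = [((u, i), d + w) for u, w in links.get(v, [])]
        if i < n_steps and (v, i) in proc_cost:
            moves.append(((v, i + 1), d + proc_cost[(v, i)]))  # process here
        for state, nd in moves:
            if nd < dist.get(state, float("inf")):
                dist[state] = nd
                heapq.heappush(heap, (nd, *state))
    return None

links = {"s": [("a", 1), ("b", 5)], "a": [("t", 1)], "b": [("t", 1)]}
proc = {("a", 0): 10, ("b", 0): 1}
print(cheapest_session(links, proc, "s", "t", 1))  # 7: route via b, process there
```

The cheaper route through "a" is rejected because processing at "a" is expensive; the transformation lets plain Dijkstra make that trade-off.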
 
Conference Paper
This article introduces a simple and effective methodology to determine the level of congestion in a network with an ECN-like marking scheme. The purpose of the ECN bit is to notify TCP sources of imminent congestion so that they can react before losses occur. However, ECN is a binary indicator that does not reflect the congestion level (i.e., the percentage of queued packets) at the bottleneck, thus preventing any adapted reaction. In this study, we use a counter in place of the traditional ECN marking scheme to record the number of times a packet has crossed a congested router. Thanks to this simple counter, we conduct a statistical analysis to accurately estimate the congestion level of each router on a network path. We detail an analytical method, validated by preliminary simulations, which demonstrates the feasibility and accuracy of the proposed concept. We conclude with possible applications and directions for future work.
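The counter idea can be illustrated with a toy simulation (this is not the paper's estimator): if each router increments the counter field with probability equal to its queue occupancy, the mean counter over many packets estimates the sum of per-router congestion levels along the path.

```python
# Toy illustration of counter-based congestion marking. The per-router
# marking probabilities below are made up for the example.
import random

def send_packet(path_congestion, rng):
    counter = 0
    for level in path_congestion:        # level = fraction of queue filled
        if rng.random() < level:
            counter += 1                 # router marks by incrementing
    return counter

rng = random.Random(42)
path = [0.1, 0.8, 0.3]                   # three routers on the path
samples = [send_packet(path, rng) for _ in range(20000)]
estimate = sum(samples) / len(samples)
print(round(estimate, 2))                # close to 0.1 + 0.8 + 0.3 = 1.2
```

Recovering each router's individual level from these sums is exactly where the paper's statistical analysis comes in; the sketch only shows why the counter carries more information than a single binary mark.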
 
Conference Paper
This paper describes a framework for constructing network services for accessing media objects. The framework, called end-to-end media paths, provides a new approach for building multimedia applications from component pieces. Based on input from the user and resource requirements from the media object, the system first discovers the sequence of nodes (the end-to-end path) that both connects the source device to the sink device and possesses sufficient resources to play the object. It then configures the individual nodes along this path with the modules (path segments) that implement the service.
 
Conference Paper
This paper presents the dynamic hardware plugins (DHP) architecture for implementing multiple networking applications in hardware at programmable routers. By enabling multiple applications to be dynamically loaded into a single hardware device, the DHP architecture provides a scalable mechanism for implementing high-performance programmable routers. The DHP architecture is presented within the context of a programmable router architecture which processes flows in both software and hardware. Possible implementations are described, as well as the prototype testbed at Washington University in Saint Louis.
 
Conference Paper
Today's Internet carries an ever broadening range of application traffic with different requirements. This has stressed its original, one-class, best-effort model, and has been one of the main drivers behind the many efforts aimed at introducing QoS. Those efforts have, however, experienced only limited success because their added complexity often conflicts with the scalability requirements of the Internet. This has motivated many proposals that try to offer service differentiation while keeping complexity low. This paper shares similar goals and proposes a simple scheme, BoundedRandomDrop (BRD), that supports multiple service classes. BRD focuses on loss differentiation: although both losses and delay are important performance parameters, the steadily rising speed of Internet links is progressively limiting the impact of delay differentiation. BRD offers strong loss differentiation capabilities with minimal added cost. BRD does not require traffic profiles or admission controls. It guarantees each class losses that, when feasible, are no worse than a specified bound, and it enforces differentiation only when required to meet those bounds. In addition, BRD is implemented using a single FIFO queue and a simple random dropping mechanism. The performance of BRD is investigated for a broad range of traffic mixes and shown to consistently achieve its design goals.
 
Conference Paper
Interactive TCP applications, such as Telnet and the Web, are particularly sensitive to network congestion. Indeed, congestion-induced queuing and packet loss can be a significant cause of large delays and variability, thereby decreasing user-perceived quality. We consider addressing these effects using service differentiation, by giving priority to interactive applications' traffic in the network. We study different packet marking schemes and handling mechanisms (packet dropping and scheduling) in the network. For marking packets, two approaches are considered. First, we look into application-based marking, and show how the protection of Telnet traffic against loss can eliminate large echo delays caused by retransmit timeouts, and how, by limiting packet loss for Web page downloads, their delays can be significantly reduced, resulting in enhanced interactivity. Second, we consider differentiation based on TCP state, where we present a marking algorithm that prioritizes packets at the source, based on each connection's window size. In addition, we describe the shaping mechanisms required for conformance to agreements with the network. We show how this marking results in good response times for short transfers, which are characteristic of interactive applications, without significantly affecting longer ones.
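The TCP-state-based marking described above can be sketched as a source-side marker: packets from connections with a small congestion window are marked high priority (short interactive transfers rarely grow their window), and a token bucket shapes the high-priority share for conformance with a hypothetical network agreement. The threshold and bucket parameters are illustrative assumptions, not the paper's values.

```python
# Hedged sketch of window-based packet marking with token-bucket shaping.
# All parameter values are assumptions for illustration.

class WindowBasedMarker:
    def __init__(self, cwnd_threshold=4, bucket_size=10, refill=1.0):
        self.cwnd_threshold = cwnd_threshold
        self.bucket_size = bucket_size
        self.tokens = float(bucket_size)
        self.refill = refill              # tokens added per time unit
        self.last = 0.0

    def mark(self, cwnd_segments, now):
        # Refill the high-priority token bucket up to its capacity.
        self.tokens = min(self.bucket_size,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if cwnd_segments <= self.cwnd_threshold and self.tokens >= 1.0:
            self.tokens -= 1.0
            return "high"
        return "low"                      # bulk traffic, or over the agreed share

marker = WindowBasedMarker(cwnd_threshold=4, bucket_size=2, refill=0.0)
print([marker.mark(cwnd, now=0.0) for cwnd in (2, 3, 2, 32)])  # high, high, low, low
```

With the bucket exhausted, even small-window packets fall back to low priority, which is one plausible way to keep the marked share within an agreed rate.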
 
Conference Paper
Traffic grooming in optical WDM mesh networks is a two-layer routing problem to pack low-rate connections effectively onto high-rate lightpaths, which, in turn, are established on wavelength links. We employ the rerouting approach to improve the network throughput under the dynamic traffic model. We propose two rerouting schemes, rerouting at lightpath level (RRAL) and rerouting at connection level (RRAC). A qualitative comparison is made between RRAL and RRAC. We also propose the critical-wavelength-avoiding one-lightpath-limited (CWA-1L) and critical-lightpath-avoiding one-connection-limited (CLA-1C) rerouting heuristics, which are based on the respective rerouting schemes. Simulation results show that rerouting reduces the connection blocking probability significantly.
 
Conference Paper
Inferring PoP-level maps is gaining interest due to its importance in many areas, e.g., tracking the Internet's evolution and studying its properties. In this paper we introduce a novel structural approach to automatically generate large-scale PoP-level maps using traceroute measurements from multiple locations. PoPs are first identified based on their structure and are then assigned a location using corroborating information from several geo-location databases. Using this approach, we can evaluate the accuracy of these databases and suggest means to improve it. The PoP-PoP edges, which are extracted from the traceroutes, yield a fairly rich AS-AS connectivity map.
 
Conference Paper
This paper analyzes a communication network with heterogeneous customers. We investigate priority queueing as a way to differentiate between these users. Customers join the network as long as their utility (which is a function of the queueing delay) is larger than the price of the service. We focus on the specific situation in which two types of users play a role: one type is delay-sensitive ('voice'), whereas the other is delay-tolerant ('data'); these preferences are reflected in their utility curves. Two models are considered: in the first the network determines the priority class of the users, whereas the second model leaves this choice to the users. For both models we determine the prices that maximize the provider's profit. Importantly, these situations do not coincide. Our study uses elements from queueing theory, but also from microeconomics and game theory (e.g., the concept of a Nash equilibrium). We conclude the paper by considering a model in which throughput (rather than delay) is the main performance measure. Again the pricing strategy exploits the heterogeneity in required service and willingness-to-pay.
 
Conference Paper
The volume of multimedia data, including video, served through peer-to-peer (P2P) networks is growing rapidly. Unfortunately, high bandwidth transfer rates are rarely available to P2P clients on a consistent basis, making it difficult to use P2P networks to stream video for on-line viewing. In this paper, we develop and evaluate on-line algorithms that coordinate the pre-fetching of scalably-coded variable bitrate video. These algorithms are ideal for P2P environments in that they require no knowledge of the future variability or availability of bandwidth, yet produce a playback whose average rate and variability are comparable to the best off-line prefetching algorithms that have total future knowledge. To show this, we develop an off-line algorithm that provably optimizes quality and variability metrics. Using simulations based on actual P2P traces, we compare our on-line algorithms to the optimal off-line algorithm and find that our novel on-line algorithms exhibit near-optimal performance and significantly outperform more traditional pre-fetching methods.
 
Conference Paper
This paper presents an efficient and accurate analytical model for the radio interface of the General Packet Radio Service (GPRS) in a GSM network. The model is utilized for investigating how many packet data channels should be allocated for GPRS under a given amount of traffic in order to guarantee appropriate quality of service. The presented model constitutes a continuous-time Markov chain. The Markov model represents the sharing of radio channels by circuit-switched GSM connections and packet-switched GPRS sessions under a dynamic channel allocation scheme. In contrast to previous work, the Markov model explicitly represents the mobility of users by taking into account arrivals of new GSM and GPRS users as well as handovers from neighboring cells. To validate the simplifications necessary for making the Markov model amenable to numerical solution, we provide a comparison of the results of the Markov model with a detailed simulator at the IP level.
 
Conference Paper
Scheduling has been an interesting problem since its inception. In the context of real-time networks, a scheduling algorithm is concerned with dispatching streams of packets sharing the same bandwidth such that guaranteed performance, such as rate and delay bounds, is provided for each stream. This function has a wide range of applications in network elements such as host adaptors, routers and switches. This paper proposes and describes a new scheduling algorithm, the recursive round robin (RRR) scheduler, for scheduling fixed-size packets. It is based on the construction of a scheduling tree in which distinct cell streams are scheduled recursively. Special emphasis is placed on the design and analysis of the scheduler. A delay bound is analytically derived for the scheduler and verified using simulation. It is shown that the work-conserving version of the scheduler is fair, and fairness indexes for it are analytically derived. The simple nature of this algorithm makes it possible to implement at very high speeds, while considerably reducing the implementation cost.
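The scheduling-tree idea can be illustrated for the simplest case of equal-weight streams (an assumption for this sketch; the actual RRR handles general rates). Each node of a binary tree alternates between its two subtrees on successive visits, so with 2^d streams, slot t goes to the leaf selected by the low-order bits of t.

```python
# Illustrative sketch of recursive round robin over a binary scheduling
# tree, assuming 2**depth equal-weight streams and fixed-size slots.

def rrr_slot_owner(t, depth):
    """Walk the scheduling tree: bit i of t picks the subtree at level i."""
    leaf = 0
    for level in range(depth):
        branch = (t >> level) & 1        # alternate between the two children
        leaf |= branch << (depth - 1 - level)
    return leaf

# Four streams: each gets every 4th slot, evenly spaced (a smooth schedule).
print([rrr_slot_owner(t, depth=2) for t in range(8)])  # [0, 2, 1, 3, 0, 2, 1, 3]
```

Because each stream's slots are evenly spaced, no stream waits longer than one full round between services, which is the intuition behind the analytically derived delay bound.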
 
Conference Paper
Adding programmability to the interior of the network provides an infrastructure for distributed applications. Specifically, network management and control applications require access to and control of network device state. For example, a routing load-balancing application may require access to the routing table, and a congestion avoidance application may require interface congestion information. There are fundamental problems associated with this interaction. In this paper, we study the basic tradeoffs associated with the interaction between an active process and its environment, and we present ABLE++ as an example architecture. Most notably, we explore two design trade-offs: efficiency vs. abstraction and application flexibility vs. security. We demonstrate the advantages of the architecture by implementing a congestion avoidance algorithm.
 
Article
Adding programmability to the interior of the network provides an infrastructure for distributed applications. Specifically, network management (NM) and control applications require access to and control of network device state. For example, a routing load-balancing application may require access to the routing table, and a congestion avoidance application may require interface congestion information. There are fundamental problems associated with this interaction that are apparent in current technologies. In this paper, the basic tradeoffs associated with the interaction between an active process and its environment are studied, and ABLE++ is presented as an example architecture. Most notably, two design tradeoffs are explored: efficiency vs. abstraction and application flexibility vs. security. The advantages of the architecture are demonstrated by implementing a congestion avoidance algorithm.
 
Article
The HTTP/1.1 protocol is the result of four years of discussion and debate among a broad group of Web researchers and developers. It improves upon its phenomenally successful predecessor, HTTP/1.0, in numerous ways. We discuss the differences between HTTP/1.0 and HTTP/1.1, as well as some of the rationale behind these changes.
 
Article
This paper presents the design and implementation of the NCTUns 1.0 network simulator, which is a high-fidelity and extensible network simulator capable of simulating both wired and wireless IP networks. By using an enhanced simulation methodology, a new simulation engine architecture, and a distributed and open-system architecture, the NCTUns 1.0 network simulator is much more powerful than its predecessor, the Harvard network simulator, which was released to the public in 1999. The NCTUns 1.0 network simulator consists of many components. In this paper, we will present the design and implementation of these components and their interactions in detail.
 
Article
This paper addresses the implementation of a pan-European network to support co-operative research amongst European researchers: TEN-155. Predecessors to TEN-155 were TEN-34 and EuropaNet. TEN-155 supersedes these two networks not only in terms of capacity offered but also in terms of the Managed Bandwidth Service (MBS) offered. Besides providing a high-speed IP service, the purpose of TEN-155 is also to support research in networking by providing an international test bed for advanced networking technologies (the Quantum Test Programme) and by providing VPNs, with dedicated and guaranteed bandwidth, for specific research projects in countries connected to TEN-155. This paper illustrates the use of ATM technology to support the MBS and the Quantum Test Programme in co-existence with the standard best-effort IP service, and reports experiences gained from offering a pan-European Managed Bandwidth Service.
 
Article
Eight sites participated in the second Defense Advanced Research Projects Agency (DARPA) off-line intrusion detection evaluation in 1999. A test bed generated live background traffic similar to that on a government site containing hundreds of users on thousands of hosts. More than 200 instances of 58 attack types were launched against victim UNIX and Windows NT hosts in three weeks of training data and two weeks of test data. False-alarm rates were low (less than 10 per day). The best detection was provided by network-based systems for old probe and old denial-of-service (DoS) attacks and by host-based systems for Solaris user-to-root (U2R) attacks. The best overall performance would have been provided by a combined system that used both host- and network-based intrusion detection. Detection accuracy was poor for previously unseen, new, stealthy and Windows NT attacks. Ten of the 58 attack types were completely missed by all systems. Systems missed attacks because signatures for old attacks did not generalize to new attacks, auditing was not available on all hosts, and protocols and TCP services were not analyzed at all or to the depth required. Promising capabilities were demonstrated by host-based systems, anomaly detection systems and a system that performs forensic analysis on file system data.
 
Article
SMIL is the W3C recommendation for bringing synchronized multimedia to the Web. Version 1.0 of SMIL was accepted as a recommendation in June 1998. Work is expected to be underway soon to prepare the next version of SMIL, version 2.0. Issues that will need to be addressed in developing version 2.0 include not just adding new features but also establishing SMIL's relationship with various related existing and developing W3C efforts. In this paper we offer some suggestions for how to address these issues. Potential new constructs with additional features for SMIL 2.0 are presented. Other W3C efforts and their potential relationship with SMIL 2.0 are discussed. To provide a context for discussing these issues, this paper explores various approaches for integrating multimedia information with the World Wide Web. It focuses on modeling issues at the document level and on the consequences of the basic differences between text-oriented Web pages and networked multimedia presentations.
 
Article
Specification and description language (SDL) is the premier language for specification, design and development of real time systems, and in particular for telecommunications software. SDL-2000 became the international standard in force in November 1999, replacing the previous version. This paper gives an overview of SDL development in ITU-T up to the end of 1999. It covers a short history of SDL including details of the updates to SDL for SDL-2000. The paper fills a gap between previously published tutorials and the current SDL standard by providing notes on the updates. Plans for the further development of SDL in the 21st Century and the role of the SDL Forum Society are briefly considered.
 
Article
In November 1999, the current version of specification and description language (SDL), commonly referred to as SDL-2000, passed through ITU-T. In November 2000, the formal semantics of SDL-2000 was officially approved to become part of the SDL language definition. It covers both the static and the dynamic semantics, and is based on the formalism of abstract state machines (ASMs). To support executability, the formal semantics defines, for each SDL specification, reference ASM code, which enables an SDL-to-ASM compiler. In this paper, we briefly survey and compare existing approaches to define the semantics of SDL formally. The ITU-T approach is then outlined in more detail, addressing the following steps: (1) mapping of non-basic language constructs to the core language, (2) checking of static semantics conditions, (3) definition of the SDL abstract machine (SAM), and (4) definition of the SDL virtual machine (SVM). The paper concludes with experiences from the SDL-to-ASM compiler project. It is proposed that the SDL-2000 semantics can be adapted and extended to formally define the meaning of UML 2.0 class, composite structure, and statechart diagrams.
 
Article
Wireless Internet access based on wireless LAN or Bluetooth networks will become popular within the next few years. New services will be offered using location and personal profiles to filter Internet access. Moreover, “Push” services will allow unsolicited event-based information transfer depending on user configurations. We have introduced such services as “The Mobile Fairguide” during CeBIT 2001 in a Bluetooth network that covered a full hall of 25,000 m2 with 130 base stations. We were able to show that Bluetooth is a usable technology for such applications, especially with PDAs as terminal devices. Moreover, we tested our architecture in a live scenario with respect to scalability and mobility.
 
Article
Communications and imaging experiments conducted in the Arizona desert during July of 2002 with the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS) helped to identify a fundamental suite of scientific instruments, focused on surface composition and temperature determination, for the calibration and validation of NASA and USGS spaceborne and airborne sensors, and to integrate them with a hybrid mobile wireless and satellite network for lunar and planetary exploration and emergency response. The 2002 experiment focused on the exchange of remotely sensed and ground-truth geographic information between analysts and field scientists. That experiment revealed several modifications that would enhance the performance and effectiveness of geographic information networks (GIN) for lunar and planetary exploration and emergency response. Phase 2 experiments conducted during June 2003 at the USGS Earth Resources and Observation Systems (EROS) Data Center's geologic imaging test site near Dinosaur National Monument in the NE Utah desert incorporated several of the lessons learned from the 2002 experiment and successfully added five major new components: (1) near-real-time hyperspectral and multispectral satellite image acquisition, (2) remotely controlled and coordinated mobile real-time ground sensor measurements during the imaging satellite overpass, (3) long-delay-optimized Transmission Control Protocol/Internet Protocol (TCP/IP) protocols to improve network performance over geosynchronous communications satellite circuits, (4) distributed, multinode parallel computing on NASA's Internet Power GRID (IPG), and (5) near-real-time validation of satellite imagery as part of a successful test of the NASA–USGS National Emergency Mapping Information System.
 
Article
The growth of packet-based voice services is leading to integration of voice and other services over packet switched data networks. This paper explores a possible path that the telephone service industry may follow as this integration is accelerated by technological advances that improve the capabilities of packet-based services while reducing their costs.
 
Article
Wireless sensor networks often consist of a large number of low-cost sensor nodes that have strictly limited sensing, computation, and communication capabilities. Because sensor nodes are resource-restricted, it is important to minimize the amount of data transmission so that the average sensor lifetime and the overall bandwidth utilization are improved. Data aggregation is the process of summarizing and combining sensor data in order to reduce the amount of data transmission in the network. As wireless sensor networks are usually deployed in remote and hostile environments to transmit sensitive information, sensor nodes are prone to node compromise attacks, and security issues such as data confidentiality and integrity are extremely important. Hence, wireless sensor network protocols, e.g., data aggregation protocols, must be designed with security in mind. This paper investigates the relationship between security and the data aggregation process in wireless sensor networks. A taxonomy of secure data aggregation protocols is given by surveying the current state-of-the-art work in this area. In addition, based on the existing research, open research areas and future research directions in secure data aggregation are identified.
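To make the transmission savings concrete, the following sketch shows plain in-network aggregation over a routing tree: each node combines its own reading with its children's partial aggregates (here, a sum) and forwards a single value upstream, so transmissions shrink from one per node to one per link. The tree layout and function names are illustrative only; securing this step against a compromised aggregator is precisely what the surveyed protocols address and is not shown here.

```python
def aggregate(tree, readings, node):
    """Return the SUM aggregate rooted at `node`.

    `tree` maps a node id to the list of its children; `readings`
    maps a node id to its local sensor reading. Each recursive call
    stands in for one child-to-parent transmission of a single
    aggregated value (instead of forwarding every raw reading).
    """
    return readings[node] + sum(
        aggregate(tree, readings, child) for child in tree.get(node, [])
    )


# Hypothetical 4-node network: sink <- {a, b}, a <- {c}.
tree = {"sink": ["a", "b"], "a": ["c"]}
readings = {"sink": 0, "a": 1, "b": 2, "c": 3}
total = aggregate(tree, readings, "sink")  # 0 + (1 + 3) + 2 = 6
```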
 
Article
Many analytical and simulation-based studies of TCP performance in wireless environments assume an error-free and congestion-free reverse channel that has the same capacity as the forward channel. Such an assumption does not hold in many real-world scenarios, particularly in hybrid networks consisting of various wireless LAN (WLAN) and cellular technologies. In this paper, we first study, through extensive simulations, the performance characteristics of four representative TCP schemes, namely TCP New Reno, SACK, Veno, and Westwood, under the network conditions of asymmetric end-to-end link capacities, correlated wireless errors, and link congestion in both forward and reverse directions. We then propose a new TCP scheme, called TCP New Jersey, which is capable of distinguishing wireless packet losses from congestion packet losses and reacting accordingly. TCP New Jersey consists of two key components: the timestamp-based available bandwidth estimation (TABE) algorithm and the congestion warning (CW) router configuration. TABE is a TCP-sender-side algorithm that continuously estimates the bandwidth available to the connection and guides the sender to adjust its transmission rate when the network becomes congested. TABE is immune to ACK drops as well as ACK compression. CW is a configuration of network routers such that routers alert end stations by marking all packets when there is a sign of incipient congestion. The marking of packets by the CW-configured routers helps the sender of the TCP connection to effectively differentiate packet losses caused by network congestion from those caused by wireless link errors. Our simulation results show that TCP New Jersey is able to accurately estimate the available bandwidth of the bottleneck link of an end-to-end path, and that the TABE estimator is immune to link asymmetry, bi-directional congestion, and the relative position of the bottleneck link in the multi-hop end-to-end path. The proactive congestion avoidance mechanism in our scheme minimizes network congestion, reduces network volatility, and stabilizes queue lengths while achieving higher throughput than the other TCP schemes.
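The sender-side estimation idea behind TABE can be illustrated with a minimal sketch: the sender samples bandwidth as the bytes acknowledged over the inter-ACK interval (taken from packet timestamps rather than ACK arrival times, which is what makes such estimators robust to ACK-path effects) and smooths the samples with an exponentially weighted moving average. All class, method, and parameter names here are illustrative assumptions, not the paper's actual algorithm or constants.

```python
class TimestampBandwidthEstimator:
    """Sketch of a timestamp-based available-bandwidth estimator:
    each ACK yields a sample = acked_bytes / inter-timestamp gap,
    blended into a running EWMA estimate (bytes per second)."""

    def __init__(self, alpha=0.9):
        self.alpha = alpha        # EWMA smoothing factor (illustrative)
        self.estimate = 0.0       # smoothed bandwidth, bytes/second
        self.last_ack_time = None

    def on_ack(self, acked_bytes, ack_timestamp):
        """Update and return the estimate from one timestamped ACK."""
        if self.last_ack_time is not None:
            interval = ack_timestamp - self.last_ack_time
            if interval > 0:
                sample = acked_bytes / interval
                self.estimate = (self.alpha * self.estimate
                                 + (1 - self.alpha) * sample)
        self.last_ack_time = ack_timestamp
        return self.estimate


# ACKs for 1500-byte segments arriving every 10 ms imply ~150 kB/s;
# the EWMA converges toward that value over successive ACKs.
est = TimestampBandwidthEstimator(alpha=0.5)
for i in range(10):
    bw = est.on_ack(1500, i * 0.01)
```

In the actual scheme, such an estimate would guide the sender's rate adjustment when CW-marked packets signal incipient congestion.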
 
Article
We describe Bro, a stand-alone system for detecting network intruders in real-time by passively monitoring a network link over which the intruder's traffic transits. We give an overview of the system's design, which emphasizes high-speed (FDDI-rate) monitoring, real-time notification, clear separation between mechanism and policy, and extensibility. To achieve these ends, Bro is divided into an "event engine" that reduces a kernel-filtered network traffic stream into a series of higher-level events, and a "policy script interpreter" that interprets event handlers written in a specialized language used to express a site's security policy. Event handlers can update state information, synthesize new events, record information to disk, and generate real-time notifications via syslog. We also discuss a number of attacks that attempt to subvert passive monitoring systems and defenses against these, and give particulars of how Bro analyzes the six applications integrated into it so far: Finger, FTP, Portmapper, Ident, Telnet and Rlogin. The system is publicly available in source code form.
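The mechanism/policy split described above can be sketched in a few lines: an engine turns low-level input into named events, and registered handlers (standing in for policy scripts) react, update state, synthesize higher-level events, and emit notifications. This is a loose Python analogy only; Bro's real policy layer is its own domain-specific language, and the event and handler names below are invented for illustration.

```python
class EventEngine:
    """Toy event engine: maps event names to handler lists and lets
    handlers synthesize further events or record notifications
    (standing in for syslog output)."""

    def __init__(self):
        self.handlers = {}       # event name -> list of callables
        self.notifications = []  # stand-in for syslog messages

    def on(self, event, handler):
        """Register a policy-layer handler for a named event."""
        self.handlers.setdefault(event, []).append(handler)

    def dispatch(self, event, **data):
        """Deliver an event to every registered handler."""
        for handler in self.handlers.get(event, []):
            handler(self, **data)

    def notify(self, message):
        self.notifications.append(message)


# Hypothetical policy: escalate FTP logins by a sensitive username,
# synthesizing a higher-level event from a lower-level one.
def ftp_login(engine, user, host):
    if user == "root":
        engine.dispatch("sensitive_login", user=user, host=host)

def sensitive_login(engine, user, host):
    engine.notify(f"sensitive login: {user}@{host}")
```

Wiring the handlers to an engine and dispatching a low-level `ftp_login` event then produces the higher-level notification, mirroring how site policy layers on top of the fixed event mechanism.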
 
Article
The recently adopted H.264 standard achieves efficient video encoding and bandwidth savings. Thus, designing communication protocols and QoS control mechanisms for H.264 video distribution over wireless IP networks is a topic of intense research interest. Delivering video streams to terminals via a wireless last hop is indeed a challenging task due to the varying nature of the wireless link. While a common approach suggests exploiting the variations of the wireless channel, an alternative is to exploit characteristics of the video stream to improve the transmission. In this paper, we combine both approaches through an efficient wireless loss characterization and a low-delay unequal interleaved FEC protection. Besides deriving new QoS metrics for FEC block allocation, the wireless loss characterization is also used to adjust the interleaving level depending on the loss correlation exhibited by the wireless channel. This novel unequal interleaved FEC (UI-FEC) protocol allows graceful video quality degradation over error-prone wireless links while minimizing the overall bandwidth consumption and the end-to-end latency.
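The interleaving step that UI-FEC adapts can be sketched as a classic block interleaver: packets are written row-wise into `depth` rows (one per FEC block) and transmitted column-wise, so a burst of consecutive losses on the channel is spread across `depth` different FEC blocks instead of wiping out one block. The helper below is an illustrative sketch of that generic technique, not the paper's protocol; in the paper, the depth would be tuned to the measured loss correlation.

```python
import math

def interleave(packets, depth):
    """Block-interleave `packets`: write row-wise into `depth` rows,
    read out column-wise. With depth=3, packets [0..5] are sent as
    [0, 2, 4, 1, 3, 5], so losing two consecutive transmissions
    damages two different rows (FEC blocks), not one."""
    cols = math.ceil(len(packets) / depth)
    rows = [packets[r * cols:(r + 1) * cols] for r in range(depth)]
    out = []
    for c in range(cols):
        for row in rows:
            if c < len(row):   # last row may be short
                out.append(row[c])
    return out
```

A larger `depth` tolerates longer loss bursts but delays the receiver longer before a row can be decoded, which is the delay/robustness trade-off a low-delay scheme must balance.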
 
Article
Monitoring network performance and status is a fundamental task for network operators as it directly impacts the quality of the offered services and hence user satisfaction. For this purpose a consolidated approach, which is largely adopted by network operators, is based on the so-called KPIs (key performance indicators). In this paper, we propose and discuss a set of KPIs to monitor network performance of the new HSDPA enhanced UMTS infrastructure. KPI statistics are collected and analysed from the novel HSDPA network of H3G, one of the major Italian mobile network operators.
 
Article
The AXD 301 is a new, multiservice, carrier-class ATM switch from Ericsson that can be used in several positions in a network. In its initial release, the AXD 301 is scalable from 10 Gbit/s – in one subrack – up to 160 Gbit/s. The AXD 301 supports every service category defined for ATM, as well as integrated support for IP and voice. An advanced buffering mechanism allows services to be mixed without compromising quality. Designed for non-stop operation, the AXD 301 incorporates duplicate hardware and software modularity, which enables individual modules to be upgraded without disturbing traffic. The switching system, which supports both ATM Forum and ITU-T signaling, is easily managed using an embedded Web-based management system.
 
Article
The effective support of teleteaching services requires the development of multimedia collaboration systems that are capable of providing real-time and high quality audio-visual communication among distributed instructors and students. In the absence of such specialised systems, technologies tailored to other services are being considered for teleteaching services as well. Such a technology is the H.323 audio-visual communication technology developed to support video communication over IP. Although teleteaching and videoconferencing have similar QoS requirements, teleteaching functional requirements are a superset of those of videoconferencing. In this paper, the suitability of H.323 technology and currently available products to support teleteaching services is investigated, based on experience gained during a related deployment at the University of Athens.
 
Article
The Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T) has developed a series of recommendations, together comprising the H.323 system, that provides for multimedia communications in packet-based (inter)networks. This series of recommendations describes the types and functions of H.323 terminals and other H.323 devices as well as their interactions. The H.323 series of recommendations covers audio, video and data streams, but an H.323 system minimally requires only an audio stream to be supported. Motivated by its straightforward interoperability with the ISDN and PSTN networks and with a variety of other protocols, H.323 has been accepted as the standard for IP telephony: it is developed by the ITU-T, broadly backed by the industry, and adopted by both the Voice over IP (VoIP) Forum and the European Telecommunication Standards Institute (ETSI). This paper presents an overview of the H.323 system architecture with all its functional components and protocols and points out all the related specifications.
 
Top-cited authors
E. Cayirci
  • University of Stavanger (UiS)
Shantidev Mohanty
Won-Yeol Lee
  • KT (Korea Telecom)
Dipak Ghosal
  • University of California, Davis
Andrei Z. Broder
  • Google Inc.