Article

Router-Based Algorithms for Improving Internet Quality of Service

... Building upon [8,2] and [18], we propose a new flow-aware active queue management packet dropping scheme (MarkMax). The main idea behind MarkMax is to identify which connection should reduce its sending rate, instead of which packets should be dropped. ...
... When AQM was first introduced in the 1990s it was infeasible to classify incoming packets in real time on high-speed links, but with technological advances this is now possible. Furthermore, to reduce the number of flows that need to be tracked, it is possible to concentrate on the larger flows, using the heavy-hitter counters of [18] to identify them. Then, following [6], we suggest treating short flows with priority and marking the large flows that have the largest backlog at moments of congestion. ...
Conference Paper
Full-text available
We introduce MarkMax, a new flow-aware active queue management algorithm for additive-increase multiplicative-decrease protocols (like TCP). MarkMax sends a congestion signal to a selected connection whenever the total backlog reaches a given threshold. The selection mechanism is based on the state of large flows. Using a fluid model we derive some bounds that can be used to analyze the behavior of MarkMax, and we compute the per-flow backlog. We conclude the paper with NS-2 simulation results comparing MarkMax with drop tail and showing how MarkMax improves both fairness and link utilization when connections have significantly different round trip times.
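A minimal sketch of the selection rule as the abstract describes it: when the total backlog crosses a threshold, the flow with the largest per-flow backlog receives the congestion signal. The interface and the choice of "largest backlog" as the sole criterion are illustrative assumptions; the published algorithm's cut-off rule differs in details.

# MarkMax-style AQM sketch (hypothetical interface): signal the flow
# with the largest backlog when total backlog exceeds a threshold.
from collections import defaultdict

class MarkMaxQueue:
    def __init__(self, threshold_bytes):
        self.threshold = threshold_bytes
        self.backlog = defaultdict(int)   # per-flow queued bytes
        self.total = 0

    def enqueue(self, flow_id, pkt_bytes):
        self.backlog[flow_id] += pkt_bytes
        self.total += pkt_bytes
        if self.total > self.threshold:
            # select the connection that should reduce its rate
            victim = max(self.backlog, key=self.backlog.get)
            return victim                 # signal (e.g. ECN-mark) this flow
        return None

    def dequeue(self, flow_id, pkt_bytes):
        self.backlog[flow_id] -= pkt_bytes
        self.total -= pkt_bytes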
... Specifically, by exploiting an analogy between packets in a communication network and items to be recycled, it may be possible to reuse some pricing strategies developed in this context to ensure fair pricing. We anticipate that ideas originally developed in the context of router design [10] will be of great utility in exploring this direction. ...
Chapter
Full-text available
In this paper, we describe a distributed ledger recycling system to encourage responsible disposal of paper cups. A complete working prototype is described. Real measurements are presented to illustrate the potential suitability of the IOTA-based distributed ledger for this application.
... While [48,163,164,189] assume that future charging facilities will be capable of regulating EV charge rates continuously, few works consider the more realistic situation of EV chargers that support only on-off charging functionality. Some recently proposed results on this problem can be found in [23,167,187,192]. In particular, the authors of [192] formulated the problem using Markov chain theory and proposed a distributed charging algorithm to maximise the utilisation of the available power via on-off charging control. ...
Thesis
Full-text available
In this thesis, we mainly discuss four topics on Electric Vehicles (EVs) in the context of smart grid and smart transportation systems. Topic 1 focuses on investigating the impacts of different EV charging strategies on the grid. Topic 2 is concerned with the applications of EVs with Vehicle-to-Grid (V2G) capabilities. Topic 3 discusses an optimal distributed energy management strategy for power generation in a microgrid scenario. Topic 4 focuses on a new design of the Speed Advisory System (SAS) for optimizing both conventional and electric vehicle networks.
Article
We present a new approach to regulate traffic-related pollution in urban environments by utilizing hybrid vehicles. To do this, we orchestrate the way that each vehicle in a large fleet combines its two engines based on simple communication signals from a central infrastructure. Our approach can be viewed both as a control algorithm and as an optimization algorithm. The primary goal is to regulate emissions, and we discuss a number of control strategies to achieve this goal. Second, we want to allocate the available pollution budget in a fair way among the participating vehicles; again, we explore several different notions of fairness that can be achieved. The efficacy of our approach is exemplified both by the construction of a proof-of-concept vehicle and by extensive simulations, and is verified by mathematical analysis.
Article
Full-text available
A detailed understanding of the many facets of the Internet's topological structure is critical for evaluating the performance of networking protocols, for assessing the effectiveness of proposed techniques to protect the network from nefarious intrusions and attacks, and for developing improved designs for resource provisioning. In this context, available bandwidth estimation is a vital component of admission control for quality-of-service (QoS) on the Internet. In the coming years, optical networks are set to dominate the access network space. Ethernet passive optical networks, which extend Ethernet to subscriber locations, seem bound for success in the optical access network. In this survey, I first give an introduction to the structure of Ethernet passive optical networks. Then, starting from the usual division of bandwidth allocation methods into static and dynamic, I build a framework that classifies them into three categories: fixed, router-based, and window-based. Finally, I survey these three groups of bandwidth allocation methods, focusing on the problems and the best solutions that have been proposed to date.
Article
Full-text available
Hash tables are fundamental components of several network processing algorithms and applications, including route lookup, packet classification, per-flow state management and network monitoring. These applications, which typically occur in the data-path of high-speed routers, must process and forward packets with little or no buffer, making it important to maintain wire-speed throughput. A poorly designed hash table can critically affect the worst-case throughput of an application, since the number of memory accesses required for each lookup can vary. Hence, high throughput applications require hash tables with more predictable worst-case lookup performance. While published papers often assume that hash table lookups take constant time, there is significant variation in the number of items that must be accessed in a typical hash table search, leading to search times that vary by a factor of four or more. We present a novel hash table data structure and lookup algorithm which improves the performance over a naive hash table by reducing the number of memory accesses needed for the most time-consuming lookups. This allows designers to achieve higher lookup performance for a given memory bandwidth, without requiring large amounts of buffering in front of the lookup engine. Our algorithm extends the multiple-hashing Bloom Filter data structure to support exact matches and exploits recent advances in embedded memory technology. Through a combination of analysis and simulations we show that our algorithm is significantly faster than a naive hash table using the same amount of memory; hence it can support better throughput for router applications that use hash tables.
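The following sketch illustrates the general idea of a counter-guided multiple-hash table: small on-chip counters record how many items hash to each candidate bucket, so a lookup can probe the least-loaded candidate first. It is a simplified illustration of the principle, not the paper's exact data structure; the constants and helper names are assumptions.

# Multiple-hashing table guided by per-bucket counters (sketch).
# An item is stored in its least-loaded candidate bucket; lookups
# consult the (small, on-chip) counters to order their probes.
import hashlib

K, M = 3, 1024                      # hash functions, buckets (illustrative)

def hashes(key):
    h = hashlib.sha256(key.encode()).digest()
    return [int.from_bytes(h[4*i:4*i+4], "big") % M for i in range(K)]

counters = [0] * M                  # on-chip summary
buckets = [[] for _ in range(M)]    # off-chip storage

def insert(key, value):
    cand = hashes(key)
    b = min(cand, key=lambda i: counters[i])   # least-loaded bucket
    buckets[b].append((key, value))
    for i in set(cand):
        counters[i] += 1            # counters track all candidates

def lookup(key):
    # probe candidates from least to most loaded; typically one access
    for i in sorted(set(hashes(key)), key=lambda i: counters[i]):
        for k, v in buckets[i]:
            if k == key:
                return v
    return None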
Article
Full-text available
We give a survey of some results within the convergence theory for iterated random functions, with an emphasis on the question of uniqueness of invariant probability measures for place-dependent random iterations with finitely many maps. Some problems for future research are pointed out.
Article
Full-text available
In order to stem the increasing packet loss rates caused by an exponential increase in network traffic, this paper presents Stochastic Fair Blue (SFB), a queue management algorithm which can identify and rate-limit nonresponsive flows using a very small amount of state information.
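A compact sketch of the SFB idea as usually described: each flow hashes to one bin per level, each bin keeps a Blue-style marking probability, and a flow whose minimum bin probability saturates at 1 is treated as nonresponsive. All parameters below are illustrative assumptions, not values from the paper.

# Stochastic Fair Blue sketch: L x N bins of marking probabilities;
# a flow is marked with the MINIMUM probability over its bins, and
# flows whose minimum reaches 1 are rate-limited.
import random

L, N = 2, 8            # levels and bins per level (illustrative)
DELTA = 0.02           # Blue-style probability step
bins = [[{"p": 0.0, "qlen": 0} for _ in range(N)] for _ in range(L)]

def flow_bins(flow_id):
    return [bins[l][hash((l, flow_id)) % N] for l in range(L)]

def on_enqueue(flow_id, bin_capacity=32):
    fb = flow_bins(flow_id)
    p_min = min(b["p"] for b in fb)
    if p_min >= 1.0:
        return "rate-limit"            # likely nonresponsive flow
    if random.random() < p_min:
        return "mark"                  # ECN-mark / drop with prob p_min
    for b in fb:
        b["qlen"] += 1
        if b["qlen"] > bin_capacity:   # congestion: raise marking prob
            b["p"] = min(1.0, b["p"] + DELTA)
    return "accept"

def on_dequeue(flow_id):
    for b in flow_bins(flow_id):
        b["qlen"] -= 1
        if b["qlen"] == 0:             # idle bin: lower marking prob
            b["p"] = max(0.0, b["p"] - DELTA)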
Article
Full-text available
Despite its well-known advantages, per-flow fair queueing has not been deployed in the Internet mainly because of the common belief that such scheduling is not scalable. The objective of the present paper is to demonstrate using trace simulations and analytical evaluations that this belief is misguided. We show that although the number of flows in progress increases with link speed, the number that needs scheduling at any moment is largely independent of this rate. The number of such active flows is a random process typically measured in hundreds even though there may be tens of thousands of flows in progress. The simulations are performed using traces from commercial and research networks with quite different traffic characteristics. Analysis is based on models for balanced fair statistical bandwidth sharing and applies properties of queue busy periods to explain the observed behaviour.
Conference Paper
Full-text available
We study the optimal choice of the buffer size in Internet routers. The objective is to determine the minimum value of the buffer size required in order to fully utilize the link capacity. There are some empirical rules for the choice of the buffer size. The best-known rule of thumb states that the buffer length should be set to the bandwidth-delay product of the network, i.e., the product of the average round trip time in the network and the capacity of the bottleneck link. Several recent works have suggested that, as a consequence of traffic aggregation, the buffer size should be set to smaller values. In this paper we propose an analytical framework for the optimal choice of the router buffer size. We formulate this problem as a multi-criteria optimization problem, in which the Lagrange function corresponds to a linear combination of the average sending rate and average delay in the queue. The solution to this optimization problem provides further evidence that the buffer size should indeed be reduced in the presence of traffic aggregation. Furthermore, our result states that the minimum required buffer is smaller than what previous studies suggested. Our analytical results are confirmed by simulations performed with the NS simulator.
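In symbols, the trade-off described here can be written as a weighted objective over the buffer size B (the weight gamma below is my notation for the linear combination, not the paper's):

\max_{B \ge 0} \; J(B) \;=\; \bar{x}(B) \;-\; \gamma \, \bar{d}(B),

where \bar{x}(B) is the average sending rate, \bar{d}(B) is the average queueing delay, and \gamma \ge 0 encodes the relative importance of delay versus throughput.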
Conference Paper
Full-text available
In traffic monitoring, accounting, and network anomaly detection, it is often important to be able to detect high-volume traffic clusters in near real-time. Such heavy-hitter traffic clusters are often hierarchical (i.e., they may occur at different aggregation levels like ranges of IP addresses) and possibly multidimensional (i.e., they may involve the combination of different IP header fields like IP addresses, port numbers, and protocol). Without prior knowledge about the precise structures of such traffic clusters, a naive approach would require the monitoring system to examine all possible combinations of aggregates in order to detect the heavy hitters, which can be prohibitive in terms of computation resources. In this paper, we focus on online identification of 1-dimensional and 2-dimensional hierarchical heavy hitters (HHHs), arguably the two most important scenarios in traffic analysis. We show that the problem of HHH detection can be transformed to one of dynamic packet classification by taking a top-down approach and adaptively creating new rules to match HHHs. We then adapt several existing static packet classification algorithms to support dynamic packet classification. The resulting HHH detection algorithms have much lower worst-case update costs than existing algorithms and can provide tunable deterministic accuracy guarantees. As an application of these algorithms, we also propose robust techniques to detect changes among heavy-hitter traffic clusters. Our techniques can accommodate variability due to sampling that is increasingly used in network measurement. Evaluation based on real Internet traces collected at a Tier-1 ISP suggests that these techniques are remarkably accurate and efficient.
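For concreteness, here is a small exact (non-streaming) illustration of the 1-dimensional HHH definition on IPv4 prefixes: a prefix is a hierarchical heavy hitter if its traffic, after discounting the traffic already attributed to descendant HHHs, still exceeds the threshold. The paper's contribution is doing this adaptively online; this sketch only fixes the definition, and the restriction to /32, /24, /16 and /8 levels is my simplification.

# Exact 1-d hierarchical heavy hitters over a few IPv4 prefix levels.
from collections import defaultdict

LEVELS = [32, 24, 16, 8]             # finest to coarsest prefix length

def prefix(ip, bits):
    return (ip >> (32 - bits), bits)

def hhh(packets, threshold):
    counts = defaultdict(int)
    for ip, nbytes in packets:       # count every ancestor prefix
        for bits in LEVELS:
            counts[prefix(ip, bits)] += nbytes
    result = []
    attributed = defaultdict(int)    # traffic of descendant HHHs
    for bits in LEVELS:              # finest level first
        for (p, b), c in counts.items():
            if b != bits:
                continue
            residual = c - attributed[(p, b)]
            contrib = attributed[(p, b)]
            if residual > threshold:
                result.append((p, b, residual))
                contrib = c          # whole subtree now accounted for
            if bits != LEVELS[-1]:   # pass attribution up to the parent
                pb = LEVELS[LEVELS.index(bits) + 1]
                attributed[(p >> (bits - pb), pb)] += contrib
    return result

# e.g. many small sources inside one /24 jointly exceeding the
# threshold show up as a /24 HHH even if no single /32 does.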
Conference Paper
Full-text available
In this paper, we present a model for the TCP/IP flow control mechanism. The rate at which data is transmitted increases linearly in time until a packet loss is detected. At that point, the transmission rate is divided by a constant factor. Losses are generated by some exogenous random process which is only assumed to be stationary. This allows us to account for any correlation and any distribution of inter-loss times. We obtain an explicit expression for the throughput of a TCP connection and bounds on the throughput when there is a limit on the congestion window size. In addition, we study the effect of the TimeOut mechanism on the throughput. A set of experiments is conducted over the real Internet and a comparison is provided with other models that make simple assumptions on the inter-loss time process.
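The dynamics described above are easy to reproduce numerically: the rate grows linearly between losses and is divided by a constant at each loss. This sketch uses i.i.d. exponential inter-loss times, which is just one admissible stationary loss process (the paper's analysis allows general stationary processes), and estimates throughput empirically.

# Simulate the AIMD rate process: linear increase at slope a between
# losses, division by factor b at each loss epoch.
import random

def aimd_throughput(a=1.0, b=2.0, loss_rate=0.1, n_losses=100000):
    x, area, t_total = 0.0, 0.0, 0.0
    for _ in range(n_losses):
        tau = random.expovariate(loss_rate)      # time to next loss
        area += x * tau + 0.5 * a * tau * tau    # integral of the rate
        x = (x + a * tau) / b                    # multiplicative decrease
        t_total += tau
    return area / t_total                        # long-run throughput

print(aimd_throughput())   # compare against the model's closed form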
Article
The additive-increase multiplicative-decrease (AIMD) schemes designed to control congestion in communication networks are investigated from a probabilistic point of view. Functional limit theorems for a general class of Markov processes that describe these algorithms are obtained. The asymptotic behaviour of the corresponding invariant measures is described in terms of the limiting Markov processes. For some special important cases, including TCP congestion avoidance, an important autoregressive property is proved. As a consequence, the explicit expression of the related invariant probabilities is derived. The transient behaviour of these algorithms is also analysed.
Article
This article reviews the current transmission control protocol (TCP) congestion control protocols and overviews recent advances that have brought analytical tools to this problem. We describe an optimization-based framework that provides an interpretation of various flow control mechanisms, in particular, the utility being optimized by the protocol's equilibrium structure. We also look at the dynamics of TCP and employ linear models to exhibit stability limitations in the predominant TCP versions, despite certain built-in compensations for delay. Finally, we present a new protocol that overcomes these limitations and provides stability in a way that is scalable to arbitrary networks, link capacities, and delays.
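The equilibrium framework alluded to here is network utility maximization; in common notation (my transcription of the standard formulation, not a formula reproduced from this article):

\max_{x \ge 0} \; \sum_{s} U_s(x_s) \quad \text{subject to} \quad \sum_{s :\, \ell \in r(s)} x_s \le c_\ell \ \ \text{for every link } \ell,

where x_s is the sending rate of source s, r(s) is its route, U_s is a concave utility function, and c_\ell is the capacity of link \ell. Each congestion control protocol corresponds, at equilibrium, to a particular choice of U_s.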
Article
There are many mathematics textbooks on real analysis, but they focus on topics not readily helpful for studying economic theory, or they are inaccessible to most graduate students of economics. Real Analysis with Economic Applications aims to fill this gap by providing an ideal textbook and reference on real analysis tailored specifically to the concerns of such students. The emphasis throughout is on topics directly relevant to economic theory. In addition to addressing the usual topics of real analysis, this book discusses the elements of order theory, convex analysis, optimization, correspondences, linear and nonlinear functional analysis, fixed-point theory, dynamic programming, and calculus of variations. Efe Ok complements the mathematical development with applications that provide concise introductions to various topics from economic theory, including individual decision theory and games, welfare economics, information theory, general equilibrium and finance, and intertemporal economics. Moreover, apart from direct applications to economic theory, his book includes numerous fixed point theorems and applications to functional equations and optimization theory. The book is rigorous, but accessible to those who are relatively new to the ways of real analysis. The formal exposition is accompanied by discussions that describe the basic ideas in relatively heuristic terms, and by more than 1,000 exercises of varying difficulty. This book will be an indispensable resource in courses on mathematics for economists and as a reference for graduate students working on economic theory.
Conference Paper
Accurate network traffic measurement is required for accounting, bandwidth provisioning, and detecting DoS attacks. However, keeping a counter to measure the traffic sent by each of a million concurrent flows is too expensive (using SRAM) or slow (using DRAM). The current state-of-the-art (e.g., Cisco NetFlow) methods which count periodically sampled packets are slow, inaccurate, and memory-intensive. Our paper introduces a paradigm shift by concentrating on the problem of measuring only "heavy" flows, i.e., flows whose traffic is above some threshold such as 1% of the link. After showing that a number of simple solutions based on cached counters and classical sampling do not work, we describe two novel and scalable schemes for this purpose which take a constant number of memory references per packet and use a small amount of memory. Further, unlike NetFlow estimates, we have provable bounds on the accuracy of measured rates and the probability of false negatives. We also propose a new form of accounting called threshold accounting, in which only flows above threshold are charged by usage while the rest are charged a fixed fee. Threshold accounting generalizes the familiar notions of usage-based and duration-based pricing.
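One of the two schemes this line of work proposes is commonly known as sample-and-hold: sample bytes with a small probability, and once a flow has a table entry, count all of its subsequent traffic exactly. The sketch below simplifies a detail (it credits the whole triggering packet rather than only the bytes from the sample onward).

# Sample-and-hold sketch: heavy flows (above roughly 1/p bytes) enter
# the table with high probability and are then counted exactly.
import random

def sample_and_hold(packets, p=1e-4):
    table = {}
    for flow_id, nbytes in packets:
        if flow_id in table:
            table[flow_id] += nbytes          # exact from now on
        elif random.random() < 1 - (1 - p) ** nbytes:
            table[flow_id] = nbytes           # flow caught by sampling
    return table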
Article
We present a simplified model of a network of TCP-like sources that compete for a shared bandwidth. We show that: (i) networks of communicating devices operating AIMD congestion control algorithms may be modelled as a positive linear system; (ii) such networks possess a unique stationary point; and (iii) this stationary point is globally exponentially stable. Using these results we establish conditions for the fair co-existence of traffic in networks employing heterogeneous AIMD algorithms. A new protocol for operation over high-speed links is proposed and its dynamic properties discussed as a positive linear system.
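The positive linear system referred to here can be written, at the k-th congestion event, as w(k+1) = A w(k) with a nonnegative matrix A built from the AIMD increase parameters alpha_i and decrease factors beta_i. The following is my reconstruction of the standard synchronized-network form from this literature, not code from the paper:

# Synchronized AIMD network as a positive linear system:
#   A = diag(beta) + (1/sum(alpha)) * outer(alpha, 1 - beta).
# A is column-stochastic and positive, so the iteration converges to
# the unique stationary direction, in which flow i's share is
# proportional to alpha_i / (1 - beta_i).
import numpy as np

alpha = np.array([1.0, 1.0, 2.0])   # additive-increase parameters
beta = np.array([0.5, 0.8, 0.5])    # multiplicative-decrease factors

A = np.diag(beta) + np.outer(alpha, 1 - beta) / alpha.sum()

w = np.array([10.0, 1.0, 5.0])      # window sizes at a congestion event
for _ in range(200):
    w = A @ w

share = alpha / (1 - beta)
print(w / w.sum(), share / share.sum())   # both give the same split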
Article
We derive decentralized and scalable stability conditions for a fluid approximation of a class of Internet-like communications networks operating a modified form of TCP-like congestion control. The network consists of an arbitrary interconnection of sources and links with heterogeneous propagation delays. The model here allows for arbitrary concave utility functions and the presence of dynamics at both the sources and the links.
Article
Internet router buffers are used to accommodate packets that arrive in bursts and to maintain high utilization of the egress link. Such buffers can lead to large queueing delays. Recently, several papers have suggested that it may, under general circumstances, be possible to achieve high utilization with small network buffers. In this paper we review these recommendations. A number of issues are reported that question the utility of these recommendations.
Article
This report concentrates on specific requirements and goals of the research networks supported by ANSNET, but applies to any TCP-dominated high speed WAN, and in particular those striving to support high speed end-to-end flows. Measurements have been made under conditions intended to better understand performance barriers imposed by network equipment queueing capacities and queue drop strategies. The IBM RS/6000 based routers currently supporting ANSNET performed very well in these tests. Measurements have been made with the current software and performance enhanced software. Single TCP flows are able to achieve 40 Mb/s and competing multiple TCP flows achieve over 41 Mb/s link utilization on 44.7 Mb/s DS3 links with delays comparable to US cross continent ANSNET delays. Congestion collapse is demonstrated with intentionally reduced queueing capacity and window sizes much larger than optimal. A variation of Floyd and Jacobson's Random Early Detection (RED) algorithm [1] is tested. Performance improved with the use of RED for tests involving multiple flows. With RED and queueing capacity at or above the delay-bandwidth product, congestion collapse is avoided, allowing the maximum window size to safely be set arbitrarily high. Queueing capacity greater than or equal to the delay-bandwidth product, together with RED, is recommended. RED provides performance improvement in all but the single flow case, but cannot substitute for adequate queueing capacity, particularly if high speed flows are to be supported.
Article
In this paper we address the problem of real-time identification of rate heavy-hitters on very high speed links, using small amounts of SRAM memory. Assuming a nonuniform distribution of flow rates, runs (where two randomly selected bytes belong to the same flow) occur predominantly in the heaviest flows. Based on this observation we present RHE (Realtime Heavy-hitter Estimator), a measurement tool which uses a small amount of memory for extracting the heaviest flows. Using this tool, a queue management algorithm called HH (Heavy-hitter Hold) is presented that approximates fair bandwidth allocations. RHE possesses very low memory requirements, scales with line speeds of several tens of Gbps and covers a wide range of flow rate distributions. Measurements over real Internet traces show the high efficiency of RHE, achieving high accuracy with a very small amount of SRAM memory. Compared to Estan-Varghese's Multistage Filters and Lucent's CATE estimator, RHE achieves up to 10 times smaller errors in measuring high-rate flows on a number of synthetic traces. Packet level ns2 simulations in a synthetic heavy-tailed environment are presented to illustrate the efficacy of HH.
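The run-based principle can be illustrated directly: if flow i carries a fraction p_i of the bytes, two independently sampled bytes collide on flow i with probability p_i squared, so counting collisions ("runs") and taking square roots recovers the largest shares. The sketch below is my illustration of that principle only; RHE itself is considerably more refined.

# Run-based heavy-hitter estimation sketch: a "run" occurs for flow i
# with probability p_i^2, so sqrt(runs_i / pairs) estimates p_i.
import random
from collections import Counter

def estimate_shares(byte_stream, n_pairs=200000):
    runs, pairs = Counter(), 0
    for _ in range(n_pairs):
        a = random.choice(byte_stream)   # flow id of a random byte
        b = random.choice(byte_stream)
        pairs += 1
        if a == b:
            runs[a] += 1
    return {f: (c / pairs) ** 0.5 for f, c in runs.most_common(10)}

# e.g. a stream where flow 0 sends half the bytes:
stream = [0] * 5000 + list(range(1, 5001))
print(estimate_shares(stream))           # flow 0 -> roughly 0.5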
Conference Paper
Most congestion control algorithms try to emulate processor sharing (PS) by giving each competing flow an equal share of a bottleneck link. This approach leads to fairness, and prevents long flows from hogging resources. For example, if a set of flows with the same round trip time share a bottleneck link, TCP’s congestion control mechanism tries to achieve PS; so do most of the proposed alternatives, such as eXplicit Control Protocol (XCP). But although they emulate PS well in a static scenario when all flows are long-lived, they do not come close to PS when new flows arrive randomly and have a finite amount of data to send, as is the case in today’s Internet. Typically, flows take an order of magnitude longer to complete with TCP or XCP than with PS, suggesting large room for improvement. And so in this paper, we explore how a new congestion control algorithm — Rate Control Protocol (RCP) — comes much closer to emulating PS over a broad range of operating conditions. In RCP, a router assigns a single rate to all flows that pass through it. The router does not keep flow-state, and does no per-packet calculations. Yet we are able to show that under a wide range of traffic characteristics and network conditions, RCP’s performance is very close to ideal processor sharing.
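The RCP update adjusts the single advertised rate from spare capacity and queue backlog; the form below follows the published RCP equation as I recall it, so treat the constants and exact shape as indicative rather than authoritative.

# RCP-style router rate update sketch: one rate R for all flows, no
# per-flow state. Raise R when there is spare capacity, lower it to
# drain any standing queue.
def rcp_update(R, y, q, C, d, T, alpha=0.1, beta=1.0):
    """R: current advertised rate, y: measured input rate, q: queue
    length, C: link capacity, d: average RTT, T: update interval."""
    spare = alpha * (C - y) - beta * q / d   # bandwidth to add or reclaim
    R_new = R * (1 + (T / d) * spare / C)
    return min(max(R_new, 0.01 * C), C)      # keep R in a sane range

# one congested step: input above capacity plus a standing queue
print(rcp_update(R=5e8, y=1.2e9, q=8e6, C=1e9, d=0.1, T=0.01))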
Conference Paper
The need for efficient counter architecture has arisen for the following two reasons. Firstly, a number of data streaming algorithms and network management applications require a large number of counters in order to identify important traffic characteristics. Secondly, at high speeds, current memory devices have significant limitations in terms of speed (DRAM) and size (SRAM). For some applications no information on counters is needed on a per-packet basis, and several methods have been proposed to handle this problem with low SRAM memory requirements. However, for a number of applications it is essential to have the counter information on every packet arrival. In this paper we propose two computationally and memory efficient randomized algorithms for approximating the counter values. We prove that the proposed estimators are unbiased and give variance bounds. A case study on multistage filters (MSF) over real Internet traces shows a significant improvement from using the active counters architecture.
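The abstract does not spell out its estimators, so as a generic illustration of randomized approximate counting with very little memory, here is the classic Morris counter (not necessarily the authors' scheme): store only a small exponent, increment probabilistically, and read out an unbiased estimate.

# Morris approximate counter: keep a tiny exponent c, increment with
# probability 2^-c, and estimate 2^c - 1 (an unbiased estimator).
import random

class MorrisCounter:
    def __init__(self):
        self.c = 0                       # ~log2(log2(n)) bits suffice

    def increment(self):
        if random.random() < 2.0 ** -self.c:
            self.c += 1

    def estimate(self):
        return 2 ** self.c - 1

mc = MorrisCounter()
for _ in range(100000):
    mc.increment()
print(mc.estimate())                     # about 1e5, with high variance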
Article
As the most widely used reliable transport in today's Internet, TCP has been extensively studied in the past decade. However, previous research usually only considers a small or medium number of concurrent TCP connections. The TCP behavior under many competing TCP flows has not been sufficiently explored. In this paper, we use extensive simulations to systematically investigate the performance of a large number of concurrent TCP flows. We start with a simple scenario, in which all the connections have the same roundtrip time (RTT), and the gateways use drop-tail policy. We examine how the aggregate throughput, goodput, and loss rate vary with different underlying topologies. We also look at the behavior of each individual connection when competing with other connections. We observe global synchronization in some cases. We break the synchronization by either adding random processing time or using random early detection (RED) gateways, and examine their effects on the TCP performance. Finally we investigate the TCP performance with different RTTs, and quantify the roundtrip bias using both analysis and simulations.
Article
Convergence of iterative processes in $\mathbb{C}^k$ of the form $x_{i+r_i} = \alpha_{j_i} x_{i+r_i-1} + (1-\alpha_{j_i}) P_{j_i} x_i$, where $j_i \in \{1,2,\dots,n\}$ and $i = 1,2,\dots$, is analyzed. It is shown that if the matrices $P_1,\dots,P_n$ are paracontracting in the same smooth, strictly convex norm and if the sequence $\{j_i\}_{i=1}^{\infty}$ has certain regularity properties, then the above iterates converge. This result implies the convergence of a parallel asynchronous implementation of the algebraic reconstruction technique (ART) algorithm often used in tomographic reconstruction from incomplete data.
Article
High-speed communication networks are characterized by large bandwidth-delay products. This may have an adverse impact on the stability of closed-loop congestion control algorithms. In this paper, classical control theory and Smith's principle are proposed as key tools for designing an effective and simple congestion control law for high-speed data networks. Mathematical analysis shows that the proposed control law guarantees stability of network queues and full utilization of network links in a general network topology and traffic scenario, during both transient and steady-state conditions. In particular, no data loss is guaranteed using buffers with any capacity, whereas full utilization of links is ensured using buffers with capacity at least equal to the bandwidth-delay product. The control law is transformed to a discrete-time form and is applied to ATM networks. Moreover, a comparison with the ERICA algorithm is carried out. Finally, the control law is transformed to a window form and is applied to the Internet. The resulting control law surprisingly reveals that today's Transmission Control Protocol/Internet Protocol implements a Smith predictor for congestion control. This provides a theoretical insight into the congestion control mechanism of TCP/IP along with a method to modify and improve this mechanism in a way that is backward compatible.
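A discrete-time sketch of a Smith-predictor rate controller in the spirit described here (my simplification, single fluid bottleneck): the input rate is proportional to a target backlog minus the measured queue minus the data still in flight over the last round-trip time. Subtracting the in-flight data is what compensates the delay.

# Smith-predictor congestion controller sketch (discrete time):
#   u(t) = k * ( W - q(t) - sum of inputs sent in the last RTT ).
RTT, C, W, k = 10, 10.0, 150.0, 0.5      # slots, pkts/slot, pkts, gain

u_hist = [0.0] * RTT                     # inputs sent in the last RTT
q = 0.0
for t in range(100):
    in_flight = sum(u_hist)
    u = max(0.0, k * (W - q - in_flight))
    u_hist.append(u)
    arriving = u_hist.pop(0)             # data sent one RTT ago arrives
    q = max(0.0, q + arriving - C)       # queue served at capacity C
print(round(q, 2), round(u, 2))          # settles at u = C, finite queue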
Article
We show that the fluid loss ratio in a fluid queue with finite buffer b and constant link capacity c is always a jointly convex function of b and c. This generalizes prior work by Kumaran and Mandjes (Queueing Systems 38 (2001) 471), which shows convexity of the (b,c) trade-off for large number of i.i.d. multiplexed sources, using the large deviations rate function as approximation for fluid loss. Our approach also leads to a simpler proof of the prior result, and provides a stronger basis for optimal measurement-based control of resource allocation in shared resource systems.
Conference Paper
Traffic anomalies such as failures and attacks are commonplace in today's network, and identifying them rapidly and accurately is critical for large network operators. The detection typically treats the traffic as a collection of flows that need to be examined for significant changes in traffic pattern (e.g., volume, number of connections). However, as link speeds and the number of flows increase, keeping per-flow state is either too expensive or too slow. We propose building compact summaries of the traffic data using the notion of sketches. We have designed a variant of the sketch data structure, k-ary sketch, which uses a constant, small amount of memory, and has constant per-record update and reconstruction cost. Its linearity property enables us to summarize traffic at various levels. We then implement a variety of time series forecast models (ARIMA, Holt-Winters, etc.) on top of such summaries and detect significant changes by looking for flows with large forecast errors. We also present heuristics for automatically configuring the model parameters. Using a large amount of real Internet traffic data from an operational tier-1 ISP, we demonstrate that our sketch-based change detection method is highly accurate, and can be implemented at low computation and memory costs. Our preliminary results are promising and hint at the possibility of using our method as a building block for network anomaly detection and traffic measurement.
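A hedged sketch of the main loop: a k-ary sketch is H rows of K counters updated in constant time per record; forecasting (here a simple EWMA, one of the models the paper considers) is applied directly to the sketch thanks to linearity, and keys whose observed-minus-forecast estimate is large are flagged. The per-row estimator follows the k-ary sketch construction as I recall it.

# k-ary sketch change detection sketch.
import numpy as np

H, K, ALPHA = 5, 1024, 0.5            # rows, buckets, EWMA weight
seeds = [hash(("row", h)) for h in range(H)]

def bucket(h, key):
    return hash((seeds[h], key)) % K

def update(sk, key, value=1.0):       # constant-time per record
    for h in range(H):
        sk[h, bucket(h, key)] += value

def estimate(sk, key):                # unbiased per row; take median
    S = sk[0].sum()                   # total volume (same in each row)
    vals = [(sk[h, bucket(h, key)] - S / K) / (1 - 1 / K)
            for h in range(H)]
    return float(np.median(vals))

observed = np.zeros((H, K))
forecast = np.zeros((H, K))           # EWMA of past interval sketches
# per interval: fill `observed` with update(), then flag keys whose
#   estimate(observed - forecast, key)
# is large, and roll the forecast forward:
#   forecast = ALPHA * observed + (1 - ALPHA) * forecast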
Conference Paper
Per-flow network traffic measurement is an important component of network traffic management, network performance assessment, and detection of anomalous network events such as incipient DoS attacks. In [1], the authors developed a mechanism called RATE where the focus was on developing a memory efficient scheme for estimating per-flow traffic rates to a specified level of accuracy. The time taken by RATE to estimate the per-flow rates is a function of the specified estimation accuracy and this time is acceptable for several applications. However some applications, such as quickly detecting worm related activity or the tracking of transient traffic, demand faster estimation times. The main contribution of this paper is a new scheme called ACCEL-RATE that, for a specified level of accuracy, can achieve orders of magnitude decrease in per-flow rate estimation times. It achieves this by using a hashing scheme to split the incoming traffic into several sub-streams, estimating the per-flow traffic rates in each of the substreams and then relating it back to the original per-flow traffic rates. We show both theoretically and experimentally that the estimation time of ACCEL-RATE is at least one to two orders of magnitude lower than RATE without any significant increase in the memory size.
Conference Paper
The problem of how to efficiently maintain a large number (say millions) of statistics counters that need to be incremented at very high speed has received considerable research attention recently. This problem arises in a variety of router management algorithms and data streaming algorithms, where a large array of counters is used to track various network statistics and to implement various counting sketches respectively. While fitting these counters entirely in SRAM meets the access speed requirement, a large amount of SRAM may be needed with a typical counter size of 32 or 64 bits, and hence the high cost. Solutions proposed in recent works have used hybrid architectures where small counters in SRAM are incremented at high speed, and occasionally written back ("flushed") to larger counters in DRAM. Previous solutions have used complex schedulers with tree-like or heap data structures to pick which counters in SRAM are about to overflow, and flush them to the corresponding DRAM counters. In this work, we present a novel hybrid SRAM/DRAM counter architecture that consumes much less SRAM and has a much simpler design of the scheduler than previous approaches. We show, in fact, that our design is optimal in the sense that for a given speed difference between SRAM and DRAM, our design uses the theoretically minimum number of bits per counter in SRAM. Our design uses a small write-back buffer (in SRAM) that stores indices of the overflowed counters (to be flushed to DRAM) and an extremely simple randomized algorithm to statistically guarantee that SRAM counters do not overflow in bursts large enough to fill up the write-back buffer even in the worst case. The statistical guarantee of the algorithm is proven using a combination of worst case analysis for characterizing the worst case counter increment sequence and a new tail bound theorem for bounding the probability of filling up the write-back buffer. Experiments with real Internet traffic traces show that the buffer size required in practice is significantly smaller than needed in the worst case.
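A simplified sketch of the hybrid architecture (deliberately omitting the paper's randomization and worst-case analysis, which are its actual contribution): small b-bit counters live in fast memory, and on overflow the counter index is queued in a small write-back buffer to be flushed into the full-width slow counters at DRAM speed.

# Hybrid SRAM/DRAM counter sketch. The published design additionally
# randomizes to bound how fast the write-back buffer can fill.
from collections import deque

B_BITS, N = 4, 1_000_000
sram = [0] * N                      # small counters (b bits each)
dram = [0] * N                      # full-width counters
writeback = deque()                 # indices awaiting a DRAM flush

def increment(i):
    sram[i] += 1
    if sram[i] == 1 << B_BITS:      # overflow: hand off to DRAM
        writeback.append(i)
        sram[i] = 0

def flush_one():                    # called at the slower DRAM rate
    if writeback:
        dram[writeback.popleft()] += 1 << B_BITS

def read(i):                        # exact, counting in-flight flushes
    return dram[i] + sram[i] + (1 << B_BITS) * writeback.count(i)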
Conference Paper
Making drop decisions to enforce the max-min fair resource allocation in a network of standard TCP flows without any explicit state information is a challenging problem. Here we propose a solution to this problem by developing a suite of stateless queue management schemes that we refer to as Multi-Level Comparison with index l (MLC(l)). We show analytically, using a Markov chain model, that for an arbitrary network topology of standard TCP flows and queues employing MLC(l), the resource allocation converges to max-min fair as l increases. The analytical findings are verified experimentally using packet level ns2 simulations.
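The comparison principle can be sketched as follows, under my reading of the scheme: on each arrival, draw l packets uniformly at random from the queue and drop the arrival if all of them belong to its flow. A flow occupying a share s of the buffer is then dropped with probability roughly s to the power l, so larger l penalizes aggressive flows more steeply, pushing the allocation toward max-min fairness.

# MLC(l)-style stateless drop sketch.
import random
from collections import deque

class MLCQueue:
    def __init__(self, l=2, capacity=100):
        self.l, self.capacity = l, capacity
        self.q = deque()                    # holds flow ids

    def arrival(self, flow_id):
        if len(self.q) >= self.l:
            sample = random.sample(list(self.q), self.l)
            if all(f == flow_id for f in sample):
                return False                # matched on all l: drop
        if len(self.q) >= self.capacity:
            return False                    # plain overflow drop
        self.q.append(flow_id)
        return True

    def service(self):
        return self.q.popleft() if self.q else None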
Conference Paper
Understanding the relationship between queueing delays and link utilization for general traffic conditions is an important open problem in networking research. Difficulties in understanding this relationship stem from the fact that it depends on the complex nature of arriving traffic and the problems associated with modelling such traffic. Existing AQM schemes achieve a "low delay" and "high utilization" by responding early to congestion, without considering the exact relationship between delay and utilization. However, in the context of exploiting the delay/utilization tradeoff, the optimal choice of a queueing scheme's control parameter depends on the cost associated with the relative importance of queueing delay and utilization. The optimal choice of control parameter is the one that maximizes a benefit that can be defined as the difference between utilization and the cost associated with queueing delay. We present a generic algorithm, Optimal Delay-Utilization control of t (ODU-t), that is designed with a performance goal of maximizing this benefit. Its novelty lies in the fact that it maximizes the benefit in an online manner, without requiring knowledge of the traffic conditions or specific delay-utilization models, and without complex parameter estimation. Moreover, other performance metrics like loss rate or jitter can be directly incorporated into the optimization framework as well. Packet level ns2 simulations are given to demonstrate the behavior of the proposed algorithm.
Conference Paper
Fair queuing is a technique that allows each flow passing through a network device to have a fair share of network resources. Previous schemes for fair queuing that achieved nearly perfect fairness were expensive to implement: specifically, the work required to process a packet in these schemes was O(log(n)), where n is the number of active flows. This is expensive at high speeds. On the other hand, cheaper approximations of fair queuing that have been reported in the literature exhibit unfair behavior. In this paper, we describe a new approximation of fair queuing, that we call Deficit Round Robin. Our scheme achieves nearly perfect fairness in terms of throughput, requires only O(1) work to process a packet, and is simple enough to implement in hardware. Deficit Round Robin is also applicable to other scheduling problems where servicing cannot be broken up into smaller units, and to distributed queues.
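Deficit Round Robin is simple enough to state in a few lines: each active flow has a deficit counter that grows by a fixed quantum per round-robin visit and pays for the bytes it sends; a packet too large for the current deficit waits for a later round. With a quantum at least the maximum packet size, this gives O(1) work per packet. The sketch below is a straightforward rendering of that idea.

# Deficit Round Robin sketch.
from collections import deque

class DRRScheduler:
    def __init__(self, quantum=1500):
        self.quantum = quantum
        self.flows = {}                  # flow_id -> deque of pkt sizes
        self.deficit = {}
        self.active = deque()            # round-robin list of flow ids

    def enqueue(self, flow_id, pkt_size):
        self.flows.setdefault(flow_id, deque())
        self.deficit.setdefault(flow_id, 0)
        if flow_id not in self.active:
            self.active.append(flow_id)
        self.flows[flow_id].append(pkt_size)

    def dequeue(self):
        while self.active:
            f = self.active[0]
            self.deficit[f] += self.quantum     # credit for this visit
            sent, q = [], self.flows[f]
            while q and q[0] <= self.deficit[f]:
                self.deficit[f] -= q[0]         # pay for bytes sent
                sent.append((f, q.popleft()))
            self.active.rotate(-1)
            if not q:                           # idle flows lose deficit
                self.deficit[f] = 0
                self.active.remove(f)
            if sent:
                return sent
        return []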
Conference Paper
All Internet routers contain buffers to hold packets during times of congestion. Today, the size of the buffers is determined by the dynamics of TCP's congestion control algorithm. In particular, the goal is to make sure that when a link is congested, it is busy 100% of the time, which is equivalent to making sure its buffer never goes empty. A widely used rule-of-thumb states that each link needs a buffer of size B = RTT × C, where RTT is the average round-trip time of a flow passing across the link, and C is the data rate of the link. For example, a 10 Gb/s router linecard needs approximately 250 ms × 10 Gb/s = 2.5 Gbits of buffers, and the amount of buffering grows linearly with the line-rate. Such large buffers are challenging for router manufacturers, who must use large, slow, off-chip DRAMs. And queueing delays can be long, have high variance, and may destabilize the congestion control algorithms. In this paper we argue that the rule-of-thumb B = RTT × C is now outdated and incorrect for backbone routers. This is because of the large number of flows (TCP connections) multiplexed together on a single backbone link. Using theory, simulation and experiments on a network of real routers, we show that a link with n flows requires no more than B = (RTT × C)/√n, for long-lived or short-lived TCP flows. The consequences on router design are enormous: a 2.5 Gb/s link carrying 10,000 flows could reduce its buffers by 99% with negligible difference in throughput; and a 10 Gb/s link carrying 50,000 flows requires only 10 Mbits of buffering, which can easily be implemented using fast, on-chip SRAM.
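The arithmetic is easy to check with the worked numbers from the abstract:

# Rule-of-thumb vs. small-buffer sizing, using the abstract's figures.
from math import sqrt

C, RTT = 10e9, 0.25                     # 10 Gb/s link, 250 ms avg RTT
rule_of_thumb = RTT * C                 # = 2.5 Gbit of buffering
small = rule_of_thumb / sqrt(50000)     # divide by sqrt(n) for n flows
print(rule_of_thumb / 1e9, small / 1e6) # 2.5 Gbit vs ~11.2 Mbit,
                                        # matching the ~10 Mbits claim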
Conference Paper
Network operators need to determine the composition of the traffic mix on links when looking for dominant applications, users, or estimating traffic matrices. Cisco's NetFlow has evolved into a solution that satisfies this need by reporting flow records that summarize a sample of the traffic traversing the link. But sampled NetFlow has shortcomings that hinder the collection and analysis of traffic data. First, during flooding attacks router memory and network bandwidth consumed by flow records can increase beyond what is available; second, selecting the right static sampling rate is difficult because no single rate gives the right tradeoff of memory use versus accuracy for all traffic mixes; third, the heuristics routers use to decide when a flow is reported are a poor match to most applications that work with time bins; finally, it is impossible to estimate without bias the number of active flows for aggregates with non-TCP traffic. In this paper we propose Adaptive NetFlow, deployable through an update to router software, which addresses many shortcomings of NetFlow by dynamically adapting the sampling rate to achieve robustness without sacrificing accuracy. To enable counting of non-TCP flows, we propose an optional Flow Counting Extension that requires augmenting existing hardware at routers. Both our proposed solutions readily provide descriptions of the traffic of progressively smaller sizes. Transmitting these at progressively higher levels of reliability allows graceful degradation of the accuracy of traffic reports in response to network congestion on the reporting path.
Conference Paper
This paper is a brief description of (i)-(v) and the rationale behind them. (vi) is an algorithm recently developed by Phil Karn of Bell Communications Research, described in [15]. (vii) is described in a soon-to-be-published RFC (ARPANET "Request for Comments").
Conference Paper
We propose two methods to passively measure and monitor changes in round-trip times (RTTs) throughout the lifetime of a TCP connection. Our first method associates data segments with the acknowledgments (ACKs) that trigger them by leveraging the TCP timestamp option. Our second method infers TCP RTT by observing the repeating patterns of segment clusters, where the pattern is caused by TCP self-clocking. We evaluate the two methods using both emulated and real Internet tests.
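A sketch of the timestamp-based idea in a simplified variant (my illustration, not the paper's exact pairing): a monitor near the sender remembers when each TSval was sent and, when an ACK echoes that value in TSecr, takes the elapsed time as one RTT sample.

# Passive RTT estimation via TCP timestamps (simplified variant).
def rtt_samples(events):
    """events: (time, direction, tsval, tsecr); direction 'out' = data
    from the monitored sender, 'in' = ACKs toward it."""
    sent, samples = {}, []
    for t, direction, tsval, tsecr in events:
        if direction == "out":
            sent.setdefault(tsval, t)        # first segment with TSval
        elif direction == "in" and tsecr in sent:
            samples.append(t - sent.pop(tsecr))
    return samples

trace = [(0.00, "out", 100, 0), (0.10, "in", 7, 100),
         (0.12, "out", 112, 7), (0.22, "in", 19, 112)]
print(rtt_samples(trace))                    # approximately [0.10, 0.10]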
Article
In this paper tools are developed to analyse a recently proposed random matrix model of communication networks that employ additive-increase multiplicative-decrease (AIMD) congestion control algorithms. We investigate properties of the Markov process describing the evolution of the window sizes of network users. Using paracontractivity properties of the matrices involved in the model, it is shown that the process has a unique invariant probability, and the support of this probability is characterized. Based on these results we obtain a weak law of large numbers for the average distribution of resources between the users of a network. This shows that under reasonable assumptions such networks have a well-defined stochastic equilibrium. ns2 simulation results are discussed to validate the obtained formulae. (The simulation program ns2, or network simulator, is an industry standard for the simulation of Internet dynamics.)
Article
We study communication networks that employ drop-tail queueing and Additive-Increase Multiplicative-Decrease (AIMD) congestion control algorithms. It is shown that the theory of nonnegative matrices may be employed to model such networks. In particular, important network properties such as (i) fairness, (ii) rate of convergence, and (iii) throughput can be characterised by certain nonnegative matrices. We demonstrate that these results can be used to develop tools for analysing the behaviour of AIMD communication networks. The accuracy of the models is demonstrated by several NS simulation studies.