Article

Windowed Ping: An IP layer performance diagnostic


Abstract

The Internet is suffering from multiple effects of its rapid growth. Network providers now find themselves in the uncomfortable position of deploying new products and technologies into already-congested environments without adequate tools to assess their performance. In this paper we present a diagnostic tool that provides direct measurement of IP performance, including queue dynamics at or beyond the onset of congestion. It uses a transport-style sliding-window algorithm combined with ping or traceroute to sustain packet queues in the network. It can directly measure such parameters as throughput, packet loss rate and queue size as functions of packet and window sizes. Other parameters, such as switching time per packet or per byte, can also be derived. The measurements can be performed either in a test-bed environment (yielding the most accurate results), on single routers in situ in the Internet, or along specific paths in the production Internet. We illustrate several measurement techniques.
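The window/throughput/RTT relationships the abstract describes can be seen in a toy single-bottleneck model. This is a simulation sketch under assumed parameters, not the mping implementation itself; all names and values here are illustrative:

```python
# Sketch of the sliding-window probing idea behind "windowed ping" (mping).
# Real mping sends ICMP/UDP probes; here a single bottleneck queue is
# modeled analytically so the window/RTT/throughput relationships can be
# seen without raw sockets. Parameters are illustrative, not from the paper.

def simulate_windowed_ping(window, base_rtt=0.05, service_time=0.001):
    """Return (throughput_pps, rtt) for a fixed window of outstanding
    probes over a link with the given propagation RTT and per-packet
    bottleneck service time (Little's-law steady state)."""
    capacity = 1.0 / service_time      # packets/sec the bottleneck can serve
    pipe = base_rtt / service_time     # packets in flight that fit the pipe
    if window <= pipe:
        # Queue stays empty: RTT is the base RTT, throughput scales with window.
        return window / base_rtt, base_rtt
    # Beyond the pipe size a standing queue forms: throughput saturates at the
    # bottleneck capacity and RTT grows linearly with the window.
    return capacity, window * service_time

for w in (10, 50, 100, 200):
    thr, rtt = simulate_windowed_ping(w)
    print(f"window={w:4d}  throughput={thr:7.1f} pkt/s  rtt={rtt * 1000:6.1f} ms")
```

Plotting throughput and RTT against the window size in this way is exactly how the tool exposes the onset of congestion: throughput flattens and RTT starts climbing at the same window value.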


... Many tools are available to measure a variety of metrics, including hop-by-hop bandwidth [29,41,57], the bottleneck or available bandwidth [21,28,43,51], TCP bandwidth [45,58,59], latency [15,35,42,62,69,89], packet losses [77,88], DNS lookup [25,39,67], and the aggregate performance of higher-level operations such as a web page download [3,7]. One technique for obtaining diverse measurements from the end-user's perspective is to instrument connections between the client and the server whenever the client requests a service. ...
... These are shown as functions of the window size for a typical link in Figure 3. These plots resemble those generated by "Windowed Ping" (mping) [14], a UDP-based tool that uses a similar measurement algorithm. ...
Conference Paper
This paper describes a tool to diagnose network performance problems commonly affecting TCP-based applications. The tool, pathdiag, runs under a web server framework to provide non-expert network users with one-click diagnostic testing, tuning support and repair instructions. It diagnoses many causes of poor network performance using Web100 statistics and TCP performance models to overcome the lack of otherwise identifiable symptoms.
... A useful estimate could be derived passively from the pattern of loss during TCP slow start. Tools like those described in [12], [13] could also be used. ...
Conference Paper
Full-text available
In this paper, we describe a receiver-based congestion control policy that leverages TCP flow control mechanisms to prioritize mixed traffic loads across access links. We manage queueing at the access link to: (1) improve the response time of interactive network applications; (2) reduce congestion-related packet losses; while (3) maintaining high throughput for bulk-transfer applications. Our policy controls queue length by manipulating receive socket buffer sizes. We have implemented this solution in a dynamically loadable Linux kernel module, and tested it over low-bandwidth links. Our approach yields a 7-fold improvement in packet latency over an unmodified system while maintaining 94% link utilization. In the common case, congestion-related packet losses at the access link can be eliminated. Finally, by prioritizing short flows, we show that our system reduces the time to download a complex Web page during a large background transfer by a factor of two.
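The core mechanism, shrinking the receive socket buffer so that TCP's advertised window bounds queueing at the access link, can be illustrated with standard socket options. This is a minimal user-space sketch of the idea, not the paper's kernel module; the 8192-byte value is an arbitrary example:

```python
import socket

# The policy caps a sender's rate through TCP flow control by shrinking the
# receiver's socket buffer (and hence the window the kernel advertises).
# Minimal illustration of the mechanism; buffer sizes are example values.

def set_receive_window(sock, nbytes):
    """Clamp the receive buffer; the kernel advertises at most this much
    unread data to the sender, bounding queueing upstream of the receiver.
    Returns the effective size (Linux rounds and doubles the request)."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, nbytes)
    return sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
effective = set_receive_window(s, 8192)   # throttle a bulk-transfer flow
print("effective SO_RCVBUF:", effective)
s.close()
```

In the paper's module the equivalent adjustment is made per-flow inside the kernel, so interactive flows can be given larger windows than bulk transfers sharing the same access link.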
... A survey of measurement tools (excerpt):
zing [45]: Poisson RTT
ally [53]: alias resolution
tbit [41]: end-host TCP implementation
king [21]: estimated RTT
nmap [16]: end-host services
treno [37]: TCP bandwidth
wping [36]: TCP bandwidth
iperf [55]: TCP bandwidth, loss
netperf [29]: TCP bandwidth ...
Article
We present Scriptroute, a system that allows ordinary Internet users to conduct network measurements from remote vantage points. We seek to combine the flexibility found in dedicated measurement testbeds such as NIMI with the general accessibility and popularity of Web-based public traceroute servers. To use Scriptroute, clients use DNS to discover measurement servers and then submit a measurement script for execution in a sandboxed, resource-limited environment. The servers ensure that the script does not expose the network to attack by applying source- and destination-specific filters and security checks, and by rate-limiting traffic.
... ICMP has been previously used by several researchers and system administrators to measure Internet round-trip time (RTT) [4,10,11,12,15,18]. It was also used by Merit Network Inc. to measure internodal latency in the NSFNET T1 backbone [6]. ...
Article
Full-text available
We present the results of a study of Internet round-trip delay. The links chosen include links to frequently accessed commercial hosts as well as well-known academic and foreign hosts. Each link was studied for a 48-hour period. We attempt to answer the following questions: (1) how rapidly and in what manner does the delay change -- in this study, we focus on medium-grain (seconds/minutes) and coarse-grain time-scales (tens of minutes/hours); (2) what does the frequency distribution of delay look like and how rapidly does it change; (3) what is a good metric to characterize the delay for the purpose of adaptation. Our conclusions are: (a) there is large temporal and spatial variation in round-trip time (RTT); (b) RTT distribution is usually unimodal and asymmetric and has a long tail on the right hand side; (c) RTT observations in most time periods are tightly clustered around the mode; (d) the mode is a good characteristic value for RTT distributions; (e) RTT distributions ch...
Article
In TCP/IP communication, the effective throughput seen at the user end decreases as the load increases. When such a service is offered by a network provider, the provider must manage the network so as to realize and maintain an effective throughput that satisfies the user while keeping cost in view. This study proposes a method of analyzing effective throughput based on packet monitoring, so that the results can be reflected in the management of network services. The proposed analysis prepares several evaluation metrics, such as round-trip time (RTT) and IP datagram loss ratio, in addition to effective throughput, and analyzes effective throughput in close relation to those metrics. As an example of evaluation by the proposed method, a TCP/IP network is considered in which multiple WWW clients and a WWW server are connected through low-speed links. The relation between load and effective throughput is derived, and the effectiveness of the proposed evaluation method is clearly demonstrated. © 1998 Scripta Technica. Electron Comm Jpn Pt 1, 81(4): 20–30, 1998
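The style of analysis described, deriving effective throughput together with RTT and loss ratio from monitored packets, can be sketched over a synthetic trace. The trace format and all values below are invented for illustration:

```python
# Sketch of packet-monitoring analysis: derive goodput, mean RTT and loss
# ratio from a trace of (send_time, ack_time_or_None, payload_bytes)
# tuples. The trace format is an assumption made for this illustration.

def analyze_trace(trace):
    acked = [(s, a, n) for s, a, n in trace if a is not None]
    loss_ratio = 1.0 - len(acked) / len(trace)
    rtts = [a - s for s, a, _ in acked]
    mean_rtt = sum(rtts) / len(rtts)
    # Effective throughput: bytes actually delivered over the observed span.
    duration = max(a for _, a, _ in acked) - min(s for s, _, _ in trace)
    goodput = sum(n for _, _, n in acked) / duration
    return goodput, mean_rtt, loss_ratio

# One lost datagram (ack_time None) out of four full-sized segments.
trace = [(0.0, 0.1, 1460), (0.1, 0.25, 1460), (0.2, None, 1460), (0.3, 0.5, 1460)]
goodput, rtt, loss = analyze_trace(trace)
print(f"goodput={goodput:.0f} B/s  rtt={rtt * 1000:.0f} ms  loss={loss:.0%}")
```

Relating the three outputs as the offered load varies is exactly the kind of evaluation the proposed method performs.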
Article
Bulk Transfer Capacity (BTC) is a measure of the sustainable data throughput on a network path that is subject to congestion control algorithms, such as those used by the Transmission Control Protocol (TCP). BTC is an important network performance metric because the vast majority of all traffic, typically around 95% of all packets, is conveyed by TCP. The performance of many Internet-based services is thus largely dependent on the BTC of the underlying network path. This paper proposes a novel tool, ReturnFlow6, for estimating the BTC of an IPv6 network path, measured from a single point to a non-instrumented target. To verify the realistic operation of ReturnFlow6, a series of BTC measurements is conducted on a test network and a "live" Intranet. The results are presented, analyzed and compared against the output of BTC estimation techniques for IPv4, permitting the relative advantages and disadvantages of each approach to be established.
Article
We have developed a scalable network traffic generator and a general computer network benchmark for Unix platforms. This benchmark can be used to evaluate performance of user-level applications which interface directly with the transport layer of TCP/IP running on all types of computer networks. The network workload consists of distributed client/server process pairs (DCSP) and is called the DCSP benchmark. It can include any number and any distribution of communicating client/server pairs, yielding a very high level of flexibility and scalability of network traffic. We propose a standard classification of network workloads, define network performance indicators, and introduce performance measurement methods based on various versions of the DCSP benchmark. We also present experimental results generated using DCSP workloads to compare LAN configurations, study network saturation phenomena, and test WAN communications.
Article
Today, ever more electronic devices are expected to exchange data with one another. New applications and protocols are constantly being developed for them, to drive the integration of these devices forward and to open up new possibilities for their use. For many new applications and protocols there are no suitable test environments, either because the corresponding hardware is itself still in development, or because building a test environment is too expensive and testing in real systems leads to non-reproducible results. Distributed network emulation is a tool for providing such applications with purpose-built network environments for performance measurement. The real applications are executed on nodes in the emulation network; the emulation environment then provides network connections whose properties are tailored to the needs of the performance measurement. The NET project (Network Emulation Testbed) of the Distributed Systems department investigates methods for measuring the performance of distributed applications and protocols using distributed network emulation. To increase scalability, the emulation nodes are virtualized so that several test subjects can run on a single node. This work examines the effects of that virtualization, with the aim of detecting distortions of the test results already during an emulation run. These effects are to be characterized concretely enough that the emulation environment can check for them autonomously.
Conference Paper
The distribution of RTT (round-trip time) is an important aspect of Internet end-to-end behavior. It has been studied extensively, but many conclusions apply only to cases with a small packet loss rate. We study RTT characteristics on nearly 40 end-to-end paths in CSTNET and CERNET in China, and draw the following conclusions: (1) the distribution characteristics of RTT and the loss rate are interdependent; (2) when the loss rate is small, the RTT distribution is usually unimodal, but as the loss rate increases, the RTT distribution is no longer unimodal and becomes more and more dispersed; (3) as the loss rate increases, occurrences of the inherent RTT tend to decrease.
Article
Full-text available
The authors present random early detection (RED) gateways for congestion avoidance in packet-switched networks. The gateway detects incipient congestion by computing the average queue size. The gateway could notify connections of congestion either by dropping packets arriving at the gateway or by setting a bit in packet headers. When the average queue size exceeds a preset threshold, the gateway drops or marks each arriving packet with a certain probability, where the exact probability is a function of the average queue size. RED gateways keep the average queue size low while allowing occasional bursts of packets in the queue. During congestion, the probability that the gateway notifies a particular connection to reduce its window is roughly proportional to that connection's share of the bandwidth through the gateway. RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP. The RED gateway has no bias against bursty traffic and avoids the global synchronization of many connections decreasing their window at the same time. Simulations of a TCP/IP network are used to illustrate the performance of RED gateways.
Article
This paper presents Random Early Detection (RED) gateways for congestion avoidance in packet-switched networks. The gateway detects incipient congestion by computing the average queue size. The gateway could notify connections of congestion either by dropping packets arriving at the gateway or by setting a bit in packet headers. When the average queue size exceeds a preset threshold, the gateway drops or marks each arriving packet with a certain probability, where the exact probability is a function of the average queue size. RED gateways keep the average queue size low while allowing occasional bursts of packets in the queue. During congestion, the probability that the gateway notifies a particular connection to reduce its window is roughly proportional to that connection's share of the bandwidth through the gateway. RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP. The RED gateway has no bias against bursty traffic and avoids the global synchronization of many connections decreasing their window at the same time. Simulations of a TCP/IP network are used to illustrate the performance of RED gateways.
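The marking rule both abstracts describe, a drop probability that rises linearly with an exponentially weighted average of the queue size between two thresholds, can be sketched compactly. Parameter values here are illustrative defaults, not the recommended settings the paper derives:

```python
import random

# Minimal sketch of the RED marking decision described above.
# Parameter values are illustrative; the paper derives recommended settings.

class RedGateway:
    def __init__(self, min_th=5.0, max_th=15.0, max_p=0.1, weight=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight
        self.avg = 0.0                      # EWMA of the instantaneous queue size

    def on_arrival(self, queue_len, rand=random.random):
        """Return True if the arriving packet should be dropped or marked."""
        # Low-pass filter the queue size so short bursts are tolerated.
        self.avg = (1.0 - self.weight) * self.avg + self.weight * queue_len
        if self.avg < self.min_th:
            return False                    # no incipient congestion
        if self.avg >= self.max_th:
            return True                     # forced drop/mark
        # Between the thresholds the probability rises linearly with avg.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return rand() < p
```

Because each arriving packet is dropped or marked independently with this probability, a connection's chance of being notified is roughly proportional to its share of the packets, and hence of the bandwidth, through the gateway, which is the fairness property the abstract highlights.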
Article
Gateways in very high speed internets will need to have low processing requirements and rapid responses to congestion. This has prompted a study of the performance of the Random Drop algorithm for congestion recovery. It was measured in experiments involving local and long distance traffic using multiple gateways. For the most part, Random Drop did not improve the congestion recovery behavior of the gateways. A surprising result was that its performance was worse in a topology with a single gateway bottleneck than in those with multiple bottlenecks. The experiments also showed that local traffic is affected by events at distant gateways.
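The Random Drop policy under study can be sketched as a queue that, on overflow, evicts a uniformly random victim from the queued packets plus the new arrival, so heavy senders are hit roughly in proportion to their share of the queue. The queue capacity is an illustrative parameter:

```python
import random

# Sketch of the Random Drop congestion-recovery policy: on overflow, a
# victim is chosen uniformly at random from the queued packets plus the
# arriving one. Capacity is an illustrative parameter.

def random_drop_enqueue(queue, packet, capacity, rng=random):
    """Enqueue `packet`; on overflow drop a uniformly random victim.
    Returns the dropped packet, or None if there was room."""
    if len(queue) < capacity:
        queue.append(packet)
        return None
    victim_index = rng.randrange(len(queue) + 1)   # +1 includes the arrival
    if victim_index == len(queue):
        return packet                              # arrival itself is dropped
    victim = queue.pop(victim_index)
    queue.append(packet)
    return victim
```

Contrast this with tail drop, which always discards the arrival, and with RED above, which acts probabilistically before the queue overflows.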
Conference Paper
This paper is a brief description of (i)-(v) and the rationale behind them. (vi) is an algorithm recently developed by Phil Karn of Bell Communications Research, described in [15]. (vii) is described in a soon-to-be-published RFC (ARPANET "Request for Comments").
  • J Postel
J. Postel, "Internet Protocol," RFC 791, USC/Information Sciences Institute, September 1981.
  • S Bradner
S. Bradner, "Ethernet Bridges and Routers," Data Communications, February 1992.
  • S Bradner
S. Bradner, "Ethernet Bridges and Routers," presentation at Interop, August 1993 (no proceedings). An electronic version is available via gopher://ndtlgopher.harvard.edu/11/ndtl/results.
  • V Jacobson
Documentation and source available via ftp from ftp.ee.lbl.gov.