Conference Paper

Analyzing the impact of bufferbloat on latency-sensitive applications


Abstract

Delay-sensitive applications, such as live and interactive video, are mainstream in today’s Internet and are set to increase with the emergence of web-based video conferencing. For such applications, quality cannot be captured solely by a flow’s throughput: interactive video needs to avoid fluctuations in both visual quality and delay. This is far more valuable than achieving a fleeting increase in throughput. Emerging real-time protocols are being designed with these goals in mind, but it is important to evaluate these methods when sharing the network with real-world traffic. Much of today’s Internet traffic uses TCP CUBIC; we therefore quantify its impact on interactive video experience (visual quality and delay) by measuring the degradation imposed on interactive video when sharing a network bottleneck with CUBIC traffic. To understand this impact, we compare performance when two delay-based congestion control schemes are used, one for interactive video and another for file transfer, and show that these algorithms can assure good video experience and appropriate download time to their respective users. In contrast, we suggest that achieving such a “low-delay coexistence” with TCP CUBIC would require the use of Active Queue Management (AQM) techniques. The paper therefore provides quantitative evidence that AQM can force loss-based TCP and delay-sensitive flows to reach a stable equilibrium point that is similar to the one naturally achieved when TCP flows are governed by delay-based mechanisms.


... A DDoS attack creates congestion through a malicious flow that causes most of the legitimate packets to be dropped or rejected on the way, without reaching the destination. Active queue management algorithms have been developed to protect networks through congestion notification, to resolve problems in the network and to bring the network out of congestion [2]. DDoS attacks damage network connections, causing the network to break down, slow down or become unusable. ...
Chapter
Full-text available
The networks used in many areas, such as location-based services, robotics, smart building assessment, smart water management, smart mobile learning, medical image analysis and processing, and wearable technologies, have to deal with various security problems. Active queue management algorithms are used to manage network resources and solve problems in the network. DDoS attacks prevent the effective use of network resources and cause network services to be disrupted or dropped. In this study, we classify the performance of queue management algorithms such as RED, SRED, BLUE, SFB, REM and CoDel under DDoS attacks in terms of delay, throughput, jitter and fairness index values. The comparison shows that, thanks to its flexible structure, the CoDel algorithm gives better results in terms of packet loss and fairness index under DDoS attacks.
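As a point of reference for the fairness comparison above, Jain's fairness index over per-flow throughputs can be computed in a few lines. The sketch below is illustrative only; the throughput values are hypothetical and do not come from the cited study.

def jain_fairness(throughputs):
    # Jain's index: (sum x)^2 / (n * sum x^2); 1.0 means perfectly fair,
    # 1/n means a single flow takes everything.
    n = len(throughputs)
    total = sum(throughputs)
    sum_sq = sum(x * x for x in throughputs)
    return (total * total) / (n * sum_sq) if sum_sq > 0 else 0.0

# Hypothetical example: three healthy flows and one flow starved during an attack.
print(jain_fairness([5.0, 5.2, 4.9, 0.3]))   # roughly 0.78
print(jain_fairness([5.0, 5.0, 5.0, 5.0]))   # exactly 1.0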
... We leveraged the multipath power models derived by some of these works, and used them to characterize multipath energy consumption in the real world [128, 99, 90]. Prioritizing traffic from certain applications using per-class queuing is also recommended by the IETF as one of the best practices for active queue management on network devices [56]. ...
Thesis
In today’s rapidly growing smartphone society, the time users are spending on their smartphones is continuing to grow and mobile applications are becoming the primary medium for providing services and content to users. With such fast-paced growth in smartphone usage, cellular carriers and internet service providers continuously upgrade their infrastructure to the latest technologies and expand their capacities to improve the performance and reliability of their network and to satisfy exploding user demand for mobile data. On the other side of the spectrum, content providers and e-commerce companies adopt the latest protocols and techniques to provide smooth and feature-rich user experiences on their applications. To ensure a good quality of experience, monitoring how applications perform on users' devices is necessary. Often, network and content providers lack such visibility into the end-user application performance. In this dissertation, we demonstrate that having visibility into the end-user perceived performance, through system design for efficient and coordinated active and passive measurements of end-user application and network performance, is crucial for detecting, diagnosing, and addressing performance problems on mobile devices. My dissertation consists of three projects to support this statement. First, to provide such continuous monitoring on smartphones with constrained resources that operate in such a highly dynamic mobile environment, we devise efficient, adaptive, and coordinated systems, as a platform, for active and passive measurements of end-user performance. Second, using this platform and other passive data collection techniques, we conduct an in-depth user trial of mobile multipath to understand how Multipath TCP (MPTCP) performs in practice. Our measurement study reveals several limitations of MPTCP. Based on the insights gained from our measurement study, we propose two different schemes to address the identified limitations of MPTCP. Last, we show how to provide visibility into the end-user application performance for internet providers and in particular home WiFi routers by passively monitoring users' traffic and utilizing per-app models mapping various network quality of service (QoS) metrics to the application performance.
... These algorithms aim to improve some or all of the properties such as network utilization, packet loss, and adaptability to different traffic loads. These approaches have proposed solutions such as loss-based congestion control [8], [9], delay-based congestion control [10], [11], and rate-based congestion control [12], [13]. The proposed Active Queue Management (AQM) algorithms have tried to solve these problems with implicit methods [14], [15], explicit methods [16], [17], and delay-based methods [18], [19]. ...
Article
Full-text available
With the rapid development of mobile communication, most internet content is now delivered over cellular networks. The algorithms used during congestion in cellular networks try to solve packet delay, queue overflow and bottleneck problems. In LTE networks, data transfer between the remote host and the PG-W node requires high speed, which directly affects the operating speed of the cellular network. Choosing the right active queue management algorithm is therefore critical for the LTE cellular network. In this study, the performance of RED, CoDel, PIE and pFIFO, active queue management algorithms operating between the remote host and the PG-W in LTE networks, is examined comparatively in terms of end-to-end average throughput, delay and packet drop rate, and the results are evaluated.
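For readers unfamiliar with the baseline scheme in this comparison, the sketch below shows the core of RED's probabilistic drop decision: an EWMA of the queue length is compared against two thresholds, and packets are dropped with a probability that grows linearly between them. It is a simplified illustration (the packet-count correction of full RED is omitted) and the parameter values are assumptions, not those used in the study.

import random

MIN_TH, MAX_TH = 5, 15   # queue thresholds in packets (assumed values)
MAX_P = 0.1              # maximum early-drop probability (assumed)
W_Q = 0.002              # EWMA weight for the average queue size (assumed)

avg_q = 0.0

def red_should_drop(current_queue_len):
    # Update the moving average of the queue size, then decide on this arrival.
    global avg_q
    avg_q = (1 - W_Q) * avg_q + W_Q * current_queue_len
    if avg_q < MIN_TH:
        return False                              # small average queue: keep the packet
    if avg_q >= MAX_TH:
        return True                               # large average queue: always drop
    p = MAX_P * (avg_q - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p                    # early drop, probability grows linearly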
... To address the problem of network congestion and its related performance issues, the research community and industrial organizations have designed many TCP congestion control algorithms and Active Queue Management (AQM) algorithms to improve system performance. They have proposed various solutions to the problem, such as receiver window control [2,3], loss-based congestion control [5,6], delay-based congestion control [7,8], and rate-based congestion control [9,10], which come under TCP congestion control. Among AQM techniques, there are various methods such as implicit methods [11,12], explicit methods [13,14] and delay-based methods [15,16]. ...
Article
During congestion in an LTE network, most algorithms address either the queue overflow problem or the bufferbloat problem, through an implicit congestion control mechanism or a flow control technique. The flow control technique solves the bufferbloat problem, but it penalizes users competing for a shared resource in coexistence with conventional TCP or UDP users. The implementation of explicit congestion notification at the LTE eNodeB is difficult because the network layers do not support it. Therefore, in this paper, congestion identification and feedback mechanisms are proposed to reduce the queuing delay and the queue overflow problem by varying the congestion window size at the sender based on an estimate of the congestion level at the eNodeB. The extensive experimental results illustrate that the proposed algorithms reduce the delay of the packets and increase the fairness among the users compared to drop-tail, RED, STRED and PIE methods. Moreover, the Packet Delivery Fraction (PDF) is increased and the throughput of the system is maintained. In addition, it competes with other users of conventional methods for the shared resources in a fair manner.
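The congestion identification and feedback mechanism proposed in the paper is not reproduced here; the sketch below only illustrates the general idea of scaling the sender's congestion window from an estimated congestion level (here, an estimated queuing delay). The thresholds and adjustment factors are hypothetical.

TARGET_DELAY_MS = 20.0   # queuing delay we would like to stay below (hypothetical)
MIN_CWND = 2             # never shrink below 2 segments

def adjust_cwnd(cwnd, estimated_queue_delay_ms):
    # Heavy congestion: multiplicative decrease; mild congestion: gentle back-off;
    # otherwise additive increase.
    if estimated_queue_delay_ms > 2 * TARGET_DELAY_MS:
        return max(MIN_CWND, cwnd // 2)
    if estimated_queue_delay_ms > TARGET_DELAY_MS:
        return max(MIN_CWND, cwnd - 1)
    return cwnd + 1

cwnd = 10
for delay_ms in (5, 12, 25, 48, 30, 8):
    cwnd = adjust_cwnd(cwnd, delay_ms)
    print(delay_ms, cwnd)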
... It has been mentioned that, for a specific buffer range, real-time traffic losses increase as the buffer size increases, due to buffer-sharing dynamics at routers between real-time and elastic traffic. The queuing delay is closely related to the buffer sizes of routers (Iya et al. 2015). The issue of bufferbloat deserves mention, even though it has been known for three decades. ...
Article
Full-text available
TCP is the dominant protocol in the Internet, and the delay friendliness of TCP is a well studied, complex and practical issue. While all other components of end-to-end delay have been adequately characterized, queuing delay and the number of re-transmissions, which are random in nature, add uncertainty in the Internet. The datagrams at the router come from a number of multiplexed flows M(n), which constitute a stochastic process. This paper examines queuing behavior subject to multiplexing a stochastic process M(n) of flows. The arrival and service processes at the bottleneck router have been mathematically modeled, accounting for different proportions of TCP and UDP with their datagram sizes as parameters. This model is then used to evaluate the mean queuing delay of datagrams for different fractions of TCP in the background traffic. Further, using a discrete queue analysis of datagrams, an estimate of the average instantaneous delay for datagrams of a tagged flow has been derived, to ascertain the delay impact that multiplexing by fellow flows has on the datagrams of a tagged flow. The analysis divulges an intriguing behavior: multiplexing of flows ‘harms’ the queuing delay. It has been observed that the mean queuing delay for datagrams and the average instantaneous delay for datagrams of a typical flow degrade as the number of fellow TCP flows in the background traffic increases.
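The paper derives its own arrival and service model; as a simpler point of reference only, the Pollaczek-Khinchine formula for the mean waiting time of an M/G/1 queue already shows how the mix of TCP and UDP datagram sizes enters the delay through the second moment of the service time. The traffic mix, datagram sizes and link rate below are hypothetical.

def mg1_mean_wait(lam, sizes_bits, probs, capacity_bps):
    # Pollaczek-Khinchine mean waiting time: W = lam * E[S^2] / (2 * (1 - rho)),
    # where S is the (mixed) service time and rho = lam * E[S].
    es  = sum(p * s / capacity_bps for p, s in zip(probs, sizes_bits))
    es2 = sum(p * (s / capacity_bps) ** 2 for p, s in zip(probs, sizes_bits))
    rho = lam * es
    if rho >= 1.0:
        raise ValueError("unstable queue: utilization >= 1")
    return lam * es2 / (2 * (1 - rho))

# Hypothetical mix: 80% TCP datagrams of 1500 B, 20% UDP datagrams of 200 B,
# 700 arrivals/s on a 10 Mb/s bottleneck -> mean queuing delay of roughly 1.3 ms.
print(mg1_mean_wait(700.0, [1500 * 8, 200 * 8], [0.8, 0.2], 10e6))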
... Thus, there is overflow of RLC queues due to the large volume of traffic in a short period of time, leading to high delay and poor performance. To guarantee high throughput and low delay during congestion, researchers have proposed various methods such as buffer-aware scheduling [5][6][7][8], active queue management (AQM) techniques [9][10][11][12], receiver window control [13][14][15], loss-based congestion control [16][17][18], delay-based congestion control [16,19,20], rate-based congestion control [4,20,21], admission and congestion control [22][23][24][25], and resource starvation [26][27][28]. ...
Article
Full-text available
The cellular network keeps a vast amount of queue space at eNodeBs (base stations) to reduce queue overflow during bursts in data traffic. However, this adversely affects delay-sensitive applications and user quality of experience. Recently, a few researchers have focused on reducing packet delay, but this has a negative impact on the utilization of network resources by the users. Further, it fails to maintain fairness among users competing for a shared resource in coexistence with conventional TCP or UDP users. Therefore, in this paper, the adaptive receiver-window adjustment (ARWA) algorithm is proposed to efficiently utilize the network resources and ensure fairness among users in a resource-competitive environment; it requires only a slight modification of TCP at both the sender and receiver. The proposed mechanism dynamically varies the receiver window size based on the data rate and the delay information of the packets, to enhance the performance of the system. Extensive experimental results illustrate that the ARWA algorithm reduces the delay of TCP packets and increases fairness among users. In addition, it enhances the packet delivery fraction (PDF) and maintains the throughput of the system. Moreover, it competes with other conventional TCP users for the shared network resources in a fair manner.
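ARWA itself is not reproduced here; the sketch below only illustrates the underlying idea of sizing the advertised receive window from the measured data rate and delay, roughly a bandwidth-delay product plus a small queuing budget, so the sender cannot fill the eNodeB buffer far beyond it. All parameter values are assumptions.

MSS = 1460               # bytes per segment
DELAY_BUDGET_S = 0.02    # extra queuing we are willing to tolerate (assumed)

def advertised_window(measured_rate_bps, min_rtt_s):
    # Size the window to rate * (min RTT + budget), rounded down to whole segments.
    bdp_bytes = measured_rate_bps / 8.0 * (min_rtt_s + DELAY_BUDGET_S)
    segments = max(2, int(bdp_bytes // MSS))
    return segments * MSS

# Hypothetical LTE downlink: 20 Mb/s measured rate and 40 ms minimum RTT.
print(advertised_window(20e6, 0.04))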
... QoE Aware Traffic Management. Traffic prioritization is a known technique to mitigate in-network bufferbloat [46], [31], [29]. Prioritizing traffic from certain applications using per-class queuing is also recommended by IETF as one of the best practices for active queue management on network devices [14]. ...
... Authors have found that, when queue management algorithms are present on the path, an undesired phenomenon occurs causing LEDBAT flows to fairly share the bandwidth with TCP flows. The closest work to ours is [13], which considers SVC-based Congestion Control (SCC), a recently proposed delay-based congestion control algorithm for interactive video, and TCP CUBIC in the case of queues governed either by DropTail or PIE. Ns-2 simulation results show that SCC is not able to grab its fair share when competing with TCP flows when DropTail queues are employed. ...
Article
Real-time media communication requires not only congestion control, but also minimization of queuing delays to provide interactivity. In this work we consider the case of real-time communication between web browsers (WebRTC) and we focus on the interplay of an end-to-end delay-based congestion control algorithm, i.e. the Google congestion control (GCC), with two delay-based AQM algorithms, namely CoDel and PIE, and two flow queuing schedulers, i.e. SFQ and FQ-CoDel. Experimental investigations show that, when only GCC flows are considered, the end-to-end algorithm is able to contain queuing delays without AQMs. Moreover, the interplay of GCC flows with PIE or CoDel leads to higher packet losses with respect to the case of a DropTail queue. In the presence of concurrent TCP traffic, PIE and CoDel reduce the queuing delays with respect to DropTail at the cost of increased packet losses. In this scenario, flow queuing schedulers offer a better solution.
Thesis
The network technologies that underpin the Internet have evolved significantly over the last decades, but one aspect of network performance has remained relatively unchanged: latency. In 25 years, the typical capacity or "bandwidth" of transmission technologies has increased by 5 orders of magnitude, while latency has barely improved by an order of magnitude. Indeed, there are hard limits on latency, such as the propagation delay which remains ultimately bounded by the speed of light. This diverging evolution between capacity and latency is having a profound impact on protocol design and performance, especially in the area of transport protocols. It indirectly caused the Bufferbloat problem, whereby router buffers are persistently full, increasing latency even more. In addition, the requirements of end-users have changed, and they expect applications to be much more reactive. As a result, new techniques are needed to reduce the latency experienced by end-hosts. This thesis aims at reducing the experienced latency by using end-to-end mechanisms, as opposed to "infrastructure" mechanisms. Two end-to-end mechanisms are proposed. The first is to multiplex several messages or data flows into a single persistent connection. This allows better measurements of network conditions (latency, packet loss); this, in turn, enables better adaptation such as faster retransmission. I applied this technique to DNS messages, where I show that it significantly improves end-to-end latency in case of packet loss. However, depending on the transport protocol used, messages can suffer from Head-of-Line blocking: this problem can be solved by using QUIC or SCTP instead of TCP. The second proposed mechanism is to exploit multiple network paths (such as Wi-Fi, wired Ethernet, 4G). The idea is to use low-latency paths for latency-sensitive network traffic, while bulk traffic can still exploit the aggregated capacity of all paths. This idea was partially realized by Multipath TCP, but it lacks support for multiplexing. Adding multiplexing allows data flows to cooperate and ensures that the scheduler has better visibility on the needs of individual data flows. This effectively amounts to a scheduling problem that was identified only very recently in the literature as "stream-aware multipath scheduling". My first contribution is to model this scheduling problem. As a second contribution, I proposed a new stream-aware multipath scheduler, SRPT-ECF, that improves the performance of small flows without impacting larger flows. This scheduler could be implemented as part of a MPQUIC (Multipath QUIC) implementation. More generally, these results open new opportunities for cooperation between flows, with applications such as improving WAN aggregation.
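The SRPT-ECF scheduler proposed in the thesis is not reproduced here; the sketch below only illustrates the SRPT rule it builds on, namely always serving the stream with the least remaining bytes, which is what lets short interactive flows finish ahead of bulk transfers. Stream names and sizes are hypothetical.

import heapq

def srpt_schedule(streams, chunk=1452):
    # streams: dict stream_id -> remaining bytes. Repeatedly send one chunk from
    # the stream with the smallest amount of data left.
    heap = [(remaining, sid) for sid, remaining in streams.items() if remaining > 0]
    heapq.heapify(heap)
    while heap:
        remaining, sid = heapq.heappop(heap)
        sent = min(chunk, remaining)
        yield sid, sent
        if remaining - sent > 0:
            heapq.heappush(heap, (remaining - sent, sid))

# The short interactive request completes before the bulk transfer is resumed.
for sid, sent in srpt_schedule({"bulk": 10000, "interactive": 3000}):
    print(sid, sent)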
Conference Paper
Full-text available
Bufferbloat is a phenomenon where excess buffers in the network cause high latency and jitter. As more and more interactive applications (e.g. voice over IP, real time video conferencing and financial transactions) run in the Internet, high latency and jitter degrade application performance. There is a pressing need to design intelligent queue management schemes that can control latency and jitter; and hence provide desirable quality of service to users. We present here a lightweight design, PIE (Proportional Integral controller Enhanced), that can effectively control the average queueing latency to a reference value. The design does not require per-packet extra processing, so it incurs very small overhead and is simple to implement in both hardware and software. In addition, the design parameters are self-tuning, and hence PIE is robust and optimized for various network scenarios. Simulation results, theoretical analysis and Linux testbed results show that PIE can ensure low latency and achieve high link utilization under various congestion situations.
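The control law at the heart of PIE can be summarized in a few lines: on each update interval the drop probability is nudged by a proportional term (deviation of the current queuing delay from the target) plus an integral-like term (change in delay since the last update). The sketch below is a simplification that omits PIE's self-tuning of the gains and its burst allowance; the gain values are assumptions.

TARGET_DELAY = 0.015        # 15 ms reference queuing latency
ALPHA, BETA = 0.125, 1.25   # illustrative gains (PIE auto-tunes these)

class PieState:
    def __init__(self):
        self.drop_prob = 0.0
        self.old_delay = 0.0

    def update(self, current_delay_s):
        # Called once per update interval with the current queuing delay in seconds.
        step = (ALPHA * (current_delay_s - TARGET_DELAY)
                + BETA * (current_delay_s - self.old_delay))
        self.drop_prob = min(1.0, max(0.0, self.drop_prob + step))
        self.old_delay = current_delay_s
        return self.drop_prob

pie = PieState()
for delay in (0.005, 0.020, 0.040, 0.030, 0.015):
    print(round(pie.update(delay), 4))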
Conference Paper
Full-text available
There have been several studies in the past years that investigate the impact of network delay on multi-user applications. Primary examples of these applications are real-time multiplayer games. These studies have shown that high network delays and jitter may indeed influence the player's perception of the quality of the game. However, the proposed test values, which are often high, are not always representative for a large percentile of on-line game players. We have therefore investigated the influence of delay and jitter with numbers that are more representative for typical access networks. This in effect allows us to simulate a setup with multiplayer game servers that are located at ISP level and players connected through that ISP's access network. To obtain further true-to-life results, we opted to carry out the test using a recent first person shooter (FPS) game, Unreal Tournament 2003. It can, after all, be expected that this new generation of games has built-in features to diminish the effect of small delay values, given the popularity of playing these games over the Internet. In this paper, we have investigated both subjective perceived quality and objective measurements and will show that both are indeed influenced by even these small delay and jitter values.
Conference Paper
Full-text available
Traditional loss-based TCP congestion control (CC) tends to induce high queuing delays and perform badly across paths containing links that exhibit packet losses unrelated to congestion. Delay-based TCP CC algorithms infer congestion from delay measurements and tend to keep queue lengths low. To date most delay-based CC algorithms do not coexist well with loss-based TCP, and require knowledge of a network path’s RTT characteristics to establish delay thresholds indicative of congestion. We propose and implement a delay-gradient CC algorithm (CDG) that no longer requires knowledge of path-specific minimum RTT or delay thresholds. Our FreeBSD implementation is shown to coexist reasonably with loss-based TCP (NewReno) in lightly multiplexed environments, share capacity fairly between instances of itself and NewReno, and exhibits improved tolerance of non-congestion related losses (86% better goodput than NewReno in the presence of 1% packet losses).
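The essence of the delay-gradient signal can be sketched as follows: the sender backs off probabilistically when the RTT is rising, with probability 1 - exp(-g/G), where g is the measured RTT gradient and G a scaling constant. This is a simplified illustration of the idea (CDG additionally smooths the gradient and includes loss-tolerance heuristics); the value of G is an assumption.

import math
import random

G = 3.0   # gradient scaling constant (assumed)

def delay_gradient_backoff(prev_rtt_ms, curr_rtt_ms):
    # Back off with probability 1 - exp(-g/G) when the RTT gradient g is positive.
    gradient = curr_rtt_ms - prev_rtt_ms
    if gradient <= 0:
        return False                  # delay not growing: no congestion inferred
    return random.random() < 1.0 - math.exp(-gradient / G)

print(delay_gradient_backoff(40.0, 40.0))   # False: a flat RTT never triggers back-off
print(delay_gradient_backoff(40.0, 55.0))   # True with probability ~0.99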
Article
The Internet has recently been evolving from homogeneous congestion control to heterogeneous congestion control. Several years ago, Internet traffic was mainly controlled by the traditional RENO, whereas it is now controlled by multiple different TCP algorithms, such as RENO, CUBIC, and Compound TCP (CTCP). However, there is very little work on the performance and stability study of the Internet with heterogeneous congestion control. One fundamental reason is the lack of the deployment information of different TCP algorithms. In this paper, we first propose a tool called TCP Congestion Avoidance Algorithm Identification (CAAI) for actively identifying the TCP algorithm of a remote Web server. CAAI can identify all default TCP algorithms (e.g., RENO, CUBIC, and CTCP) and most non-default TCP algorithms of major operating system families. We then present the CAAI measurement result of about 30,000 Web servers. We found that only 3.31%-14.47% of the Web servers still use RENO, 46.92% of the Web servers use BIC or CUBIC, and 14.5%-25.66% of the Web servers use CTCP. Our measurement results show a strong sign that the majority of TCP flows are not controlled by RENO anymore, and a strong sign that the Internet congestion control has changed from homogeneous to heterogeneous.
Conference Paper
Real-time media applications often ignore ongoing congestion if there is no option to reduce the rate. These applications pose a threat to themselves and to other traffic. Reducing the transmission rate requires reducing the number of packets rather than spreading the transmission over a longer interval. Loss-based congestion control mechanisms are unsuitable for this requirement. Moreover, such rate reduction with popular video codecs, e.g. MPEG-4, is often problematic. This paper investigates the problems associated with real-time video transmission over the Internet. We investigate a rate control method minimizing delay and losses and report preliminary but promising results.
Article
The Real-time Transport Protocol (RTP) is used to transmit media in multimedia telephony applications, these applications are typically required to implement congestion control. This document describes the test cases to be used in the performance evaluation of such congestion control algorithms.
Article
The distribution of videos over the Internet is drastically transforming how media is consumed and monetized. Content providers, such as media outlets and video subscription services, would like to ensure that their videos do not fail, start up quickly, and play without interruptions. In return for their investment in video stream quality, content providers expect less viewer abandonment, more viewer engagement, and a greater fraction of repeat viewers, resulting in greater revenues. The key question for a content provider or a content delivery network (CDN) is whether and to what extent changes in video quality can cause changes in viewer behavior. Our work is the first to establish a causal relationship between video quality and viewer behavior, taking a step beyond purely correlational studies. To establish causality, we use Quasi-Experimental Designs, a novel technique adapted from the medical and social sciences. We study the impact of video stream quality on viewer behavior in a scientific data-driven manner by using extensive traces from Akamai's streaming network that include 23 million views from 6.7 million unique viewers. We show that viewers start to abandon a video if it takes more than 2 s to start up, with each incremental delay of 1 s resulting in a 5.8% increase in the abandonment rate. Furthermore, we show that a moderate amount of interruptions can decrease the average play time of a viewer by a significant amount. A viewer who experiences a rebuffer delay equal to 1% of the video duration plays 5% less of the video in comparison to a similar viewer who experienced no rebuffering. Finally, we show that a viewer who experienced failure is 2.32% less likely to revisit the same site within a week than a similar viewer who did not experience a failure.
Article
The article aims to provide part of the bufferbloat solution, proposing an innovative approach to AQM suitable for today's Internet called CoDel. Packet networks require buffers to absorb short-term arrival rate fluctuations. Although essential to the operation of packet networks, buffers tend to fill up and remain full at congested links, contributing to excessive traffic delay and losing the ability to perform their intended function of absorbing bursts. The Internet has been saved from disaster by a constant increase in link rates and by usage patterns. Over the past decade, evidence has accumulated that this whistling in the dark cannot continue without severely impacting Internet usage. Developing effective active queue management has been hampered by misconceptions about the cause and meaning of queues. Network buffers exist to absorb the packet bursts that occur naturally in statistically multiplexed networks.
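For concreteness, the sketch below captures the core of CoDel's dequeue-time control law: if the packet sojourn time stays above a small target for at least one interval, the queue enters a dropping state, and successive drops are spaced by interval / sqrt(count). It is a simplified illustration that omits several details of the full algorithm (e.g. the hysteresis used when re-entering the dropping state).

import math

TARGET = 0.005     # 5 ms acceptable standing queue
INTERVAL = 0.100   # 100 ms observation window

class CoDelState:
    def __init__(self):
        self.first_above_time = None   # when the sojourn time first exceeded TARGET
        self.dropping = False
        self.drop_count = 0
        self.drop_next = 0.0

    def on_dequeue(self, now, sojourn_time):
        # Return True if the packet being dequeued should be dropped.
        if sojourn_time < TARGET:
            self.first_above_time = None
            self.dropping = False
            return False
        if self.first_above_time is None:
            self.first_above_time = now
            return False
        if not self.dropping and now - self.first_above_time >= INTERVAL:
            self.dropping = True
            self.drop_count = 1
            self.drop_next = now + INTERVAL / math.sqrt(self.drop_count)
            return True
        if self.dropping and now >= self.drop_next:
            self.drop_count += 1
            self.drop_next = now + INTERVAL / math.sqrt(self.drop_count)
            return True
        return False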
Conference Paper
The Internet has recently been evolving from homogeneous congestion control to heterogeneous congestion control. Several years ago, Internet traffic was mainly controlled by the traditional AIMD algorithm, whereas Internet traffic is now controlled by many different TCP algorithms, such as AIMD, BIC, CUBIC, and CTCP. However, there is very little work on the performance and stability study of the Internet with heterogeneous congestion control. One fundamental reason is the lack of the deployment information of different TCP algorithms. In this paper, we first propose a tool called TCP Congestion Avoidance Algorithm Identification (CAAI) for actively identifying the TCP algorithm of a remote web server. CAAI can identify all default TCP algorithms (i.e., AIMD, BIC, CUBIC, and CTCP) and most non-default TCP algorithms of major operating system families. We then present, for the first time, the CAAI measurement result of the 5000 most popular web servers. Among the web servers with valid traces, we found that only 16.85%-25.58% of web servers still use the traditional AIMD, 44.51% of web servers use BIC or CUBIC, and 10.27%-19% of web servers use CTCP. In addition, we found that, for the first time, some web servers use non-default TCP algorithms, some web servers use some unknown TCP algorithms which are not available in any major operating system family, and some web servers use abnormal slow start algorithms. Our CAAI measurement results show a strong sign that the majority of TCP flows are not controlled by AIMD anymore, and a strong sign that the Internet congestion control has already changed from homogeneous to highly heterogeneous.
Article
Experimental data are presented that clearly demonstrate the scope of application of peak signal-to-noise ratio (PSNR) as a video quality metric. It is shown that as long as the video content and the codec type are not changed, PSNR is a valid quality measure. However, when the content is changed, correlation between subjective quality and PSNR is highly reduced. Hence PSNR cannot be a reliable method for assessing the video quality across different video contents.
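The metric discussed above is straightforward to compute: PSNR = 10 * log10(MAX^2 / MSE), with MAX = 255 for 8-bit video. The sketch below uses flat lists of pixel values purely for illustration.

import math

def psnr(reference, distorted, max_value=255):
    # Mean squared error between the two frames, then the usual log-ratio.
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")          # identical frames
    return 10 * math.log10((max_value ** 2) / mse)

ref  = [52, 55, 61, 66, 70, 61, 64, 73]
dist = [50, 57, 60, 68, 69, 63, 62, 74]
print(round(psnr(ref, dist), 2))     # about 43.5 dB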
F. Baker and G. Fairhurst, "IETF Recommendations Regarding Active Queue Management," IETF Best Current Practice, 2014.

T. Szigeti and C. Hattingh, "Quality of Service Design Overview," Cisco Press, 2004.

T. Hoeiland-Joergensen, P. McKenney, D. Taht, J. Gettys, and E. Dumazet, "FlowQueue-CoDel (work in progress)," IETF, 2014.