Efficient Internet Traffic Delivery over Wireless Networks
Rami G. Mukhtar, Stephen V. Hanly and Lachlan L. H. Andrew
June 16, 2003
The high demand for wireless Internet connectivity has driven the de-
velopment of highly efficient radio link technologies. However, their per-
formance can be compromised by inadvertent interactions with the higher
layer TCP flow control protocol. If we are to maximize the performance
of wireless links, then we must ensure that mechanisms operating at every
layer of the protocol stack interact efficiently. In this article we provide a
brief tutorial of some of these radio link enhancements. We then outline
how higher layer flow control protocols should behave, and outline techniques
for taming the behavior of TCP, to ensure that the performance of
lower layer enhancements is not compromised.
Randomness is an inherent characteristic of wireless communications. It is
a feature that stems from a combination of physical phenomena, including
specular reflections, and multiple radio propagation paths to the receiver.
Due to the mobility of users and/or other objects in their vicinity, these phe-
nomena will induce time variations in the quality of the channel between
transceivers, which is often referred to as fading. A great deal of wireless
research is focused on mitigating these effects. Indeed, enormous progress
has been achieved over the past 50 years in this area, exploiting techniques
from coding, signal processing and information theory. Modern wireless
cellular telephony is built upon these advances.
The demand for wireless Internet access is growing, and new perfor-
mance requirements are emerging. Distinct from the fixed bandwidth and
latency requirements of traditional voice traffic, the majority of Internet traf-
fic is elastic. Elastic traffic is a broad class that encompasses many of the
most popular applications used on the Internet today, including the World
Wide Web (WWW), File Transfer Protocol (FTP), Peer-to-Peer clients and
Electronic Mail. In contrast to voice traffic, there is a paramount objective
for elastic traffic: minimization of the total file transmission time. For many
applications, including the WWW, latency is still an important issue, but
there is no longer a hard constraint on latency, as there was for voice.
Figure 1: Overview of the Internet Stack Architecture.
The hardware, software and protocols that facilitate data transmission
over the Internet can be categorized into distinct layers, as depicted in Fig-
ure 1. Data traffic generated by applications is passed to the transport layer.
In the case of the Internet, this is most often the Transmission Control Pro-
tocol (TCP). TCP is an end-to-end protocol that controls the rate at which
packets from a given source are injected into the network (i.e. flow con-
trol), with the objective of maximizing the performance of the network. The
link layer multiplexes multiple sources over a single link, to deliver data bits
efficiently and reliably over the physical channel.
The elastic nature of the majority of Internet traffic has motivated the
development of adaptive wireless transmission techniques, which are usu-
ally implemented at the link layer. These techniques obtain a diversity gain
by exploiting variations of the wireless channel quality both over time and
over the user population. Overall throughput is increased, at the cost
of variable packet transmission latency. The user perceived performance of
these mechanisms is highly dependent on a combination of the application
requirements and the behavior of other mechanisms operating in different
layers of the Internet protocol stack. In essence, designing an efficient wire-
less network is a multi-layer discipline.
This article provides a brief tutorial of some of the most recent and
popular of these link layer enhancements. It is then seen in Section 3 that the
performance of these optimizations can be jeopardized by the behavior of
flow control mechanisms operating at the transport layer, such as TCP.
One popular solution to this problem is receiver side flow control, which is
discussed in Section 4. It is demonstrated how an algorithm suitable for this
type of deployment can be used to tame the behavior of TCP. This ensures
that it does not interfere with the performance of these lower layer
optimizations, and enhances the performance perceived by the user.
Figure 2: Link Layer Scheduler with per user queueing.
2 Link Layer Optimizations
2.1 Packet Transmission Scheduling
Packet scheduling can considerably increase throughput by exploiting the
temporal fluctuations in users’ channel qualities. Consider the downlink
mode of operation, where a mobile user is receiving data from a central base
station. Typically a base station will simultaneously service a number of
users. At any one time instant some users will enjoy better channel condi-
tions than others. In fact, it has been shown that the performance of wireless
systems can be enhanced by using multiple antennas to artificially induce
temporal fluctuations in channel quality. If a base station is limited to
transmitting data packets in a particular order, for example in the order that
they reside in a queue, it will be unable to capitalize on the temporal diver-
sity of the users’ channel qualities. This has led to proposals that segregate
users’ packets into separate buffers, providing the opportunity for a sched-
uler to select the optimal user to transmit to at an instant of time [2, 3, 4].
Figure 2 is a schematic depiction of a packet scheduler for k users. Each
user, i, has a queue of packets awaiting transmission, of length qi(t). Typi-
cally, a scheduler will decide the next queue to be serviced based on a com-
bination of measurements of the users’ current signal to noise ratios, a mea-
sure of the users’ mean channel rates (calculated over some time interval),
the users’ priorities, and the current size of the users’ queues. For exam-
ple, if a user temporarily suffers reduced channel quality, then the scheduler
might serve that user less frequently, giving preference to users with a higher
channel quality. When the user enjoys better channel conditions, the channel
rate to that user will then be increased, making up for the reduced capacity
earlier. These schemes count on temporal fluctuations in channel quality.
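As a concrete illustration of such a scheduler, the sketch below combines the quantities listed above (instantaneous rate, per-user priority, and queue backlog) into a single selection metric. The combining rule and all names are invented for illustration; they are not taken from the article or from any standard:

```python
def schedule(queues, rates, priorities):
    """Pick the user to serve in the next transmission slot.

    queues:     list of queue lengths q_i(t) (packets awaiting transmission)
    rates:      list of instantaneous achievable rates (from SNR measurements)
    priorities: list of per-user priority weights
    Returns the index of the chosen user, or None if all queues are empty.
    """
    best, best_metric = None, -1.0
    for i, (q, r, w) in enumerate(zip(queues, rates, priorities)):
        if q == 0:                       # an empty queue cannot be scheduled
            continue
        metric = w * r * min(q, 10)      # favour good channels and backlogged users
        if metric > best_metric:
            best, best_metric = i, metric
    return best

# Example: user 1 has a non-empty queue and the best weighted metric.
print(schedule([3, 5, 0], [1.0, 2.5, 3.0], [1, 1, 1]))  # -> 1
```

Note that user 2, despite the best instantaneous rate, is skipped because its queue is empty, which is exactly the coupling to transport layer behavior discussed below.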
The simplest channel-aware scheduling policy selects the user with the
best channel quality. Although this scheduler has been shown to achieve the
maximum possible aggregate rate, it can lead to throughput starvation
for some users, and long term unfairness. To remedy this problem, practical
schedulers often take into account the average throughput a user receives.
Users that suffer from a lower long term average throughput are preferentially
scheduled. This leads to better long term fairness amongst the
users, at a cost of reduced overall capacity gain. Another approach is to
deem the base station’s transmission time (as opposed to throughput) to be
the resource that must be apportioned. Under this scheme, in the long term,
users are given an equal share of the base station’s transmission time. The
base station makes its scheduling decision based on the users’ instantaneous
channel qualities and the long term proportions of transmission time that they have received.
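The throughput-fair (proportional fair) rule described above can be written down compactly: serve the user whose current rate is largest relative to its smoothed average throughput, then update the averages. The smoothing constant `alpha` and the function names are illustrative assumptions, not the article's:

```python
def pf_select(inst_rates, avg_rates):
    """Proportional fair selection: serve the user whose instantaneous
    rate is largest relative to its long term average throughput."""
    return max(range(len(inst_rates)),
               key=lambda i: inst_rates[i] / avg_rates[i])

def pf_update(avg_rates, served, inst_rates, alpha=0.05):
    """Exponentially weighted average of each user's throughput: the
    served user credits its current rate, all others credit zero."""
    return [(1 - alpha) * R + alpha * (inst_rates[i] if i == served else 0.0)
            for i, R in enumerate(avg_rates)]

# A user with a temporarily poor channel but a starved average
# can still win the slot:
print(pf_select([1.0, 4.0], [0.1, 2.0]))  # -> 0, since 1.0/0.1 > 4.0/2.0
```

The ratio metric is what trades aggregate capacity for long term fairness: a user's priority rises automatically as its average throughput falls.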
Scheduling decisions can only be made optimally if each of the queues
has packets awaiting transmission. Emptying of a user’s packet buffer at the
base station, due to failure of the transport layer flow control to maintain a
supply of packets, can lead to suboptimal scheduling. Thus, it is important
that the transport layer flow control mechanism does not inadvertently cause
a user’s buffer to drain by unnecessarily holding packets up at the source.
2.2 Link Layer Rate Adaptation
Current wireless standards ensure reliable delivery by using link layer
mechanisms to adapt their transmit rate to the channel quality. Nanda et al.
provide an overview of current link layer techniques, and how they are
applied in different cellular standards today. Incremental Redundancy (IR)
techniques effectively vary the code rate on a slot-by-slot basis,
tracking fluctuations in the channel quality. Link Adaptation (LA) techniques
measure the average channel quality (over several transmission slots) and
select a modulation and coding scheme to match it.
Hence, IR provides a mechanism for faster rate adaptation than LA.
The net effect of these interacting rate adaptation mechanisms is that
the time series of successful packet transmission times, as perceived by a
transport layer flow control scheme, will be random and non-stationary. Its
behavior will vary on two distinct time scales. Statistics, such as the mean
transmission rate, will vary on a timescale much slower than a packet trans-
mission time, according to the choice of LA and packet scheduling policy.
On a much faster time scale, packet transmission times will themselves be
random, as a result of link layer retransmissions and IR rate adaptation.
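This two-timescale behavior can be mimicked with a toy model: a slowly varying nominal rate (standing in for LA and scheduling decisions) plus per-packet retransmission randomness (standing in for ARQ and IR). All rates, periods and probabilities below are invented for illustration only:

```python
import random

random.seed(1)

LA_RATES = [0.5, 1.0, 2.0]   # Mbit/s, hypothetical LA operating points
LA_PERIOD = 100              # packets between LA decisions (slow timescale)
PKT_BITS = 12000             # one 1500-byte packet

def service_times(n_packets):
    """Return a list of per-packet transmission times as seen by the
    transport layer: non-stationary mean plus fast per-packet jitter."""
    times = []
    rate = random.choice(LA_RATES)
    for k in range(n_packets):
        if k % LA_PERIOD == 0:                  # slow: LA re-selects the rate
            rate = random.choice(LA_RATES)
        attempts = 1 + (random.random() < 0.2)  # fast: 20% need one retransmit
        times.append(attempts * PKT_BITS / (rate * 1e6))
    return times

ts = service_times(500)
print(min(ts), max(ts))  # spread reflects both timescales
```

Even this crude model exhibits what matters to TCP: the apparent round trip time of a packet is random, and its mean drifts on a timescale much longer than a single packet transmission.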
3 Transport Layer Objectives
3.1 TCP: An Overview
TCP is the Internet’s most popular transport layer delivery service. It pro-
vides reliable data delivery to the application layer. TCP automatically or-
ganizes bytes into packets of an appropriate size and then ensures that they
are reliably delivered to their destination. TCP boasts several features, in-
cluding automatic packet retransmission and reordering, transmission error
detection, appropriate packet size discovery, and flow control. The last fea-
ture, flow control, is particularly relevant to this article. TCP’s flow control
algorithm ultimately determines how much data is buffered in the network,
in-flight between the source and destination. As highlighted in Section 2,
this is important in determining whether lower layer optimizations can oper-
ate effectively. Before further elaborating on what would be desirable from
a transport layer flow control system, let us briefly examine some pertinent
features of TCP’s flow control algorithm.
TCP is a window based flow control algorithm. That means that the
number of unacknowledged bytes that are sent into the network is limited
to at most a particular window size. The TCP window size is set to the
minimum of the congestion window (CWND), which is computed by the
sender, and the advertised window (AWND), which is set by the receiver.
The purpose behind AWND will be explained later.
At the sender, the TCP flow control algorithm attempts to mitigate conges-
tion within the network by controlling the value of CWND. The control
algorithm is in fact a combination of several distinct algorithms, which are
activated at various stages of a TCP connection’s lifetime. The two dom-
inant algorithms are: slow start and congestion avoidance. The slow start
algorithm increases the value of CWND at an exponential rate, doubling the
window size every round trip time. Starting at one packet, it usually terminates
when it fills the network pipe and a packet loss occurs. The termination
of the slow start algorithm is followed by congestion avoidance. Congestion
avoidance increases the value of CWND linearly, at a rate of approximately
one packet per round trip time, probing for any spare capacity. Eventually, a
buffer overflow will result, and a packet loss will occur. This prompts TCP
to halve its window size. In the simplest sense, congestion avoidance is a
form of the Additive Increase Multiplicative Decrease (AIMD) algorithm,
with the decrease factor set to 0.5. Under normal operation, congestion
avoidance should dominate the evolution of the value of CWND. This
causes a sawtooth evolution of the window size, which is all too familiar to
people who have worked with the protocol. See Figure 3 for an illustration
of this behavior.
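A minimal round-by-round simulation reproduces the sawtooth just described. The pipe size, the packet-count units, and the simplification that a loss occurs exactly when CWND exceeds the pipe are our own assumptions for illustration:

```python
def cwnd_trace(pipe_capacity=32, rtts=40):
    """Toy round-by-round trace of TCP's congestion window (in packets).

    Slow start doubles CWND each round trip time; once CWND exceeds the
    pipe capacity a loss is assumed and CWND is halved (multiplicative
    decrease); congestion avoidance then adds one packet per round trip
    time (additive increase), producing the familiar sawtooth.
    """
    trace, cwnd, slow_start = [], 1, True
    for _ in range(rtts):
        trace.append(cwnd)
        if cwnd > pipe_capacity:       # buffer overflow: halve the window
            cwnd = max(cwnd // 2, 1)
            slow_start = False
        elif slow_start:
            cwnd *= 2                  # exponential growth, doubling per RTT
        else:
            cwnd += 1                  # probe for spare capacity
    return trace

print(cwnd_trace())
```

The trace climbs 1, 2, 4, ... during slow start, overshoots once, and thereafter oscillates linearly between roughly half the pipe size and the loss point.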
Let us consider the case when there is a single bottleneck link for a