INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING
Int. J. Satell. Commun. Network. (2016)
Published online in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/sat.1185
Using SPDY to improve Web 2.0 over satellite links
Andrea Cardaci1, Luca Caviglione2,*, Erina Ferro1 and Alberto Gotta1
1Institute of Information Science and Technologies (ISTI), National Research Council of Italy (CNR), 56124 Pisa, Italy
2Institute of Intelligent Systems for Automation (ISSIA), National Research Council of Italy (CNR), Via de Marini 6,
16149 Genoa, Italy
SUMMARY
During the last decade, the Web has grown in terms of complexity, while the evolution of the HTTP (Hypertext
Transfer Protocol) has not experienced the same trend. Even if HTTP 1.1 adds improvements like persistent
connections and request pipelining, they are not decisive, especially in modern mixed wireless/wired networks,
often including satellites. The latter play a key role for accessing the Internet everywhere, and they are one of
the preferred methods to provide connectivity in rural areas or for disaster relief operations. However, they suffer
from high latency and packet losses, which degrade the browsing experience. Consequently, the investigation of
protocols mitigating the limitations of HTTP, also in challenging scenarios, is crucial both for the industry and the
academia. In this perspective, SPDY, which is a protocol optimized for the access to Web 2.0 contents over fixed
and mobile devices, could be suitable also for satellite links. Therefore, this paper evaluates its performance when
used both in real and emulated satellite scenarios. Results indicate the effectiveness of SPDY if compared with
HTTP, but at the price of a more fragile behavior in the presence of errors. Besides, SPDY can also reduce
the transport overhead experienced by middleboxes typically deployed by service providers using satellite links.
Copyright © 2016 John Wiley & Sons, Ltd.
Received 15 February 2016; Revised 11 April 2016; Accepted 19 May 2016
KEY WORDS: SPDY; Web 2.0; HTTP/HTTP1.1; satellite networking; performance-enhancing proxies
1. INTRODUCTION
In less than a decade, the World Wide Web completely changed its shape. Nowadays, it is highly
dynamic, also merged with services encouraging social interaction and the sharing of multimedia
materials. Besides, it is also used to remotely access full-featured applications, as it happens in the
software-as-a-service paradigm [1]. Such an evolution, commonly labeled Web 2.0, is still built on the original
architecture of pages composed of two types of objects: the main object containing the HTML and a variable
number of linked inline objects. Supporting the more interactive and content-rich nature of Web 2.0 has
culminated in an explosion of inline objects, both in size and number [2]. As a consequence, the original version
of HTTP fails to handle Web contents of such increased complexity; hence, updating it is mandatory.
To this aim, HTTP 1.1 introduces improvements like persistent connections and request pipelining,
but they are not decisive, still resulting in poor performance in terms of page loading time
(PLT) for many modern Web destinations [3]. Even if many workarounds in the management of
objects and engineering of pages have been proposed, the root of the problem still lies within the
HTTP architecture.
*Correspondence to: Luca Caviglione, Institute of Intelligent Systems for Automation (ISSIA), National Research Council
(CNR), Via de Marini 6, I-16149 Genoa, Italy. E-mail: luca.caviglione@ge.issia.cnr.it

Another important aspect influencing the performance of HTTP and Web 2.0 concerns the intrinsically
mobile nature of the Internet. In fact, modern network architectures are highly heterogeneous, and they
mix wired trunks with different types of wireless accesses, such as IEEE 802.11, Long Term Evolution,
and satellites. The latter are still the preferred tool to provide connectivity for public protection and
disaster relief purposes, to bring the Internet in rural areas, or to reduce the digital divide in developing
countries [4–6]. Unfortunately, they are often characterized by high delays (e.g., 260 ms one way,
for the case of geostationary satellites) or severe packet losses. As a consequence, sophisticated Web
2.0 services, such as websites embedding plug-ins for social interactions or mash-up contents, might
not be easily accessed via satellite channels, at least not with a proper quality of experience (QoE) [7].
Additionally, improvements introduced by HTTP 1.1, such as the more flexible pipelining architecture,
could even decrease the performance because they inflate the overall round-trip time (RTT) [8].
Thus, new protocols are needed to enhance the behavior of Web 2.0 contents when accessed over
satellite channels. For instance, a relevant amount of work has been carried out on the transmission
control protocol (TCP); see, for example, [9] and references therein. For the case of Web 2.0, one of
the most interesting approaches has been proposed by Google, and it is named SPDY [10]. Put briefly,
it resembles an evolution of the HTTP, and it has been engineered to counteract issues and performance
degradations typical of links used by mobile devices, such as cellular radio and IEEE 802.11. Besides,
SPDY also offers some techniques to better handle Web 2.0 contents, eventually relieving website
developers from implementing ad hoc solutions. Its preliminary assessment over wired/wireless links
shows improvements in the range of 27–60% [11, 12], while more recent investigations over cellular
radio underline a reduced performance, mainly because of the complex cross-layer nature of the carrier
[13]. However, in simpler settings, some of its features (e.g., the native support for header compression)
still lead to relevant improvements [14].
In this perspective, SPDY would also be able to improve the access to Web 2.0 via satellite channels.
In fact, our past works considering mixed wireless local area network/satellite accesses through emu-
lated settings indicate non-negligible performance gains [15–17]. Additionally, SPDY is a very mature
technology, thus offering a stable benchmark of the fast evolving and still uncertain HTTP 2.0–Web
2.0 panorama.
Therefore, this paper investigates SPDY to access Web 2.0 contents via a production quality Inter-
net service provider (ISP) offering connectivity through a GEO satellite. To take into account wider
sets of use cases, we also emulate some behaviors, such as longer delays and severe packet losses.
Nevertheless, as highlighted in [13], realistic deployments could have some pitfalls not revealed by
simulated/emulated experimental campaigns. Thus, trials in ISP environments are mandatory to have
a correct assessment of the protocol.
To the best of the authors' knowledge, Caviglione et al. [18] is the only prior work considering the
performance of SPDY when used on a real satellite scenario. However, it mainly aims at providing
a general analysis of Web-related protocols used over satellites, as well as the impact of bandwidth
on demand schemes and specific issues of the HTTP. Instead, in this paper, we solely concentrate on
SPDY when used on a real satellite ISP, and its main contributions are as follows: (i) to showcase
the SPDY protocol as a tool to mitigate the performance degradations in place of network-centric
mechanisms, such as middleboxes; (ii) to evaluate the impact of its reduced transport complexity over
performance-enhancing proxies (PEPs); (iii) to discuss the development of a set of reusable tools to
conduct measurements campaigns; and (iv) to provide a performance evaluation of SPDY over a real
satellite ISP.
The remainder of the paper is structured as follows: Section 2 introduces the most popular Web
2.0 optimizations, as well as the SPDY protocol. Section 3 showcases the test bed, while Section 4
discusses the methodology used to perform tests. Section 5 presents the experimental results, and lastly,
Section 6 concludes the paper.
2. WEB 2.0 OPTIMIZATIONS AND SPDY
As said, HTTP 1.1 has been introduced to optimize the performance of Web 2.0, but it still requires
additional design efforts on contents to achieve satisfying results. On the contrary, SPDY has been engi-
neered to bring greater benefits without altering the architecture of the Web, thus becoming adopted in
many large sites (e.g., Twitter and Facebook).
In this section, we quickly review the most popular performance enhancements for Web 2.0 together
with their limitations. Then, we showcase SPDY as a possible solution to group optimizations within
a single entity. In the following, unless ambiguity arises, we refer to HTTP 1.1 simply as HTTP.
2.1. Pipelining, resource inlining and cascading style sheets image sprites
One of the most limiting aspects of HTTP, especially when used over high-latency channels, was its
inability to handle more than one request per RTT. Such a bottleneck has been removed in HTTP
1.1 by enabling requests to be pipelined (this practice is often defined as ‘pipelining’). Unfortunately,
because the flow of requests/responses must be ordered, head of line (HOL) blocking issues could
arise. In more detail, HOL blocking happens when an object blocks the whole queue, potentially postponing the
rendering of the page. Pipelining also has the following additional fragilities: (i) POST
methods cannot be easily parallelized, because proper locks are needed to avoid hazards caused by a
GET needing the response of a prior POST [19], and (ii) since pipelining support is optional, a client is unable to know a
priori whether a server offers such a feature, thus requiring additional RTTs for a discovery phase.
When both endpoints support pipelining, the reference HTTP implementation imposes a maximum
of two connections per domain [19], which can be too tight a constraint for complex contents.
Hence, virtually all browsers violate the protocol specification and use six concurrent connections
(increased to eight in the case of Internet Explorer). To retrieve data more aggressively
(e.g., images, scripts, and style sheets) and to bypass the limitations of the standard protocol specification,
Web developers also distribute inline objects over multiple servers.
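To make the mechanics concrete, the following Python sketch pipelines two GET requests over a single connection; the destination is only a placeholder, and many real servers simply serialize (or refuse) pipelined requests, which is precisely the HOL issue discussed above.

```python
import socket

HOST = "example.com"  # placeholder destination; any HTTP/1.1 server will do

# Two GET requests written back-to-back on the same connection: this is
# pipelining. Responses must come back in the same order, so a slow first
# object delays the second one (head-of-line blocking).
requests = (
    f"GET / HTTP/1.1\r\nHost: {HOST}\r\n\r\n"
    f"GET /favicon.ico HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
).encode()

with socket.create_connection((HOST, 80), timeout=10) as sock:
    sock.sendall(requests)
    received = b""
    while chunk := sock.recv(4096):
        received += chunk

print(len(received), "bytes received for the two pipelined responses")
```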
To increase the throughput in terms of resources per connection, it is common to embed an object
within another one, for example, injecting client-side scripts directly in the HTML. Such a mechanism
is defined as resource inlining, and it is very effective if objects are smaller than the size of the HTTP
header. As a drawback, it badly impacts Web caches, as a change in the embedded object also
invalidates the parent.
A similar approach is the cascading style sheets (CSS) image sprites, where several images are
merged into a single larger one. Then, rules defined within the CSS are used to split the content back
to the original form. This method leads to a poor maintainability of sources; therefore, it is seldom
utilized in high-quality websites.
2.2. Compression and tuning of transmission control protocol
To better take advantage of the available bandwidth, HTTP supports compression of objects and head-
ers through content type negotiation: a client declares its ability to receive compressed
data via the Accept-Encoding: gzip header (even if references [20] and [21] show that only about 66% of
sites actually send compressed responses).
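As an illustration of this negotiation, the short Python sketch below requests a page with Accept-Encoding: gzip and reports whether the server actually replied with a compressed body; the URL is a placeholder.

```python
import gzip
import urllib.request

URL = "http://example.com/"  # placeholder destination

# Declare support for compressed bodies; urllib does not decompress for us,
# so the Content-Encoding header tells whether the server honored the offer.
request = urllib.request.Request(URL, headers={"Accept-Encoding": "gzip"})
with urllib.request.urlopen(request, timeout=10) as response:
    body = response.read()
    encoding = response.headers.get("Content-Encoding", "identity")

if encoding == "gzip":
    print(f"compressed: {len(body)} bytes on the wire, "
          f"{len(gzip.decompress(body))} bytes decoded")
else:
    print(f"uncompressed response of {len(body)} bytes")
```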
As regards possible cross-layer optimizations, the efficiency of HTTP can be further improved by
properly tuning some parameters of the transport layer. A typical tweak is enlarging the initial congestion
window (ICW) of the TCP so as to avoid performance degradations over short-lived connections. In
particular, to mitigate the impact of large RTTs, setting the ICW to roughly 15 KB is nowadays a quite common
practice [22].
2.3. SPDY
Compared with HTTP, SPDY uses a more rational design. In more detail, enhancements are enclosed
in the protocol, rather than scattered over multiple entities. Also, they do not require content developers
to tweak the architecture of the page or make adjustments to inline objects. To limit complexity,
as well as to use the available bandwidth more effectively, SPDY relies upon a single-
connection architecture, which is based on the following considerations: (i) HTTP-related transport
flows are mostly bursty and short-lived, thus causing major overheads in terms of additional RTTs to
retrieve a page, and (ii) avoiding the continuous creation of new connections allows the congestion
window of the TCP to grow faster, while also maintaining its 'state'.
An important design choice of SPDY is that it must guarantee the backward compatibility with the
HTTP semantic. Consequently, it does not aim at replacing the HTTP; rather, it provides a multiplexing
and priority service to be funneled over a single TCP connection, called session. Within a session,
a sequence number, defined as streamid, identifies each flow carrying encapsulated protocol mes-
sages. Finally, to solve the HOL blocking issues and to avoid the need of techniques like CSS inlining,
resources are multiplexed. In essence, the four major features of SPDY are as follows:
prioritized requests: each stream has an assigned priority, allowing the User-Agent acting in the
browser to retrieve objects according to relevance criteria, for example, to make a page readable
as quickly as possible, even if incomplete;
compressed headers: SPDY only enforces header compression, while offering the payload
processing as optional, for example, to prevent the duplication of such a procedure;
secure sockets: to provide authentication and encryption, SPDY solely uses transport layer secu-
rity. Also, to save additional RTTs, as well as achieving independence from the application layer,
SPDY endpoints must be compliant with Next Protocol Negotiation;
server pushed streams: because the server knows in advance objects needed to complete a page,
SPDY can send them ‘proactively’. In this manner, it can populate the browser’s cache and
mitigate latencies because of additional requests.
We underline that, similarly to HTTP, the behavior of SPDY highly depends on the underly-
ing transport layer. Therefore, to improve its performance, reference [23] suggests an ICW of at least ten
times the maximum segment size, that is, roughly 15 KB. Luckily, ICW = 10 segments is the default choice in the
most popular operating systems (e.g., Linux adopts such a value since kernel 2.6.38). Nevertheless, the
single-connection architecture of SPDY would benefit from a congestion window that keeps growing without sud-
denly shrinking. Hence, to avoid the congestion window being reset after idle periods, which would trigger a new
slow-start phase [24], it is common practice to set the tcp_slow_start_after_idle kernel parameter to 0.
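On Linux, these tweaks are typically applied with sysctl and iproute2; the following sketch merely wraps the corresponding commands (root privileges required), with the gateway address and interface name as placeholders for the actual client configuration.

```python
import subprocess

# Typical Linux tuning matching the description above (root privileges
# required). Gateway and interface are placeholders for the real client setup.
COMMANDS = [
    # Do not shrink the congestion window of an idle connection: this keeps
    # the single SPDY session 'warm' between bursts of requests.
    ["sysctl", "-w", "net.ipv4.tcp_slow_start_after_idle=0"],
    # Initial congestion window of 10 segments (~15 KB) on the default route,
    # as suggested in [23].
    ["ip", "route", "change", "default", "via", "192.0.2.1",
     "dev", "eth0", "initcwnd", "10"],
]

for command in COMMANDS:
    subprocess.run(command, check=True)
```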
To further improve the bandwidth usage, SPDY offers a session-wide SETTINGS message to
negotiate parameters between the endpoints. For instance, a typical exchange lets the client
communicate the size of the ICW to the remote server.
3. DEVELOPMENT OF THE TEST BED
To have an effective comparison between HTTP and SPDY, we want to retrieve real Web destinations
via a production quality satellite ISP. To this aim, we had to face the following challenges: (i) the
number of SPDY-enabled services is still modest and mainly dealing with online social networks sites;
(ii) the extreme degree of personalization/mutability of Web contents introduces high variability, which
can reduce the accuracy of the overall measurements; (iii) to have a sufficient statistical relevance, we
need to properly automate repeated data collection and its organization; and (iv) to precisely understand
whether SPDY could be a replacement for HTTP over high delay links, we need to alter some structural
parameters of the ISP.
To address requirements (i)–(iv), we developed an instrumented client and an SPDY proxy using
open-source components. Such tools can also be used to switch to an emulated platform if needed, for
instance, to add delays and errors. Figure 1 depicts the overall test bed deployed over the satellite ISP.
In more detail, the client has been built on top of Google Chrome, which natively supports both
HTTP and SPDY and offers a comprehensive set of debugging and scripting features. To store traces of
pages, we used the HTTP Archive (HAR) file format, that is, a JavaScript Object Notation document
collecting a variety of statistics, such as the page size and time stamps for each inline object. Repeated
trials have been automated via proper shell scripts and a Node.js module. To collect data for
packet-level analysis, the related traffic has been captured with the tcpdump network sniffer and stored
in different trace files.
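Our campaign relied on shell scripts and a Node.js module; a functionally similar automation loop is sketched below in Python for illustration. The capture interface and the HAR-capturing command (modeled on the tool cited in [31]) are assumptions that depend on the actual installation.

```python
import subprocess
import time
from pathlib import Path

SITES = ["http://en.wikipedia.org", "http://www.reddit.com"]  # excerpt of Table I
RUNS = 20  # repetitions per site, as in our campaign

def run_trial(url: str, run: int, out_dir: Path) -> None:
    """Sniff packets with tcpdump while a HAR trace of the page load is taken."""
    pcap, har = out_dir / f"{run:02d}.pcap", out_dir / f"{run:02d}.har"
    # The capture interface is a placeholder and tcpdump needs privileges.
    sniffer = subprocess.Popen(["tcpdump", "-i", "eth0", "-w", str(pcap)])
    time.sleep(1)  # let the sniffer settle before the page load starts
    try:
        # Placeholder invocation of the HAR-capturing tool driving Chrome
        # (see [31]); the exact flags depend on the installed version.
        subprocess.run(["chrome-har-capturer", "-o", str(har), url], check=True)
    finally:
        sniffer.terminate()
        sniffer.wait()

for site in SITES:
    out = Path("traces") / site.split("//", 1)[1]
    out.mkdir(parents=True, exist_ok=True)
    for run in range(RUNS):
        run_trial(site, run, out)
```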
As regards the proxy, it has been engineered to cope with the high degree of personalization and
mutability of Web 2.0 contents. In fact, because of user interactions such as comments/posts displayed
in real time and changing advertisements, the same page can vary considerably between two
adjacent fetches, even when performed very close in time. Therefore, retrieving Web pages directly from the
server would lead to incoherent measurements. To remove any possible source of non-determinism,
all the contents used for the performance assessment are cached via Web page replay [25]. In essence,
it is a tool that takes a snapshot of a page and then acts both as a Web and a DNS server during the
replaying phase. We underline that using a proxy also enables delivering via SPDY sites hosted on
servers that do not natively support this protocol.
Figure 1. The test bed/protocol architecture used in our trials. TCP, transmission control protocol; PEP,
performance-enhancing proxy; IP, Internet protocol; ISP, Internet service provider; HAR, HTTP Archive;
SCPS-TP, Space Communications Protocol Specification-Transport Protocol.

Figure 2. System components of the HTTP proxy based on HTTP/TCP splitting present in our test bed. SCPS-TP,
Space Communications Protocol Specification-Transport Protocol.

To have a thorough understanding of the interaction of SPDY with the protocol accelerators deployed
within the ISP network, we also needed to modify the RTT, to introduce an arbitrary packet loss, and to
bypass PEPs. Even though we had dedicated access, we were not able to alter the setup without impacting
the normal functioning of the ISP. Therefore, for the round of tests without PEPs, we used netem
and dummynet, properly attached and configured on the network interface controllers of the client and of the
proxy, respectively.
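For reference, an emulated leg with characteristics similar to those used in our trials can be obtained on the client side with a netem configuration along the following lines (Linux, root required; the interface name and figures are only examples, and dummynet plays the symmetric role on the proxy).

```python
import subprocess

IFACE = "eth0"           # placeholder client interface
ONE_WAY_DELAY_MS = 260   # roughly half of the 520 ms RTT scenario
LOSS_PERCENT = 1         # packet loss used in the lossy-link trials

# Attach a netem qdisc adding delay and random loss (Linux, root required).
subprocess.run(["tc", "qdisc", "replace", "dev", IFACE, "root", "netem",
                "delay", f"{ONE_WAY_DELAY_MS}ms",
                "loss", f"{LOSS_PERCENT}%"], check=True)

# To restore the interface once the trial is over:
#   tc qdisc del dev eth0 root
```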
The satellite ISP exploits the iDirect platform. Access terminals are iDirect Evolution X3 Satellite
Routers offering HTTP and TCP acceleration combined in a single device. TCP is enhanced via a
standard connection-splitting mechanism, while HTTP relies upon the 'Split HTTP' proxy architecture,
as depicted in Figure 2.
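To give an intuition of what connection splitting means, the toy Python relay below terminates the client connection locally and opens an independent one toward a configurable upstream; it is only a conceptual sketch under assumed addresses and does not reflect the actual iDirect/SCPS-TP implementation.

```python
import socket
import threading

LISTEN = ("0.0.0.0", 8080)       # side facing the browser
UPSTREAM = ("192.0.2.10", 8080)  # placeholder address of the peer proxy

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from one half-connection to the other."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

# Each accepted connection is terminated locally and relayed over a separate
# TCP connection, so the two halves run independent congestion control.
server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LISTEN)
server.listen()
while True:
    client, _ = server.accept()
    upstream = socket.create_connection(UPSTREAM)
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
```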
In more detail, the proxy is composed of two entities, namely, the remote proxy (RP) and the local
proxy (LP), placed at the borders of the satellite channel so as to isolate the high-
latency link. We point out that RP and LP should not be confused with the SPDY proxy, which is
placed 'outside' the ISP and only used to make our tests coherent and comprehensive. While the RP is
a classical HTTP proxy, the LP has an enriched set of functionalities, ranging from the processing
of JavaScript code and the management of dynamic contents to simpler ones, such as the retrieval of
nested inline objects (limited to single-page dependencies).
In essence, to mitigate the impact of the satellite link, the RP and LP split the HTTP conversations
and transfer the Web contents by using proper compression mechanisms and per-object scheduling
disciplines. To recap, for each page request, the following steps are performed:
1. the browser requests a page by issuing a GET;
2. the RP traps the request and extracts the related URL, which is forwarded to the LP;
3. the LP retrieves the main object, as well as the linked inline objects. If any, additional
information like cookies is also acquired and locally stored;
4. all data are compressed and delivered towards the original requestor via an ad hoc protocol. A
proper scheduling could be specified and applied to the resulting stream; and
5. as soon as the RP receives the main object, it determines the amount of resources ‘on the fly’
and starts to push back data to the browser.
To move data between RP and LP (step 4), the Space Communications Protocol Specification-
Transport Protocol (SCPS-TP) [26, 27] is used, which enables streaming multiple TCP connections via
a single SCPS-TP one. To achieve further improvements, resources can be delivered in parallel through
multiple SCPS-TP channels or multiplexed into a single SCPS-TP flow. We point out that the latter
resembles the approach of SPDY.
From the viewpoint of the HTTP, the architecture of Figure 2 leads to the following improvements:
(i) the three-way handshake of the TCP is performed over the ground portion of the network (i.e., the
one with lower RTT values); (ii) the parser within the LP prevents a GET from being transmitted over the satellite
link; (iii) the proxying entities allow persistent connections at the transport layer, thus reducing the
number of slow-start phases; and (iv) compression reduces the amount of traffic.
The provided satellite channel is a Ku-band link using time division multiple access, with
rates of 128 kbps for the upstream and 1 Mbps for the downstream. The satellite is the
Express AM44 bent-pipe GEO at 11°W, operated by the Russian Satellite Communications Company.
The average RTT is 620 ms, also including the traversal of middleboxes and the wired trunks of
the network.
Lastly, to have a fair evaluation, all the best practices in terms of TCP tweaking presented in
Section 2.3 have been applied during the entire round of tests.
4. MEASUREMENT METHODOLOGY
In order to have an effective assessment of the performance of SPDY, we first had to choose a
representative sample of the actual Web 2.0 panorama. To this aim, we decided to take as a reference
the ranking compiled by Google back in 2010 [21]. In fact, it contains sites having interesting features
potentially impacting both HTTP and SPDY. In particular, many destinations have an average page
size of 320 Kbytes; on average, only 2/3 of inline objects are actually compressed; and at least 80%
of the pages composing such sites have 10 or more inline objects retrieved from a single host. However,
as a consequence of the fast evolution of the Internet, when comparing such works with more recent
measurements (e.g., [2, 28, 29]), we noticed that Web 2.0 could not be effectively described with such
values anymore.
Table I. Number of transmission control protocol connections per site when using standard HTTP.

Rank  Site name        No. of connections  Kbytes/page  Requests  No. of domains
1     Wikipedia        17                  111.13       18.97     3.00
2     Reddit           41                  371.42       51.44     14.72
3     Flickr           14                  499.28       17.34     4.24
4     Slashdot         50                  712.62       48.76     10.84
5     BBC              88                  950.73       85.01     12.00
6     Microsoft        58                  1176.25      52.43     9.56
7     Huffington Post  173                 1481.80      110.11    32.89

Rank is performed according to Kbytes per page on the wire. Reported values are averaged over the
entire dataset.
The list available in [21] still contains the most popular and accessed Web destinations
but needs further investigation to properly select the most representative ones.
Therefore, after a preliminary set of measurements (see [15] and [16] for a detailed discussion),
showcased in Table I, we decided to select seven sites to obtain a properly mixed set of destinations.
In particular, they represent an accurate snapshot of sites designed to use a high volume of inline objects,
also with many interdependencies, as happens in the most complex Web 2.0 services. Moreover, their
features are general enough to allow modeling a wider variety of cases, thus making our investigation
more effective and general. Summing up, we selected the following set of websites:
Reddit and Slashdot: they represent services aggregating people to foster discussion. They are
based on a limited amount of large images, but also using a variety of small graphical elements
(often defined as thumbnails);
Huffington Post and BBC: they mainly provide news, also embedding videos and plug-ins to share
and discuss over online social networks. Consequently, both sites exploit a relevant number of
inline objects scattered across different domains, also containing additional software components;
Flickr and Microsoft: they have been selected as typical examples of pages crafted for showcasing
or advertising contents/products. Particularly, they include very large graphic elements, and plug-
ins to implement carousels or multimedia playback; and
Wikipedia: it relies on a simple/text-based layout, also reducing to the minimum the number of
external dependencies.
In more detail, Table I reports the number of transport connections required to retrieve all the
objects that compose the selected sites when using standard HTTP. This can be adopted as a metric to
quantify the stress, in terms of transport layer complexity, that a PEP or a middlebox has to handle. Such
values range from 17 (Wikipedia) to 173 (Huffington Post) and reflect the content-rich
nature of Web 2.0 applications, as well as the need of retrieving a
heterogeneous set of objects like JavaScript code or additional plug-ins.
Instead, when using SPDY, the number of transport connections always reduces to 1, which is a
direct consequence of the protocol architecture. In fact, all the data are sent via a single flow, and this
can be exploited to reduce the overheads experienced by PEPs, as well as to shift the complexity from
the ISP to the border, in this case, into a software layer running on the end nodes.
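Statistics such as those in Table I can be derived from the HAR traces; the sketch below counts requests, distinct domains, and (when the exporter records it) distinct transport connections per page. Paths and the presence of the connection field are assumptions about the capture setup.

```python
import json
from urllib.parse import urlparse

def page_stats(har_path: str) -> dict:
    """Summarize one HAR trace: requests, Kbytes, distinct domains/connections."""
    with open(har_path) as fp:
        entries = json.load(fp)["log"]["entries"]
    domains = {urlparse(e["request"]["url"]).hostname for e in entries}
    # Chrome-based exporters usually tag each entry with the identifier of the
    # TCP connection that served it; degrade gracefully if the field is absent.
    connections = {e["connection"] for e in entries if "connection" in e}
    kbytes = sum(max(e["response"].get("bodySize", 0), 0) for e in entries) / 1024
    return {"requests": len(entries), "domains": len(domains),
            "connections": len(connections), "kbytes": round(kbytes, 2)}

print(page_stats("traces/en.wikipedia.org/00.har"))  # placeholder path
```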
The set considered is also characterized by wide variations in terms of page sizes (denoted as
Kbytes/page in Table I). In fact, in the normal daily usage of the Web, users also access simple
destinations. Thus, we included sites like Wikipedia and Reddit so as to avoid the pitfall of a
biased performance evaluation, as can happen for protocols primarily optimized to handle
complex sites.
We also considered the 'implementation' of a given site by taking into account the number of
requests and of domains accessed. In this respect, the former indicates sites having a quite relevant over-
head in terms of HTTP conversations. The latter gives an idea of how many domains are used to store
objects to increase the degree of parallelism of the retrieval phase, as well as of the 'mashed' nature of
Web 2.0 sites.
As regards the trials, they have been performed in three different satellite configurations: the one in
Figure 1 with the real ISP (denoted in the following as ‘PEP’ to emphasize the presence of a middle-
box) and by using netem+dummynet to emulate round trip delays of 520 and 720 ms. In this case,
bandwidths have been set to 1 Mbit/s and 256 kbit/s, in the forward and return link, respectively (see,
e.g., [30] for a discussion on tuning emulated satellite test beds starting from real measurements). In
addition, the SPDY proxy depicted in Figure 1 has been used in all configurations to add different
packet losses.
For each configuration, we retrieved all the sites of Table I by using both HTTP and SPDY. Each
test has been repeated 20 times, resulting in statistics and timings for about 38,000 objects. To handle
and process such data, we used a Structured Query Language (SQL) database and ad hoc scripts.
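A minimal sketch of such a processing pipeline is reported below: it walks the HAR traces and stores the per-object timings into an SQLite table. The schema and directory layout are illustrative, not the exact ones used in our campaign.

```python
import json
import sqlite3
from pathlib import Path

db = sqlite3.connect("trials.sqlite")
db.execute("""CREATE TABLE IF NOT EXISTS objects (
    site TEXT, run INTEGER, url TEXT,
    blocked REAL, wait REAL, receive REAL, body_size INTEGER)""")

# Walk every HAR trace and store one row per retrieved object.
for har_path in Path("traces").glob("*/*.har"):
    site, run = har_path.parent.name, int(har_path.stem)
    entries = json.loads(har_path.read_text())["log"]["entries"]
    db.executemany(
        "INSERT INTO objects VALUES (?, ?, ?, ?, ?, ?, ?)",
        [(site, run, e["request"]["url"],
          e["timings"].get("blocked", -1), e["timings"].get("wait", -1),
          e["timings"].get("receive", -1), e["response"].get("bodySize", -1))
         for e in entries])
db.commit()
```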
5. EXPERIMENTAL RESULTS
To have a basic characterization of the traffic, we preliminarily investigated the overall dataset. As
expected, we found a very high number of TCP conversations, mainly because of the content-rich
nature of Web 2.0 applications using a composite set of objects, plug-ins, or multi-provider mash-
ups. The only exception is Wikipedia, as it mainly exploits a text-based layout and does not embed
additional services, such as widgets à la Google Maps. From a 'complexity' viewpoint, SPDY clearly
reduces the number of transport connections traversing the middlebox. Thus, the number of sockets
to be handled is smaller, reflecting in less state information to be stored within a PEP or in lower
overheads in the stack of mobile or limited-capability devices.
To better comprehend results, we recall that, when the PEP is deployed, two kinds of acceleration
are used. One acts by splitting TCP connections, thus enhancing both SPDY and HTTP. Another one
involves the processing of HTTP traffic. However, even if SPDY preserves the HTTP semantics, it
encrypts all the data, which therefore cannot be recognized by the PEP. Thus, such an improvement is
only applied to HTTP.
5.1. Page loading time on lossless and lossy links
In this section, we compare the PLT of HTTP and SPDY by showcasing the 95th percentile of the
repeated trials averaged over the set of considered sites.
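For reference, such PLT statistics can be extracted from the HAR traces, assuming the onLoad value stored in pageTimings is taken as the PLT; a sketch with placeholder paths follows.

```python
import json
import statistics

def page_load_times(har_paths):
    """Collect the onLoad time (ms) recorded in each HAR trace."""
    plts = []
    for path in har_paths:
        with open(path) as fp:
            pages = json.load(fp)["log"]["pages"]
        plts.extend(p["pageTimings"]["onLoad"] for p in pages
                    if p["pageTimings"].get("onLoad", -1) > 0)
    return plts

runs = [f"traces/en.wikipedia.org/{i:02d}.har" for i in range(20)]  # placeholder
values = sorted(page_load_times(runs))
print(f"mean PLT {statistics.mean(values):.0f} ms, "
      f"95th percentile {values[int(0.95 * (len(values) - 1))]:.0f} ms")
```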
Figure 3 depicts the PLT values collected when using an error-free satellite link. For the case of
RTT = 520 ms, SPDY grants smaller average times compared with HTTP. But when RTT = 720 ms,
such an enhancement vanishes, with HTTP performing the same. Instead, when RTT = 620 ms, HTTP
performs slightly better than SPDY, mainly owing to the presence of the PEP deployed by the ISP.
Figure 3. Average page loading time (PLT) on a lossless link.

Figure 4 portrays the PLT values when the satellite link introduces a packet loss of 1%. We point
out that, even if we performed tests with different losses, only results with 1% are shown, as they
are the most relevant, also representing the worst case. In fact, in the presence of losses >1%,
the browser usually fails to complete almost all transfers, thus aborting the rendering of the
hypertext and returning an error. When packet losses are present, the PLT experiences an increment of
11% with SPDY and 16% with HTTP. On the emulated platform, values are almost doubled, as the
lack of a PEP increases the PLT by 18% for HTTP and by 31% for SPDY. Hence, when large RTTs and
losses are not mitigated by a PEP, SPDY seems to be less robust because of its single TCP connection
design, eventually causing an increase in the experienced PLTs. In addition, the variation range of the PLT
is very limited with PEPs and is kept particularly small with SPDY over the PEP.

Figure 4. Average page loading time (PLT) on a 1% lossy link.
However, as will be shown later, even if the PLT is an effective and widely used parameter to assess
the performance of the Web, it does not take into account QoE metrics. For instance, it fails to capture
the effective readability of a Web page or its level of completeness during the download process.
5.2. Throughput analysis
The analysis of the throughput is a relevant means to characterize the usage of network resources.
Similarly to the PLT, such a metric does not efficiently capture how 'promptly' a
page is delivered to users. Instead, it gives some hints on how HTTP and SPDY react to the latency
and losses introduced by the satellite channel, especially in terms of utilization of the transmission
resources available on the link.
In all the trials, both protocols experience higher throughputs when retrieving the Huffington Post, as
its content-rich nature gives the TCP a longer temporal horizon to increase the transmission
window and to exploit the available resources (i.e., to 'fill the bandwidth pipe'). In more detail, the
parallel-connection flavor of HTTP partially compensates for high latencies even when the
PEP is not deployed. For the case of SPDY, its single-connection blueprint leads to a less aggressive
behavior in terms of throughput compared with HTTP. As regards the PEP, its connection-splitting
nature allows saturating all the available bandwidth both with HTTP and with SPDY. In the
presence of errors and high latencies, SPDY performs worse than HTTP in all scenarios, mainly
because of its more fragile nature rooted in the exploitation of a single connection. In other words,
a burst of lost packets is not distributed over multiple connections, but concentrates on the single
flow, causing the congestion control of the TCP to react. Table II presents the average values computed
over the entire dataset.
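As a reference, a rough per-page application-level throughput can be derived from a HAR trace as the ratio between the bytes of all the responses and the time span covered by the requests, as in the following sketch (the path is a placeholder).

```python
import json
from datetime import datetime, timedelta

def page_throughput_kbps(har_path: str) -> float:
    """Rough application-level throughput (kbit/s) of one page retrieval."""
    with open(har_path) as fp:
        entries = json.load(fp)["log"]["entries"]
    def started(entry):
        # HAR timestamps are ISO 8601, typically with a trailing 'Z'.
        return datetime.fromisoformat(entry["startedDateTime"].replace("Z", "+00:00"))
    first = min(started(e) for e in entries)
    last = max(started(e) + timedelta(milliseconds=e["time"]) for e in entries)
    total_bits = 8 * sum(max(e["response"].get("bodySize", 0), 0) for e in entries)
    return total_bits / max((last - first).total_seconds(), 1e-3) / 1000.0

print(page_throughput_kbps("traces/www.huffingtonpost.com/00.har"))  # placeholder
```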
5.3. Packet size analysis
To better understand the impact of the Web 2.0 paradigm over the satellite link, we want to quantify the
improvement of SPDY in terms of usage of transmission resources. Thus, we investigated
the average size of the protocol data units (PDUs) generated by each protocol. Specifically,
in all the considered scenarios, SPDY exhibits a reduced number of tiny PDUs (i.e., <80 bytes) com-
pared with HTTP, despite the presence of the PEP. In more detail, 60% of the PDUs generated by
SPDY have a size in the range 1280–1500 bytes, while for HTTP this fraction reduces to 28%. Such a behav-
ior is of particular importance in the case of satellite, because fewer packets reflect into less time spent
in accessing the channel. Therefore, the access to Web 2.0 contents through high-delay links should
not experience a loss of performance because of additional rounds of contention at the media access
control layer.

Table II. Throughput for Wikipedia and Huffington Post.

                 RTT = 520 ms        RTT = 620 ms        RTT = 720 ms
ploss = 0%       HTTP     SPDY       HTTP     SPDY       HTTP     SPDY
Wikipedia        149.40   135.40     141.17   132.20     117.00   111.80
Huffington Post  217.75   210.40     219.28   216.59     221.20   213.40
ploss = 1%       HTTP     SPDY       HTTP     SPDY       HTTP     SPDY
Wikipedia         89.40   123.80      93.67    90.65      97.00    89.40
Huffington Post  193.00   193.25     194.10   137.73     204.00   172.12
RTT, round-trip time.

Figure 5. HAR capturing for Wikipedia: green is the connect time, purple is the wait time, and gray is the receive
time. HAR, HTTP Archive; RTT, round-trip time; PEP, performance-enhancing proxy.
5.4. Per-object analysis
As said, the PLT only offers information on the time frame between the request of a page and its
completion (i.e., when the last inline object linked in the hypertext is received) and does not allow
quantifying the QoE perceived by users. Therefore, we processed the HAR collected for each trial and
investigated timing statistics with per-object granularity. To this aim, we developed an ad hoc
tool, which has been released under an open-source license [31]. Figure 5 depicts an example of a
HAR waterfall diagram containing all the timing information for the objects composing the home page
of Wikipedia.
Specifically, three time statistics have been extracted:
block: it is the time spent by the browser to gain access to a free socket. This strictly depends on
the number of parallel connections supported, which is equal to 6 for HTTP, while for
SPDY it is always equal to 1 because it relies upon a unique multiplexed transport flow;
wait: it is the time the client awaits before receiving a response from the server. In other words, it
is the time that the server uses to deliver the beginning of a response header; and
receive: it is the time needed to completely receive an object.
Figures 6, 7, and 8 show the CDFs of the aforementioned time values.
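The per-object distributions can be computed directly from the database sketched in Section 4; the following snippet, whose table and column names are therefore only illustrative, extracts the median and the 95th percentile of each timing.

```python
import sqlite3

db = sqlite3.connect("trials.sqlite")
for metric in ("blocked", "wait", "receive"):
    rows = db.execute(f"SELECT {metric} FROM objects WHERE {metric} >= 0")
    values = sorted(value for (value,) in rows)
    quantile = lambda q: values[int(q * (len(values) - 1))]
    # Median and tail of the empirical distributions shown in Figures 6-8.
    print(f"{metric:8s} median={quantile(0.5):8.1f} ms   "
          f"p95={quantile(0.95):8.1f} ms   (n={len(values)})")
```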
Specifically, the CDFs of the block time depicted in Figure 6 clearly highlight the impact of high
RTTs when HTTP is used. As far as SPDY is concerned, for more than 95% of the trials, the delays due to
block conditions are mostly near 0. In more detail, its single-socket architecture enables constantly
feeding the channel, thus avoiding the overheads of accessing/creating different transport layer connections.

Figure 6. CDF of block timings of the objects composing Web pages (different RTT values are marked with
vertical lines). RTT, round-trip time; PEP, performance-enhancing proxy.

Figure 7. CDF of wait timings of the objects composing Web pages (different RTT values are marked with
vertical lines). RTT, round-trip time; PEP, performance-enhancing proxy.
As regards the wait time showcased in Figure 7, because it represents the time frame between the
transmission of the request and the reception of the response header, it is always at least equal to
one RTT plus the service time needed by the servers and the PEP for processing. In our trials, SPDY does
not totally outperform HTTP. The main reason lies in the multiplexing policy used in the adopted
implementation. In fact, according to [10], the TCP provides a single stream of data on which SPDY
multiplexes multiple logical streams; thus, clients and servers must intelligently interleave data mes-
sages for concurrent sessions. Unfortunately, our setup does not exploit any predictive or optimized
frameworks, hence causing the serialization of inline objects to limit the performance.
The receive time presented in Figure 8 closely relates to how much the application layer takes
advantage of the available bandwidth. In all scenarios, we found that both protocols produce bursty
traffic, which interferes with flow control algorithms of the TCP. Also in this case, the policy used by
SPDY to interleave data coming from different streams can impact on the receive time. With respect to
Figure 8, we can notice that the PEP effectively mitigates the impact of high RTTs when using HTTP.
Similarly, for the case of SPDY, the larger congestion window, as well as its enhanced transport behavior,
allows similar benefits.

Figure 8. CDF of receive timings of the objects composing Web pages (different RTT values are marked with
vertical lines). RTT, round-trip time; PEP, performance-enhancing proxy.
5.5. Discussion of results
As discussed, to effectively assess the performance of HTTP and SPDY, the presented metrics should
be considered as a whole. In other words, evaluating them separately does not give any real insight into
the user experience, which is a critical aspect for the usability of Web 2.0 contents via satellite links.
The first consideration concerns the PLT. Our investigation revealed that the lower values achieved
by HTTP are partially voided by the timing statistics characterizing the reception of inline objects
composing the page. In fact, SPDY reduces waiting times, thus making the reception of a complete
page more ‘responsive’. Such an improvement quickly vanishes when a PEP is deployed, hence making
the behaviors of the two protocols very similar. However, when using SPDY, the HTTP acceleration
deployed by the ISP is not applied, as the related traffic is encrypted. Therefore, SPDY can be used in
place of such a module, thus making middleboxes less complex and expensive.
A similar consideration can be carried out for the throughput. Numerical results indicate that the
main bottleneck is still the TCP, which can be mitigated by the PEP and the aggressive parallel connec-
tion flavor of HTTP. In this perspective, SPDY cannot saturate the available bandwidth as HTTP does
(even if the parameters ruling the congestion control algorithms have been finely tuned). Yet users perceive
similar browsing QoE for RTT = 520 ms and RTT = 620 ms, and for the case of RTT = 720 ms, SPDY
outperforms HTTP. This is mainly due to the reduced overheads in terms of HTTP headers and TCP
connection setup/tear-down procedures. Therefore, a greater throughput of data over the link does not
imply a greater throughput in terms of inline objects.
Lastly, in all scenarios, errors severely influence the behavior of SPDY; this can be par-
tially mitigated by the PEP or by using ad hoc countermeasures deployed in the lower layers of the
stack [32].
6. CONCLUSIONS AND FUTURE WORK
In this paper, we investigated SPDY as an alternative to HTTP for accessing content-rich Web 2.0
destinations via satellite links. To this aim, we developed an ad hoc client, a proxy, and a set of tools
to perform trials both in real and emulated satellite environments. Results revealed that in some cases
SPDY can be used in place of the HTTP acceleration part of the PEP, thus reducing both the complexity
and costs for the ISP. However, in the presence of errors, its single-stream nature introduces
fragilities; thus, proper countermeasures should be used in the lower layers of the protocol stack (e.g.,
forward error correction or coding schemes [33]).
Future work aims at improving the scheduling policy of SPDY, especially to find an optimal mapping
between a priority class and the HTML object(s), in order to minimize latencies in the rendering of a
Web page or, at least, for the most significant part of it.
ACKNOWLEDGEMENTS
This work has been partially funded by the European Space Agency (ESA) within the framework of the Satellite
Network of Experts (SatNex-III), CoO3, Task3, ESA contract no. 23089/10/NL/CLP. Thanks are due to Link Tele-
comunicazioni S.r.l. (http://www.linksrl.tv/) for providing the satellite experimental platform and, in particular, to
Ing. Ilaria Agostini, Ing. Giuseppe Spanò, and Mr. Roberto Parodi for the technical support and the preparation of
the satellite test bed.
REFERENCES
1. Hsu I. Multilayer context cloud framework for mobile Web 2.0: a proposed infrastructure. International Journal of
Communication Systems 2013; 26(5):610–625.
2. Caviglione L. Extending HTTP models to Web 2.0 applications: the case of social networks. In Proc. of the 4th IEEE Int.
Conference on Utility and Cloud Computing (UCC): Melbourne, Australia, December 2011; 361–365.
3. Souders S. High-performance Web sites, Communications of the ACM 2008; 51(12):36–41.
4. Caviglione L. Introducing emergent technologies in tactical and disaster recovery networks. International Journal of
Communication Systems 2006; 19(9):1045–1062.
5. Bartoli G, Fantacci R, Gei F, Marabissi D, Micciullo L. A novel emergency management platform for smart public safety.
International Journal of Communication Systems 2015; 28(5):928–943.
6. Fairhurst G, Caviglione L, Collini-Nocker B. FIRST: future internet – a role for satellite technology. In Proc. of the IEEE
Int. Workshop on Satellite and Space Communications, (IWSSC 2008): Siena, Italy, October 2008; 160–164.
7. Caviglione L. Can satellites face trends? The case of Web 2.0. In Proc. of the 2009 Int. Workshop on Satellite and Space
Communications (IWSSC 2009): Siena, Italy, September 2009; 446–450.
8. Chakravorty R, Clark A, Pratt I. Optimizing Web delivery over wireless links: design, implementation, and experiences.
IEEE Journal on Selected Areas in Communications 2005; 23(2):402–416.
9. Rendon-Morales E, Mata Diaz J, Alins J, Munoz JL, Esparza O. Performance evaluation of selected transmission control
protocol variants over a digital video broadcasting – second generation broadband satellite Multimedia System with QoS,
International Journal of Communication Systems 2013; 26(12):1579–1598.
10. Belshe M, Peon R. SPDY protocol, 2012. draft-mbelshe-httpbis-spdy-00, Network Working Group, IETF.
11. Welsh M, Greenstein B, Piatek M. SPDY performance on mobile networks. Available from: https://developers.google.com/
speed/articles/spdy-for-mobile, 2012. [Last accessed: February 2014].
12. Wang XS, Balasubramanian A, Krishnamurthy A, Wetherall D. Demystifying page load performance with WProf. In Proc.
of the 10th USENIX Conference on Networked Systems Design and Implementation (NSDI13), Feamster N, Mogul J (eds.)
USENIX Association: Berkeley, CA, USA, 2013, pp. 473–486.
13. Erman J, Gopalakrishnan V, Jana R, Ramakrishnan K. Towards a SPDY’ier mobile Web. In Proc. of the 9th ACM
Conference on Emerging Networking Experiments and Technologies. ACM: Santa Barbara, CA, USA, December 2013;
303–314.
14. Kim H, Yi G, Lim H, Lee J, Bae B, Lee S. Performance analysis of SPDY protocol in wired and mobile networks. In
Ubiquitous Information Technologies and Applications. Springer: Berlin Heidelberg, January 2014; 199–206.
15. Cardaci A, Caviglione L, Gotta A, Tonellotto N. Performance evaluation of SPDY over high latency satellite channels. In
Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, Dhaou R, Beylot A-L, Montpetit
M-J, Lucani D, Mucchi L (eds.), PSATS 2013, LNICST 123, vol. 2013, Springer: Toulouse, France, 2013, pp. 123–134.
16. Cardaci A, Celandroni N, Ferro E, Gotta A, Davoli F, Caviglione L. SPDY – a new paradigm in Web technologies: per-
formance evaluation on a satellite link. In Proc. of the 19th Ka and Broadband Communications Navigation and Earth
Observation Conference. FGM Events LLC: Florence, Italy, October 2013; 239.
17. Caviglione L, Gotta A. Characterizing SPDY over high latency satellite channels. EAI Endorsed Transactions on Mobile
Communications and Applications December 2014; 114(15):1–10.
18. Caviglione L, Celandroni N, Collina M, Cruickshank H, Fairhurst G, Ferro E, Gotta A, Luglio M, Roseti C, Salam AA,
Secchi R, Sun Z, Coralli AV. A deep analysis on future Web technologies and protocols over broadband GEO satellite
networks. International Journal of Satellite Communications and Networking 2015; 33(5):451–472.
19. Fielding R, Gettys J, Mogul J, Frystyk H, Masinter L, Leach P, Berners-Lee T. Hypertext transfer protocol – HTTP/1.1,
IETF, Network Working Group, RFC 2616 1999:1–176.
20. Schneider F, Agarwal S, Alpcan T, Feldmann A. The new Web: characterizing AJAX traffic. In Passive and Active Network
Measurement. Springer Berlin: Heidelberg, 2008; 31–40.
21. Ramachandran S. Web metrics: size and number of resources. Available from: https://developers.google.com/speed/articles/
web-metrics. [Last Accessed: April 2015].
22. Spatscheck O, Hansen JS, Hartman JH, Peterson LL. Optimizing TCP forwarder performance. IEEE/ACM Transactions on
Networking 2000; 8(2):146–157.
23. Dukkipati N, Refice T, Cheng Y, Chu J, Herbert T, Agarwal A, Jain A, Sutin N. An argument for increasing TCP’s initial
congestion window. ACM SIGCOMM Computer Communication Review 2010; 40(3):27–33.
24. Handley M, Padhye J, Floyd S. TCP congestion window validation, IETF, Network Working Group, RFC 2861, 2000.
25. Web page replay. Available from: https://github.com/chromium/web-page-replay. [Accessed on 6 June 2016].
26. Consultative Committee for Space Data Systems (CCSDS), Space Communications Protocol Specification-Transport
Protocol (SCPS-TP), recommendation for space data systems standards, CCSDS 714.0-B-1, no. 1, blue book, 1999.
27. Durst RC, Miller GJ, Travis EJ. TCP extensions for space communications, ACM/Kluwer Wireless Networks Journal
(WINET) 1997; 3(5):389–403.
28. Li W, Moore AW, Canini M. Classifying HTTP traffic in the new age. In Proc. of ACM SIGCOMM: Seattle, WA, USA,
2008; 17–22.
29. Ihm S, Pai VS. Towards understanding modern Web traffic. In Proc. of the 2011 ACM SIGCOMM Conference on Internet
Measurement Conference. ACM: Berlin, Germany, 2011; 295–312.
30. Gotta A, Potorti F, Secchi R. An analysis of TCP startup over an experimental DVB-RCS platform. In Proc. of the 2006
Int. Workshop on Satellite and Space Communications: Leganés, Spain, 2006; 176–180.
31. Automatic HAR capturer. Available from: https://github.com/cyrus-and/chrome-har-capturer. [Accessed on 6
June 2016].
32. Kuo CI, Shieh CK, Hwang WS, Ke CH. Performance modeling of FEC-based unequal error protection for H.264/AVC
video streaming over burst-loss channels. International Journal of Communication Systems 2014:1099–1131.
33. Davern P, Nashid N, Zahran A, Sreenan CJ. HTTP acceleration over high latency links. In Proc. of the 4th IFIP Int.
Conference on New Technologies Mobility and Security (NTMS): Paris, France, February 2011; 1–5.
AUTHORS’ BIOGRAPHIES
Andrea Cardaci (BS 2013) is an MSc student of the Computer Science and Networking
course at the University of Pisa. His interests are primarily focused on distributed systems,
computer networks, and high-performance computing.
Luca Caviglione (MS 2002-PhD 2006) is a Researcher of the Institute of Intelligent Sys-
tems for Automation, National Research Council of Italy, Genoa. He is a Work Group
Leader of the Italian IPv6 Task Force, a Contract Professor, and a Professional Engineer.
His current research interests include P2P systems, wireless communications, cloud archi-
tectures, and network security. Since 2011, he has been an Associate Editor of Transactions
on Emerging Telecommunications Technologies (Wiley).
Erina Ferro (MS 1975) received her Laurea Degree with distinction in Computer Sci-
ence from the University of Pisa, Italy, in 1975. Since 1976, Dr Ferro was with CNR
(National Research Council), where she is currently employed as a Director of Research
at the Institute CNR-ISTI (Istituto di Scienza e Tecnologie dell’Informazione ‘Alessandro
Faedo’) in Pisa. In 1989 and 1996, she obtained two patents for the design of the FODA
and FODA/IBEA systems for the satellite access, respectively. Her main research activities
are in wireless communications (satellite communications, terrestrial wireless communica-
tions, and sensor networks), especially sensor networks applied to cultural heritage, health
and well being, and AAL. She participated in many European projects, and she co-authored
more than 150 scientific papers published on international journals and congresses. Cur-
rently, she is scientifically responsible for the FP7 European DOREMI project, and she
coordinates the Smart Area Project of the CNR Research area in Pisa. Moreover, she started
several proposals for projects, now funded by the Tuscany region. She is an Associate Editor of the International
Communication Systems journal. Dr Ferro is head of the Wireless Networks Laboratory (WNLAB) at CNR-ISTI
(http://www.isti.cnr.it/research/unit.php?unit=WN), and she is the reference person in the DIITET Department of
CNR (to which ISTI belongs) for the Smart Cities and Communities Research area.
Alberto Gotta (MS 2002-PhD 2007) is a Researcher at the Wireless Networks Laboratory
at CNR-ISTI, Italy. His expertise is mainly related to architectures for terrestrial wireless
and satellite networks applied in the context of ubiquitous networks for multimedia traffic
and environmental monitoring.
... The TCP-based anti-congestion algorithm is not suited to a wireless environment filled with mutations [1]; instead, it is more suitable for the relatively stable access environment [2] [3]. The TCP retransmission is also designed to prevent router congestion, and due to the characteristics of wireless communication, the high air-interface latency and packet loss will seriously affect the overall transmission performance of TCP [4] [5]. In addition, TCP also has other issues like stream multiplexing. ...
... 4. Besides, QUIC has the overall framework of HTTP applications, including encryption, authentication, HTTP header compression and transport stream multiplexing, etc., which is much more convenient than implementing a UDP-based HTTP protocol by yourself. 5. QUIC add the size of an app. ...
... We do not aim for wide comparison between GQUIC and TCP for web services since this depends on the characteristics of the web pages 16 and such comparison should rather consider the top-visited web page. 13 We rather aim at identifying trends in the results and the performance of the convergence of the congestion control when the full HTTP stack is considered. We focus on the page load time (PLT) 22 metric as it returns similar trending results to visual QoE metrics for such simple pages. ...
Article
Full-text available
This article proposes a discussion on the strengths, weaknesses, opportunities, and threats related to the deployment of QUIC end‐to‐end from a satellite‐operator point‐of‐view. The deployment of QUIC is an opportunity for improving the quality of experience when exploiting satellite broadband accesses. Indeed, the fast establishment of secured connections reduces the transmission time of short files. Moreover, removing transport‐layer performance‐enhancing proxies reduces the cost of network infrastructures and improves the integration of satellite systems. However, the congestion and flow controls at end points are not always suitable for satellite communications due to the intrinsic high bandwidth‐delay product. Further acceptance of QUIC in satellite systems would be guaranteed if its performance in specific use cases were increased. Based on an emulated platform and on open‐source software, this paper proposes values of performance metrics as one piece of the puzzle. The final performance objective requires consensus among the different actors. The objective should at least provide acceptable performance for satellite operators to allow QUIC traffic but reasonable enough to keep QUIC deployable on the Internet.
... As shown, the presence of the covert channel mainly impacts on how ''quick" the pipeline of HTTP can be fed, thus causing many inline objects to be blocked. Therefore, a proper caching or optimized scheduling should be deployed as to counteract decays in the QoE [61]. ...
Article
Information hiding is increasingly used to implement covert channels, to exfiltrate data or to perform attacks in a stealthy manner. Another important usage deals with privacy, for instance, to bypass limitations imposed by a regime, to prevent censorship or to share information in sensitive scenarios such as those dealing with cyber defense. In this perspective, the paper investigates how VoIP communications can be used as a methodology to enhance privacy. Specifically, we propose to hide traffic into VoIP conversations in order to prevent the disclosure, exposure and revelation to an attacker or blocking the ongoing exchange of information. To this aim, we exploit the voice activity detection feature available in many client interfaces to produce fake silence packets, which can be used as the carrier where to hide data. Results indicate that the proposed approach can be suitable to enforce the privacy in real use cases, especially for file transfers. As interactive services (e.g., web browsing) may experience too many delays due to the limited bandwidth, some form of optimization or content scaling may be advisable for such scenarios.
Article
Full-text available
Satellite networks usually use in-network methods (such as Performance Enhancing Proxies for TCP) to adapt the transport to the characteristics of the forward and return paths. QUIC is a transport protocol that prevents the use of in-network methods. This paper explores the use of the recently standardised IETF QUIC protocol with a focus on the implications on performance when using different acknowledgement policies to reduce the number of packets and volume of bytes sent on the return path. Our analysis evaluates a set of ACK policies for three IETF QUIC implementations, examining performance over cellular, terrestrial and satellite networks. It shows that QUIC performance can be maintained even when sending fewer acknowledgements, and recommends a new QUIC acknowledgement policy that adapts QUIC's ACK Delay value based on the path RTT to ensure timely feedback. The resulting policy is shown to reduce the volume/rate of traffic sent on the return path and associated processing costs in endpoints, without sacrificing throughput.
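The precise policy is defined in the cited work; as a rough illustration of the underlying idea (scaling the acknowledgement delay with the path RTT within fixed bounds), one might sketch it as follows, where every constant is an assumption made for the example.

```python
# Rough illustration of an RTT-adaptive ACK delay (not the exact policy of the
# cited paper): the delay grows with the smoothed path RTT, bounded below by a
# default value and above by a cap, so feedback stays timely on short paths
# while fewer ACK packets are generated on long (e.g., satellite) paths.
# All constants are assumptions chosen for this sketch.

DEFAULT_MAX_ACK_DELAY = 0.025   # 25 ms, QUIC's default advertised max_ack_delay
ACK_DELAY_CAP = 0.200           # upper bound so feedback never becomes too stale
RTT_FRACTION = 0.25             # delay ACKs by a fraction of the smoothed RTT

def adaptive_ack_delay(smoothed_rtt: float) -> float:
    """Return the ACK delay (seconds) to use for the current path RTT."""
    return min(ACK_DELAY_CAP,
               max(DEFAULT_MAX_ACK_DELAY, RTT_FRACTION * smoothed_rtt))

for rtt in (0.030, 0.120, 0.600):   # LAN-like, cellular-like, GEO-like RTTs
    print(f"RTT {rtt*1000:5.0f} ms -> ACK delay {adaptive_ack_delay(rtt)*1000:5.1f} ms")
```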
Conference Paper
Full-text available
Originally developed by Google, SPDY is an open protocol for reducing download times of content-rich pages, as well as for managing channels characterized by large Round Trip Times (RTTs) and high packet losses. With such features, it could be an efficient solution to cope with performance degradations of Web 2.0 services used over satellite networks. In this perspective, this paper evaluates the SPDY protocol over a wireless access also exploiting a satellite link. To this aim, we implemented an experimental set-up, composed of an SPDY proxy, a wireless link emulator, and an instrumented Web browser. Results confirm that SPDY can enhance performance in terms of throughput and reduce traffic fragmentation. Moreover, owing to its connection multiplexing architecture, it can also mitigate the transport layer complexity, which is critical in the presence of middleboxes deployed to isolate satellite trunks.
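Readers wishing to reproduce a comparable set-up can approximate the satellite segment with the Linux netem queueing discipline; the sketch below drives it from Python, where the interface name and the delay and loss figures are placeholders to be tuned to the scenario.

```python
# Sketch of emulating a satellite-like link with Linux tc/netem, driven from
# Python. The interface name and the delay/loss figures are placeholders and
# root privileges are required; apply a similar qdisc on the peer (or on both
# directions) to approximate a full GEO round-trip time.
import subprocess

IFACE = "eth0"  # placeholder network interface

def set_satellite_profile(delay_ms=300, jitter_ms=10, loss_pct=1.0):
    """Add one-way delay, jitter and random loss on IFACE egress via netem."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", IFACE, "root", "netem",
         "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
         "loss", f"{loss_pct}%"],
        check=True,
    )

def clear_profile():
    """Remove the emulation qdisc and restore the default behaviour."""
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)

if __name__ == "__main__":
    set_satellite_profile()
```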
Article
Full-text available
The goal of this work was to understand the direction of the emerging web technologies and to evaluate their expected impact on satellite networking. Different aspects have been analysed using both real satellite testbeds and emulation platforms at different test sites in Europe. This analysis included an evaluation of those HTTP/2.0 specifications that were implemented and released as open-source code in the experimental release of the SPDY protocol. SPDY performance was evaluated over satellite testbeds in order to understand the expected interaction with performance-enhancing proxies (including scenarios with a SPDY proxy at a satellite gateway), the impact of security and the effect of satellite capacity allocation mechanisms. The analysis also considered the impact of application protocols and the delay induced by end-system networks, such as a satellite-connected WiFi network. Copyright © 2015 John Wiley & Sons, Ltd.
Article
Full-text available
The increasing complexity of Web contents and the growing diffusion of mobile terminals, which use wireless and satellite links to get access to the Internet, impose the adoption of more specialized protocols. In particular, we focus on SPDY, a novel protocol introduced by Google to optimize the retrieval of complex webpages and to manage channels with large Round Trip Times and high packet losses. In this perspective, the paper characterizes SPDY over high-latency satellite links, especially with the goal of understanding whether it could be an efficient solution to cope with the performance degradations typically affecting Web 2.0 services. To this aim, we implemented an experimental set-up, composed of an ad-hoc proxy, a wireless link emulator, and an instrumented Web browser. The results clearly indicate that SPDY can enhance performance in terms of loading times and reduce traffic fragmentation. Moreover, owing to its connection multiplexing architecture, SPDY can also mitigate the transport layer complexity, which is critical in the presence of Performance Enhancing Proxies usually deployed to isolate satellite trunks.
Conference Paper
Full-text available
HTTP has been a great success: it is used by many applications and provides a useful request/response paradigm. We set out to answer two questions: What kinds of purposes is HTTP used for? What is actually transmitted over HTTP? Using full-payload data, we are able to answer both questions and to give historical context by conducting the analysis over a multi-year period. We show that a huge increase in HTTP for non-browsing purposes, notably web applications, news feeds and instant messaging, has occurred, and we give a quantitative analysis.
Chapter
Google proposed the new application-layer protocol named SPDY with the purpose of addressing the shortcomings of HTTP/1.1 and improving web speed. In this paper, we evaluate the SPDY protocol's performance in a variety of mobile environments, to examine the characteristics of the SPDY protocol and to compare it with the existing protocol. Through this performance evaluation, we also analyze the problems of SPDY and propose directions for improving the protocol.
Article
Despite its widespread adoption and popularity, the Hypertext Transfer Protocol (HTTP) suffers from fundamental performance limitations. SPDY, a recently proposed alternative to HTTP, tries to address many of the limitations of HTTP (e.g., multiple connections, setup latency). With cellular networks fast becoming the communication channel of choice, we perform a detailed measurement study to understand the benefits of using SPDY over cellular networks. Through careful measurements conducted over four months, we provide a detailed analysis of the performance of HTTP and SPDY, how they interact with the various layers, and their implications on web design. Our results show that unlike in wired and 802.11 networks, SPDY does not clearly outperform HTTP over cellular networks. We identify, as the underlying cause, a lack of harmony between how TCP and cellular networks interact. In particular, the performance of most TCP implementations is impacted by their implicit assumption that the network round-trip latency does not change after an idle period, which is typically not the case in cellular networks. This causes spurious retransmissions and degraded throughput for both HTTP and SPDY. We conclude that a viable solution has to account for these unique cross-layer dependencies to achieve improved performance over cellular networks.
Article
Existing context-aware systems focus only on characterizing the situation of an entity to exhibit the advantage of contextual information association, but they have no mechanism to facilitate the interoperation and reuse of contextual information. Cloud computing offers an adaptable and flexible solution for existing context-aware applications, integrating Mobile Web 2.0 technologies. This work presents a multilayer context cloud framework (MCCF) that integrates Web 2.0 technologies into a mobile context-aware system for use in a cloud computing environment. The proposed MCCF includes a context sensor layer, a context information layer, a context service layer, a context representation layer, a cloud computing layer, and a mobile Web 2.0 context-aware Software as a Service layer. To demonstrate the feasibility of this approach, a Mobile Web 2.0-based context-aware Software as a Service platform, which is a cloud computing application based on MCCF, is implemented to provide continuous and context-aware monitoring of a specific application. Copyright © 2011 John Wiley & Sons, Ltd.
Article
Unequal error protection is a popular technique for video streaming. Forward error correction (FEC) is one of the error control techniques used to improve the quality of video streaming over lossy channels. Moreover, frame-level FEC techniques have been proposed for video streaming because video frames have different priorities within the transmission-rate constraint on a Bernoulli channel. However, various communication and storage systems are likely to be corrupted by bursts of noise in current wireless environments. If the burst losses go beyond the protection capacity of FEC, the efficacy of FEC can be degraded. Therefore, our proposed model allows an assessment of the perceived quality of H.264/AVC video streaming over bursty channels, and it is validated by simulation experiments on the NS-2 network simulator at a given estimate of the packet loss ratio and average burst length. The results offer a useful reference for designing FEC schemes for video applications; given the video coding and channel parameters, the proposed model can provide a more accurate evaluation tool for video streaming over bursty channels and help to evaluate the impact of FEC performance under different burst-loss parameters. Copyright © 2014 John Wiley & Sons, Ltd.
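For context, bursty losses of the kind considered above are commonly parameterized by an average packet-loss ratio and an average burst length, which map directly onto a two-state Gilbert model; the following generic sketch illustrates that mapping and is not the simulation set-up of the cited work.

```python
# Generic sketch of a two-state Gilbert loss model (not the cited paper's
# simulator): the average packet-loss ratio and average burst length are
# mapped onto the state-transition probabilities, then a loss pattern is
# generated and checked against the target statistics.
import random

def gilbert_params(loss_ratio: float, avg_burst_len: float):
    """Return (p_good_to_bad, p_bad_to_good) matching the two target statistics."""
    r = 1.0 / avg_burst_len                    # mean burst length = 1 / r
    p = loss_ratio * r / (1.0 - loss_ratio)    # stationary P(bad) = p / (p + r)
    return p, r

def simulate(n_packets: int, loss_ratio: float, avg_burst_len: float, seed: int = 0):
    p, r = gilbert_params(loss_ratio, avg_burst_len)
    rng, bad, losses = random.Random(seed), False, []
    for _ in range(n_packets):
        # good -> bad with probability p; bad stays bad with probability 1 - r
        bad = (rng.random() < p) if not bad else (rng.random() >= r)
        losses.append(bad)
    return losses

pattern = simulate(100_000, loss_ratio=0.03, avg_burst_len=4.0)
print(f"empirical loss ratio: {sum(pattern) / len(pattern):.3f}")
```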
Conference Paper
Web page load time is a key performance metric that many techniques aim to reduce. Unfortunately, the complexity of modern Web pages makes it difficult to identify performance bottlenecks. We present WProf, a lightweight in-browser profiler that produces a detailed dependency graph of the activities that make up a page load. WProf is based on a model we developed to capture the constraints between network load, page parsing, JavaScript/CSS evaluation, and rendering activity in popular browsers. We combine WProf reports with critical path analysis to study the page load time of 350 Web pages under a variety of settings including the use of end-host caching, SPDY instead of HTTP, and the mod pagespeed server extension. We find that computation is a significant factor that makes up as much as 35% of the critical path, and that synchronous JavaScript plays a significant role in page load time by blocking HTML parsing. Caching reduces page load time, but the reduction is not proportional to the number of cached objects, because most object loads are not on the critical path. SPDY reduces page load time only for networks with high RTTs and mod pagespeed helps little on an average page.
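As an aside, the critical-path analysis mentioned above amounts to finding the longest duration-weighted path through the dependency graph of page-load activities; the toy sketch below shows the computation on an invented graph and durations, not on WProf's actual data model.

```python
# Toy illustration of critical-path analysis over a page-load dependency graph
# (invented activities and durations; this is not WProf's data model). Each
# activity depends on zero or more predecessors; the critical path is the
# longest duration-weighted chain, which lower-bounds the page load time.
from graphlib import TopologicalSorter

durations = {"html": 120, "css": 80, "js": 200, "parse": 60, "render": 40}  # ms
deps = {"html": [], "css": ["html"], "js": ["html"],
        "parse": ["js"], "render": ["css", "parse"]}

finish, critical_pred = {}, {}
for node in TopologicalSorter(deps).static_order():
    start = max((finish[d] for d in deps[node]), default=0)
    finish[node] = start + durations[node]
    critical_pred[node] = max(deps[node], key=lambda d: finish[d], default=None)

end = max(finish, key=finish.get)
path, cur = [], end
while cur is not None:
    path.append(cur)
    cur = critical_pred[cur]
print("critical path:", " -> ".join(reversed(path)), f"({finish[end]} ms)")
```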