Impact of Multipath in Mobile Backhaul Savings for ICN Architectures: An Evaluation Using ndnSIM
Silvia Lins1, Lian Araujo1, Andrey Silva1, Neiva Fonseca2 and Aldebaro Klautau1
1LASSE - 5G and IoT Research Group
Universidade Federal do Pará (UFPA)
Belém – PA – Brazil
2Ericsson Research
Stockholm, Sweden
{silvialins, ivanes, andreysilva, aldebaro}@ufpa.br
neiva.lindqvist@ericsson.com
Abstract. Current video streaming demands are motivating research on cost-efficient solutions for distributing such a large amount of traffic. Information-Centric Networking (ICN) is a relevant new paradigm that can inherently benefit from multipath transport. This work specifically evaluates the impact of multipath transport in ICN deployments with and without cache, assessing its capacity to alleviate bottlenecks in the radio access network backhaul links. Previous related work has evaluated cache and multipath techniques used jointly; here, new insights are provided on how multipath functionality alone already has a positive impact on ICN deployments. Another contribution of this work is the evaluation of cache deployment in various aggregation nodes of a realistic, mobile-operator-inspired scenario, showing how cache location influences end-to-end delay reduction. The evaluation is performed using the open-source ndnSIM simulator and indicates, for example, that the backhaul savings originated by multipath deployment are relevant even without cache; it also assesses cache deployment in intermediate aggregation nodes.
1. Introduction
Mobile traffic is expected to grow at an annual rate of 53% from 2015 to 2020 [Cisco 2015]. To cope with this traffic growth, the ICN architecture can empower content distribution and enhance network performance as well as end-user experience. Some studies [Yi et al. 2013] indicate that multipath, a concept that overlaps with network forwarding strategies, is a key enabler of the benefits provided by ICN networks. It is therefore relevant to evaluate the impact of multipath in mobile networks, correlating it to the use of a specific forwarding scheme.
ICN considers a content, or a name, as the core of the Internet protocol stack, replacing the IP address field [Zhang et al. 2014]. This solution removes the anchor between information and host (maintained by the IP architecture since its beginning), performing content distribution in a much more efficient way and deploying cache and multipath natively [de Brito et al. 2013]. Several cache studies have been presented in the ICN literature, but optimized cache deployment is still far from being a trivial task [Yi et al. 2013], due, for example, to its interplay with multipath and congestion control functions. It also depends on the network topology, the traffic, and several other aspects.
Indications that the gains related to cache usage in ICN also depend on other ICN features, as well as on network scenario characteristics, are present in some state-of-the-art works, but these works do not specify which functionality gives the most expressive contribution. According to [Imbrenda et al. 2014], caching with a negligible amount of memory on customer premises reduces the load in the access network by 25%. In [Carofiglio et al. 2015], customized traffic measurements obtained from a mobile operator scenario implementing multipath showed that traffic can be reduced by 60% to 95% during the peak hour using a few GBs of memory in network equipment. On the other hand, [Fayazbakhsh et al. 2013] establishes that the optimistic best-case improvement ICN can provide is 17% over a simple edge-caching architecture (considering all metrics). Such variation in results indicates that studies isolating the specific multipath benefits in ICN caching are still missing.
To the best of the authors' knowledge, the open ICN literature does not report, for example, the percentage of traffic reduction obtained solely by adding caching or multipath separately, or whether there are advantages in placing cache on intermediate (aggregation) nodes. Also, specific savings for the Mobile Backhaul (MBH) links, i.e. the links near the access network connecting macro sites with the first aggregation point, are so far unclear. Such a mobile backhaul link is "a potentially very damaging bottleneck if it lacks the required capacity" according to [Paolini 2011], and corresponds to approximately 30% of the total
network operational costs [Skyfiber 2013].
Another relevant aspect in ICN architecture simulations is the choice of the simulation tool. Several open-source packages are currently available, and ndnSIM [Mastorakis et al. 2015] is the most used among them [Tortelli et al. 2016]. In [Tortelli et al. 2016] it is stated that almost 2/3 of the published results in the area are not reproducible, either because the authors did not specify the tool used for evaluation (20%) or because they used a custom simulator (40%), which is the case of the cache studies [Imbrenda et al. 2014] and [Carofiglio et al. 2015] mentioned above. In this respect, those studies are complemented here: the ndnSIM simulator was chosen to perform the simulations, making the results of this work easier to verify. With a scenario inspired by a mobile operator deployment implemented as a tree-like topology [Carofiglio et al. 2015], this paper evaluates and details the multipath impact and benefits in ICN cache deployments, showing which percentage of traffic reduction was obtained solely by adding multipath functionality and which was achieved by enabling cache in different network nodes. This paper also quantifies the gains related to the macro site backhaul links and assesses the best cache location deployment options.
The next sections are organized as follows: Section 2 describes the ICN paradigm and contrasts its advantages and disadvantages with IP architectures. Section 3 details the multipath strategy adopted. Section 4 details the chosen simulation scenario, specifying the traffic models used as well as the simulation parameters. Section 5 presents and discusses the results. Section 6 concludes the work and outlines the expected next steps of this research.
2. ICN Overview
ICN (Information-Centric Networking) [Xylomenos et al. 2014] is a proposal to redesign the Internet architecture so that it removes the anchor between content and host imposed by IP. CCN (Content Centric Networking) [Jacobson et al. 2009] and NDN (Named Data Networking) [NDN 2014] are currently the main drivers in the ICN research area for architectural deployment.
The first CCN article was published in 2009 [Jacobson et al. 2009] and proposed an architecture that could allegedly achieve performance, security, and easy scalability when compared to IP. The IP architecture currently relies on several add-ons that increase network cost and complexity, such as CDN servers and middle-boxes. NDN proposes replacing the IP address field with a content name field, which is very flexible and can even be layered over IP itself for compatibility purposes.
Information exchange in CCN/NDN architectures comprises two main structures:
data packets and interest packets. Both packet structures carry a name that identifies the
content, and the communication is mainly operated by the receiver node, or “consumer”.
The process happens as follows. The consumer inserts the name that identifies the content or information piece it wants to retrieve into an interest packet and sends it to the network. After forwarding the interest, the consumer sets a timer to wait for a response from the network. If nothing is received, it resends the interest and resets the timer. The network node receiving the interest first performs a search in its "content store" (CS), i.e. a content cache. If the content is found, it is directly sent back to the node or interface that requested it.
If the content is not found, the router checks the "Pending Interest Table" (PIT), which records the interfaces from which interest requests were received. If an existing entry is found in the PIT for the same content, the router just adds the new requesting interface to that entry. Each arriving Interest also has an associated lifetime; if it expires, the PIT entry is removed. If no PIT entry is found for this content, the router creates a new entry associating the interest request with its incoming interface and forwards the interest to another router according to the "Forwarding Information Base" (FIB). The FIB records the output interfaces (possibly multiple) for content name prefixes, and the interest is forwarded to each recorded output interface that may reply with the content packet.
This process is repeated until a node with the requested content is found. The
content is sent back through the same path used to forward the corresponding interest,
until it reaches the consumer (or the consumers) that requested the content.
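As a complement to the description above, the following minimal sketch (plain C++ with standard containers, not the actual NFD/ndnSIM data structures) captures the CS, PIT-aggregation and FIB steps in that order; the exact-match FIB lookup and the console output are simplifications introduced for illustration.

```cpp
// Illustrative sketch (plain C++, not NFD/ndnSIM code) of the CS -> PIT -> FIB
// pipeline described above. A real forwarder uses longest-prefix match and
// per-entry lifetimes; here everything is reduced to exact-match maps.
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

using FaceId = int;

struct Router {
  std::map<std::string, std::string> cs;           // Content Store: name -> data
  std::map<std::string, std::set<FaceId>> pit;     // PIT: name -> requesting faces
  std::map<std::string, std::vector<FaceId>> fib;  // FIB: name prefix -> output faces

  void onInterest(const std::string& name, FaceId inFace) {
    if (cs.count(name)) {                          // 1) cache hit: answer directly
      std::cout << "CS hit: " << name << " sent back on face " << inFace << "\n";
      return;
    }
    auto pending = pit.find(name);                 // 2) already pending: aggregate
    if (pending != pit.end()) {
      pending->second.insert(inFace);
      return;
    }
    pit[name].insert(inFace);                      // 3) new entry, forward via FIB
    for (FaceId out : fib[name])
      std::cout << "Interest " << name << " forwarded on face " << out << "\n";
  }

  void onData(const std::string& name, const std::string& data) {
    cs[name] = data;                               // optionally cache the content
    for (FaceId f : pit[name])                     // satisfy every aggregated request
      std::cout << "Data " << name << " sent down face " << f << "\n";
    pit.erase(name);                               // PIT entry is consumed
  }
};

int main() {
  Router r;
  r.fib["/video/clip1"] = {2};
  r.onInterest("/video/clip1", 0);    // forwarded upstream, PIT entry created
  r.onInterest("/video/clip1", 1);    // aggregated: face 1 added to the same entry
  r.onData("/video/clip1", "bytes");  // faces 0 and 1 both receive the Data
  r.onInterest("/video/clip1", 3);    // now answered from the Content Store
}
```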
Since NDN benefits from an adaptive forwarding plane, its various forwarding
strategies allow measurement of QoS metrics and can change content routes according to
congestion status for example. The concept of forwarding strategies frequently overlaps
with the definition of multipath [Yi et al. 2013], and it is relevant to discuss the different
forwarding strategies currently available for NDN networks.
3. NDN Forwarding Strategies
Forwarding in NDN is said to be adaptive because, initially, routers only define the interfaces available to send interests and their preference or priority of use; this information is then updated as soon as the network starts sending and receiving packets. Based, for example, on which interface provides the smaller average RTT, the FIB helps adapt interface preferences. The metric used for interface preference ranking depends on the chosen forwarding policy, which acts according to the information stored in the FIB. Forwarding strategies can also be used to avoid congestion, for example by imposing limits on the amount of Interests that can be forwarded per face.
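To make this adaptation concrete, the short sketch below (plain C++, not ndnSIM code; the field names and the per-face limit value are assumptions made for this example) ranks faces by their measured average RTT and stops using a face once its number of in-flight Interests reaches a limit.

```cpp
// Tiny illustration of the two adaptation knobs named above: preferring the
// face with the smaller measured average RTT and refusing to forward once a
// face reaches its per-face Interest limit.
#include <iostream>
#include <vector>

struct FaceInfo {
  int id;
  double avgRttMs;  // smoothed RTT learned from returning Data packets
  int inFlight;     // Interests forwarded on this face but not yet satisfied
  int limit;        // congestion-avoidance cap per face (assumed value)
};

// Return the index of the admissible face with the smallest average RTT,
// or -1 if every face has reached its limit (hold or drop the Interest).
int pickFace(std::vector<FaceInfo>& faces) {
  int best = -1;
  for (int i = 0; i < static_cast<int>(faces.size()); ++i) {
    if (faces[i].inFlight >= faces[i].limit) continue;  // face is "full"
    if (best == -1 || faces[i].avgRttMs < faces[best].avgRttMs) best = i;
  }
  if (best != -1) ++faces[best].inFlight;
  return best;
}

int main() {
  std::vector<FaceInfo> faces = {{0, 35.0, 0, 4}, {1, 20.0, 0, 4}};
  for (int k = 0; k < 10; ++k)
    std::cout << "Interest " << k << " -> face " << pickFace(faces) << "\n";
  // Face 1 (lower RTT) takes the first 4 Interests, face 0 the next 4,
  // and the last two get -1 because both faces hit their limit.
}
```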
Each NDN forwarding scheme might be more suitable for a certain application, e.g., Dynamic Adaptive Streaming [Rainer et al. 2016] or Quality of Service in general [Kerrouche et al. 2016].
Besides the schemes above, several forwarding strategies have already been proposed for ICN networks [Li et al. 2016]. Among them, this work uses a Pending Interest (PI) scheme based on [Carofiglio et al. 2013], chosen for its relevance and its use in previous works [Carofiglio et al. 2015], [Nguyen et al. 2015]. For this strategy, considering a given router, when an interest packet must be forwarded, each face n listed in its FIB for the corresponding prefix has an associated weight w_n, defined as:

    w_n = 1 / P_n                                                        (1)

where P_n is the number of pending interests for face n. The final decision of which face will be used to forward an interest is made probabilistically, according to the weight of each face. This multipath feature was not readily available in the simulator and was implemented in ndnSIM, mainly in the Forwarding Strategy block. If more information about the algorithm implementation is needed, reference [Carofiglio et al. 2013] should be consulted.
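A minimal sketch of this face selection rule follows, assuming the weights of Eq. (1); it is an illustration of the probabilistic draw, not the strategy code implemented in ndnSIM for this paper, and the +1 in the weight is an assumption added only to avoid division by zero for idle faces.

```cpp
// Sketch of the Pending Interest (PI) face selection of Eq. (1): each face n
// gets weight w_n = 1/P_n (P_n = pending Interests on face n) and the outgoing
// face is drawn with probability proportional to w_n.
#include <iostream>
#include <random>
#include <vector>

int chooseFace(const std::vector<int>& pendingPerFace, std::mt19937& rng) {
  std::vector<double> weights;
  for (int p : pendingPerFace)
    weights.push_back(1.0 / (p + 1));  // +1 is an assumption to handle P_n = 0
  std::discrete_distribution<int> pick(weights.begin(), weights.end());
  return pick(rng);  // index of the chosen face, probability proportional to w_n
}

int main() {
  std::mt19937 rng(42);
  std::vector<int> pending = {10, 2, 2};  // P_n for three faces listed in the FIB
  std::vector<int> chosen(pending.size(), 0);
  for (int i = 0; i < 10000; ++i)
    ++chosen[chooseFace(pending, rng)];
  for (std::size_t n = 0; n < chosen.size(); ++n)
    std::cout << "face " << n << " chosen " << chosen[n] << " times\n";
  // The two lightly loaded faces are each picked roughly 3-4x more often than
  // the heavily loaded one, steering new Interests away from the backlog.
}
```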
In summary, forwarding strategies and multipath concepts are interdependent, and understanding one of them implies understanding the other as well. This work evaluates the multipath gains both in isolation and together with cache located in different network nodes. Section 4 provides further details regarding the scenarios implemented in the ndnSIM simulator.
4. Scenarios Description
Targeting the performance evaluation of the NDN architecture in the presence of multipath and cache functionalities, specific scenarios were implemented using the ndnSIM simulator. All scenarios are based on a tree-like topology, inspired by the real mobile operator deployment defined in [Carofiglio et al. 2015] and shown in Figure 1(a).
4.1. Baseline Scenario
The baseline scenario comprises 20 UEs (User Equipments) per macro site, with five macro sites in total. Link capacities between routers are defined as shown in Figure 1(a) and are also summarized in Table 1: wireless links between UEs and macro sites have 1 Gbps, while macro site backhaul links (i.e. the links that interconnect the base stations to the L3 routers) provide 400 Mbps. Connections between Level3 and Level2 routers are 500 Mbps links, connections between Level2 and Level1 routers are 1 Gbps links, connections between the PGW and L1 routers are 2 Gbps links, and the link interconnecting the PGW to the content server provides 30 Gbps of bandwidth. This baseline is used as a reference for all simulation setups and considers neither cache nor multipath, since multipath routing in IP access networks is not yet widely deployed in practice [Gurtov and Polishchuk 2009]. OSPF is used in this IP baseline scenario as the default routing protocol.
Table 1. Simulation Parameters: Link Capacities
Connection Capacity
UE – Macro sites 1 Gbps
Macro sites – Level3 Routers 400 Mbps
Level3 Routers – Level2 Routers 500 Mbps
Level2 Routers – Level1 Routers 1 Gbps
Level1 Routers – PGW 2 Gbps
PGW – Content Server 30 Gbps
The baseline scenario has two flavors:
- It is modeled first as an IP-based network, using the ns-3 simulator, implementing neither cache nor multipath functionality. It does not include any ICN property and targets content distribution from the server to the UEs. It is referred to throughout this work as the "CDN" scenario and was simulated to serve as a reference for end-to-end delay and throughput comparisons with the ICN scenarios. All result comparisons are discussed in Section 5.
- It is also modeled as an ICN baseline scenario using ndnSIM, with all the ICN properties described in Section 2. To generate several simulation scenarios that provide the desired comparisons, this ICN setup is used as a base for the other implementations described in the sequel (a minimal ndnSIM sketch of such a setup follows this list).
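As referenced above, the fragment below sketches how a reduced slice of such a topology (PGW, one L1 router, one L3 router, one macro site) can be assembled with the capacities of Table 1, assuming the ndnSIM 2.x / NS-3 helper APIs. The node roles, prefixes and application parameters are illustrative placeholders; the L2 level, the wireless UE links and the 30 Gbps server link are omitted for brevity, and this is not the authors' simulation script.

```cpp
// Reduced ndnSIM/NS-3 sketch of a PGW -> L1 -> L3 -> macro-site chain with the
// link capacities of Table 1 (assumed ndnSIM 2.x helper APIs; illustrative only).
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/ndnSIM-module.h"

using namespace ns3;

int main(int argc, char* argv[]) {
  CommandLine cmd;
  cmd.Parse(argc, argv);

  NodeContainer nodes;
  nodes.Create(4);  // 0: PGW (stands in for the content server), 1: L1, 2: L3, 3: macro site

  PointToPointHelper p2p;
  p2p.SetDeviceAttribute("DataRate", StringValue("2Gbps"));    // PGW - L1
  p2p.Install(nodes.Get(0), nodes.Get(1));
  p2p.SetDeviceAttribute("DataRate", StringValue("500Mbps"));  // L1 - L3 (L2 level omitted)
  p2p.Install(nodes.Get(1), nodes.Get(2));
  p2p.SetDeviceAttribute("DataRate", StringValue("400Mbps"));  // L3 - macro site backhaul
  p2p.Install(nodes.Get(2), nodes.Get(3));

  ndn::StackHelper ndnHelper;  // NDN stack on every node (CS size set per scenario)
  ndnHelper.InstallAll();
  ndn::StrategyChoiceHelper::InstallAll("/", "/localhost/nfd/strategy/best-route");

  // Consumer on the macro-site side, producer at the PGW node (standing in for
  // the content server behind it).
  ndn::AppHelper consumer("ns3::ndn::ConsumerCbr");
  consumer.SetPrefix("/content");
  consumer.SetAttribute("Frequency", StringValue("20"));  // Interests per second
  consumer.Install(nodes.Get(3));

  ndn::AppHelper producer("ns3::ndn::Producer");
  producer.SetPrefix("/content");
  producer.SetAttribute("PayloadSize", StringValue("4096"));  // chunk size from Table 2
  producer.Install(nodes.Get(0));

  ndn::GlobalRoutingHelper routing;
  routing.InstallAll();
  routing.AddOrigins("/content", nodes.Get(0));
  ndn::GlobalRoutingHelper::CalculateRoutes();

  Simulator::Stop(Seconds(20.0));
  Simulator::Run();
  Simulator::Destroy();
  return 0;
}
```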
The following scenarios are derived implementations based on the ICN baseline above. They do not change the link capacity configuration or the network node locations, differing from one another only in the strategy adopted for cache placement and in whether multipath features are present.
4.2. Multipath evaluation without cache deployment
The first simulation comparison setup (Scenario 1) implements multipath functionality on top of the baseline provided in Section 4.1. In Scenario 1, nodes do not perform caching and multipath is enabled, implemented as described in Section 3. Naturally, only nodes that have two or more connections are able to use multiple paths to send and receive information, which according to Figure 1(a) excludes the base station nodes, which have only one backhaul connection, and the content server. The PGW uses both links to forward packets to the L1 routers, and the L1 routers also divide output traffic among the three links that interconnect them to the Level2 routers. Level2 routers do the same with the links that connect them to the L3 routers. As already mentioned, the multipath strategy adopted to distribute traffic among the available interfaces of each router is detailed in Section 3 and is depicted in red in Figure 1(b) for better visualization.
(a) Baseline scenario implemented in ns-3 and in ndnSIM, inspired by a real mobile network deployment.
(b) Scenario 1: multipath implemented in ndnSIM, for all routers with two or more connections.
Figure 1. Baseline scenario and multipath scenario implemented for simulations.
Scenario 1 performance will be further compared in Section 5 with the ICN baseline, without cache or multipath functionalities (i.e. an ICN singlepath scenario), and these results are also contrasted with the CDN baseline, which is IP-based. These comparisons will enable assessment of multipath gains independently from cache advantages.
To evaluate cache gains in ICN deployments, two other scenarios were derived from the ICN baseline provided in Section 4.1: cache deployment without multipath (Section 4.3) and cache deployment with multipath enabled (Section 4.4).
4.3. Cache strategy evaluation: without multipath
Regarding cache placement evaluation, Scenario 2 is implemented considering only the existence of cache, without enabling multipath. Scenario 2 is further divided into four setups, each deploying cache in different nodes. The same total amount of cache is used in all setups, corresponding to 1% of the total content available for download [Li et al. 2012], as explained in the sequel (a worked sizing of this budget follows the list):
Scenario 2.1: In this setup, all cache is placed in the PGW node.
Scenario 2.2: Cache is placed in R1 nodes only, divided equally among the L1
routers.
Scenario 2.3: All cache placed in base station nodes, divided equally among them.
Scenario 2.4: Cache is distributed in the nodes as follows: like in the other cache scenarios, 1% of cache is assumed in the network. From this 1%, 4x more cache is placed in the R3 and R4 nodes (i.e. 20000 packets), divided equally between the two levels and among the router nodes (10000 packets at the L3 level, i.e. 2500 packets per R3 router, and 10000 packets at the base station level, i.e. 2000 packets per base station).
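For reference, the 1% budget and the Scenario 2.4 split follow directly from the catalogue parameters of Section 4.5 (10,000 contents of 250 chunks of 4096 bytes); the division by four R3 routers and five base stations is implied by the per-node figures quoted above.

```latex
% Cache budget check (numbers taken from Sections 4.3 and 4.5)
\begin{align*}
\text{total catalogue} &= 10{,}000 \times 250 = 2.5\times10^{6}\ \text{chunks} \approx 10\ \text{GB},\\
\text{1\% cache budget} &= 25{,}000\ \text{chunks} \times 4096\ \text{B} \approx 100\ \text{MB},\\
\text{L3 level: } 10{,}000 / 4 &= 2{,}500\ \text{chunks per R3 router},\\
\text{base stations: } 10{,}000 / 5 &= 2{,}000\ \text{chunks per base station}.
\end{align*}
```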
These setups were implemented to have their results contrasted with each other, evaluating the optimal cache placement for the targeted scenario. Their end-to-end delay and throughput statistics are also compared with the CDN baseline scenario, assessing how cache placement can positively impact backhaul bandwidth savings as well as application latency. All Scenario 2 setups are summarized in Figure 2.
Figure 2. Scenario 2: Scenario designed for cache evaluation purposes, without multipath. Four different cache setups were implemented in ndnSIM.
4.4. Cache strategy evaluation: with multipath
The same scenario setups from Section 4.3 were modeled with multipath activated, targeting the evaluation of the cache and multipath functionalities jointly. The scenarios for cache strategy evaluation with multipath are referred to as Scenario 3. They are shown in Figure 3 and their derived setups are:
Scenario 3.1: In this setup, all cache is placed in the PGW node, with multipath
activated.
Scenario 3.2: Multipath is also activated for this scenario, where cache is placed
in R1 nodes only, divided equally among the L1 routers.
Scenario 3.3: All cache placed in base station nodes, divided equally among them.
Multipath is also enabled (for nodes with two or more connections, as expected).
Scenario 3.4: Cache is distributed in the nodes exactly as in Scenario 2.4: from the 1% cache budget, 4x more cache is placed in the R3 and R4 nodes (i.e. 20000 packets), divided equally between the two levels and among the router nodes (10000 packets at the L3 level, i.e. 2500 packets per R3 router, and 10000 packets at the base station level, i.e. 2000 packets per base station). Multipath is also active for all nodes with multiple connections.
In total, 11 simulation runs are performed to compare the different cache and multipath setups and isolate the gains associated with each of them. First, the ICN (singlepath) scenario without cache and the CDN/IP scenario are compared (ICN baseline vs. CDN/IP baseline). Then, multipath is activated (Scenario 1) and compared with the ICN baseline. Next, Scenario 2 and Scenario 3 with their different cache placements are simulated, and their results are compared with each other.
Figure 3. Scenario 3: Scenario designed for cache evaluation purposes, with multipath. Four different cache setups with multipath enabled were implemented in ndnSIM.
4.5. Traffic Modeling
Users demand traffic from the content server, which provides a catalogue of 10,000 contents. Each content is composed of 250 chunks of 4096 bytes, and users request contents according to an exponential distribution with a mean of 0.08 contents/s. Content popularity is modeled as a Weibull distribution with shape 0.8 and scale 500, as proposed by [Imbrenda et al. 2014]. For a given time T between two contents, users request content chunks at a rate of 250/T chunks per second. The cache replacement policy adopted for all scenarios containing cache is LRU (Least Recently Used) [O'neil et al. 1993]. All simulation parameters are summarized in Table 2.
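A standalone sketch of how this traffic model can be generated per user is shown below; it is not the ndnSIM consumer application used in the paper, and mapping the Weibull draw onto a content index by truncation is one possible interpretation of the popularity model.

```cpp
// Standalone sketch of the traffic model of Section 4.5: exponential
// inter-content request times, Weibull(0.8, 500) content popularity truncated
// onto the 10,000-content catalogue, and a chunk rate of 250/T for each request.
#include <algorithm>
#include <cstdio>
#include <random>

int main() {
  const int    kCatalogue   = 10000;  // contents per server (Table 2)
  const int    kChunks      = 250;    // chunks per content
  const double kRequestRate = 0.08;   // contents/s per user (Section 4.5)

  std::mt19937 rng(7);
  std::exponential_distribution<double> interArrival(kRequestRate);  // mean 1/0.08 s
  std::weibull_distribution<double> popularity(0.8, 500.0);          // shape, scale

  double now = 0.0;
  for (int req = 0; req < 5; ++req) {
    double T = interArrival(rng);                // time until the next content request
    int rank = std::min(kCatalogue - 1,
                        static_cast<int>(popularity(rng)));  // popularity rank -> content id
    double chunkRate = kChunks / T;              // chunks per second for this content
    std::printf("t=%8.2fs  content #%5d  requested at %8.1f chunks/s\n",
                now, rank, chunkRate);
    now += T;
  }
  return 0;
}
```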
This traffic model assumes that users are constantly retrieving content from the servers rather than sending information, which mimics the current behavior of streaming applications, for example. Results for the described simulation setups are discussed in the following section.
Table 2. Simulation Parameters: Traffic Modelling
Parameter Value
Contents per CDN server 10000
Active users per base station 20
Base stations per scenario 5
Chunks per content 250
Chunk size (Bytes) 4096
Users content request rate modeling Exponential distribution with mean of 0.8 contents/sec
Content size (KB) 1024
Content amount requested per UE in total 100 contents
Cache replacement policy LRU
Content popularity modeling Weibull distribution (shape = 0.8, scale = 500)
5. Results
First, the CDN/IP baseline scenario is simulated with the parameters provided in Section 4. In this scenario, multipath is not natively deployed and end-to-end connections are established between the server and the UEs to send and receive information. The simulation results are compared with the ones from the ICN baseline scenario, which still does not assume multipath functionality but implements the information-centric paradigm for content distribution, as explained in Section 2. Multipath is then activated for the ICN scenario and its results are contrasted with the ones obtained in both the IP and ICN baselines,
as shown in Figure 4(a).
(a) Macro site and PGW backhaul throughput in CDN/IP vs. ICN.
(b) Average, maximum and minimum end-to-end delay statistics in CDN/IP vs. ICN singlepath and ICN multipath scenarios (without cache).
Figure 4. CDN/IP vs. ICN scenarios: throughput and end-to-end delay results.
The results in Figure 4(a) evaluate the multipath influence on backhaul traffic reduction. They assess how backhaul savings correlate with multipath activation even in ICN scenarios without any cache, and how this compares with an IP-based (CDN) scenario that implements neither cache nor multipath. As already stated, cache is not considered here because the intention is to isolate the multipath gains and evaluate whether they alone are worthwhile; for the CDN scenario, multipath is not present at all, since it is neither a native nor a trivial function, especially for IP-based networks.
The first result observed from Figure 4(a) indicates that ICN itself, without multipath (singlepath case), already provides around 26% of savings in macro site backhaul throughput (103 Mbps in the singlepath scenario against 139.96 Mbps in the CDN/IP scenario). This is due to the content dissemination paradigm adopted by ICN, in which there is no need to establish several unique end-to-end connections between UEs and the server as IP does. Interests for the same content sent by different users are aggregated in the pending interest table located in the router (and/or macro site) nodes, and when the content arrives at the node it is simply copied and forwarded to the different outgoing ports (or users) that requested the same interest/content.
This positive impact is also reflected in the end-to-end delay statistics shown in Figure 4(b). Even without cache, in both the CDN and ICN scenarios the content popularity is modeled as a Weibull distribution, as explained in Section 4.5. This allows (for the ICN paradigm) interest packet aggregation, which reduces delay especially for popular content requests.
Imagine a user A sending an Interest for a content P. Considering P a popular content, users B, C and D also request P before it arrives at A. In ICN scenarios, as soon as intermediate nodes identify several interests for the same content, they forward only the first request (from A) to the content server, recording B, C and D as interfaces to which the content will be forwarded as soon as it comes back from the server. By reducing the server load as well as intermediate link demands, ICN scenarios avoid congestion occurrences and also reduce end-to-end delay statistics, especially for nodes B, C and D: when they send requests for the same content, P is already on its way back from the server (because user A sent the request before).
In Figure 4(b), the average end-to-end delay for the ICN scenarios stayed around 75 ms, while in the CDN/IP scenario it reached almost 600 ms. This represents a delay reduction close to 87% for both the singlepath and multipath scenarios, showing that even without investing in cache deployment ICN could be a good alternative for time-sensitive applications.
Considering cache location in ICN scenarios, different cache placements were simulated as detailed in Section 4.3. First, cache evaluation was performed without multipath functionality, running Scenarios 2.1 to 2.4 and comparing their end-to-end delay statistics. For the singlepath scenarios, Figure 5(a) shows that there was no considerable difference among the obtained results, only a slight advantage of 1 ms average delay reduction when placing cache in the R1 nodes (Scenario 2.2). As already mentioned in Section 4.3, it is worth noticing that the same amount of cache was simulated in all scenarios, changing only the nodes in which the cache was inserted.
(a) End-to-end delay statistics in ICN singlepath scenarios with the same amount of cache placed in different nodes.
(b) End-to-end delay statistics in ICN multipath scenarios with the same amount of cache placed in different nodes.
Figure 5. ICN Singlepath vs. ICN Multipath: End-to-End delay results.
Regarding the multipath simulations, Figure 5(b) also shows that no relevant advantage was obtained by changing the cache location in the simulated Scenarios 3.1 to 3.4 (only a 1 ms reduction in average end-to-end delay for Scenario 3.2). When contrasting singlepath versus multipath scenarios with cache activated, Figure 6 reveals that around 8% of latency can be saved when placing cache in the R1 level nodes (i.e. a reduction from 76 ms to 70 ms on average).
Going back to the results in Figure 4(b), however, it is clear that most of the gains are provided by multipath activation alone. Only a 2 ms reduction can be attributed to cache usage, since a 72 ms average delay is already obtained by multipath activation alone, as depicted in Figure 4(b), and when activating cache in R1, Figure 6 shows that the delay only drops to 70 ms.
Figure 6. Average, maximum and minimum end-to-end delay statistics in ICN cache scenarios 2.2 and 3.2 (cache in R1 nodes), showing singlepath vs. multipath statistics.
Cache hit rate statistics confirm that multipath has a much more relevant role in end-to-end delay reduction than cache location. Figure 7(a) reveals that even when placing all cache in the PGW node (Scenarios 2.1 and 3.1), only an 11% cache hit rate was obtained for both the singlepath and multipath simulations. When spreading cache among all router levels, less than 1% cache hit rate was obtained, and when concentrating it in the base stations (Scenario 3, cache in R4) only 2% of user requests were answered from cache.
Previous cache studies [Carofiglio et al. 2015] showed a more positive impact of cache and multipath functionalities in ICN scenarios, but the amount of cache used in those studies was considerably high when compared to the total content available in the network. In [Carofiglio et al. 2015], 20,000 contents are available for retrieval by the users, and each content is divided into 250 chunks of 4096 bytes each, representing a total of approximately 20 GB. The authors in [Carofiglio et al. 2015] assumed 6 GB of cache, i.e. 30% of the total available content. This value can be considered large given the current dimension of the Internet. Here, the cache size represents only 1% of the amount of content available for retrieval, as suggested by another study [Li et al. 2012], which allows inferring the impact of this assumption.
Throughput statistics also reveal the relevance of the multipath functionality for bandwidth savings when compared to cache addition.
(a) Cache hit rate results for all singlepath and multipath scenarios.
(b) Average backhaul throughput results (connections between R3 nodes and base stations (R4)) for all singlepath and multipath scenarios.
Figure 7. Cache hit rate and backhaul throughput results for all singlepath and multipath scenarios.
When placing all of the cache in the R4 (base station) nodes, the backhaul throughput demand is around 66 Mbps if multipath is activated, contrasting with more than 100 Mbps if only singlepath is enabled. However, a very similar result was already obtained when simulating ICN scenarios with multipath only, without any cache addition, as already shown in Figure 4(a) and also depicted in Figure 7(b). In brief, singlepath ICN scenarios demand around 100 Mbps of backhaul throughput, while multipath scenarios (without cache) demand 67 Mbps. This already represents a reduction of 34% in backhaul bandwidth demand, and activating cache (with multipath already active) only saves 1 Mbps more in the best case (reducing from 67.33 Mbps in the ICN multipath scenario without cache to 66.4 Mbps in the ICN multipath scenario with cache in the base stations, Scenario 3.3).
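As a quick consistency check, the headline percentages follow from the throughput values reported in this section and in Figure 4(a), up to rounding:

```latex
% Relative backhaul savings (Mbps values reported above)
\begin{align*}
\text{ICN singlepath vs. CDN/IP:} \quad & 1 - 103/139.96 \approx 0.26,\\
\text{ICN multipath vs. ICN singlepath:} \quad & 1 - 67.33/103 \approx 0.35,\\
\text{ICN multipath vs. CDN/IP:} \quad & 1 - 67.33/139.96 \approx 0.52.
\end{align*}
```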
6. Conclusions and Next Steps
This paper evaluated, through ndnSIM network simulations, how multipath functionality by itself can provide considerable backhaul savings in information-centric networks, even without any cache deployment. Gains were assessed regarding macro site backhaul savings, bandwidth usage reduction in backhaul network links, and end-to-end delay statistics. Cache hit rates were also contrasted among different cache deployments in ICN scenarios. The first results compared ICN scenarios without cache, with and without multipath functionality, against a CDN/IP-based network with the same traffic demands and topology (and also without cache).
The results of this comparison revealed that, regarding backhaul demand savings, the ICN content distribution paradigm even without multipath already provides savings of around 26% in macro site and PGW backhaul throughput when compared to CDN architectures. The main conclusion, however, is that an ICN deployment with multipath, and without any cache addition, already yields valuable savings for the observed scenarios when compared to ICN singlepath, and even more when compared to the CDN/IP-based scenario, with the gains increasing to 52% in the macro site backhaul and 48% in the PGW backhaul when compared to CDN.
Regarding cache location, the results indicated that if realistic cache amounts are used (i.e. 1% of the total available content [Li et al. 2012]), the cache impact on both end-to-end delay and backhaul throughput reduction is not relevant when compared to the multipath benefits. A relevant aspect that is not tackled here, and figures as future work, is the evaluation of the efficiency and computational cost of multipath algorithms, in order to provide a feasible, realistic solution. Another open issue is to analyze the same aspects in congested networks, but this requires a congestion control algorithm for ICN, which is currently an open research question [Chai et al. 2013].
Another relevant aspect to be considered in future work is the adaptation of traffic modeling to next-decade trends. So far, traffic modeling standards consider that the traffic from the network towards the user is still much larger than the traffic in the opposite direction, i.e. the uplink; the main objective of the simulations presented here was to assess the gains specifically related to multipath functionality, and to evaluate the backhaul savings obtained by deploying cache in different aggregation nodes, under this assumption. Future trends indicate that this behavior may change, since users are already uploading more than 350 million photos per day on Facebook, and future mobile network scenarios where users also upload large amounts of content (rather than only consuming content from the network) are predicted by [Cisco 2015]. This change in user behavior will probably impact the cache strategies, and further assessments with updated traffic models will certainly be needed.
References
Carofiglio, G., Gallo, M., Muscariello, L., Papalini, M., and Wang, S. (2013). Optimal
multipath congestion control and request forwarding in information-centric networks.
In Network Protocols (ICNP), 2013 21st IEEE International Conference on, pages 1–
10. IEEE.
Carofiglio, G., Gallo, M., Muscariello, L., and Perino, D. (2015). Scalable mobile back-
hauling via information-centric networking. In Local and Metropolitan Area Networks
(LANMAN), 2015 IEEE International Workshop on, pages 1–6. IEEE.
Chai, W. K., He, D., Psaras, I., and Pavlou, G. (2013). Cache “less for more”
in information-centric networks (extended version). Computer Communications,
36(7):758–770.
Cisco, V. N. I. (2015). Forecast and methodology, 2014-2019 white paper. Technical
Report, Cisco, Tech. Rep.
de Brito, G. M., Velloso, P. B., and Moraes, I. M. (2013). Information Centric Networks:
A New Paradigm for the Internet. John Wiley & Sons.
Fayazbakhsh, S. K., Lin, Y., Tootoonchian, A., Ghodsi, A., Koponen, T., Maggs, B., Ng,
K. C., Sekar, V., and Shenker, S. (2013). Less pain, most of the gain: Incrementally
deployable icn. In ACM SIGCOMM Computer Communication Review, volume 43,
pages 147–158. ACM.
Gurtov, A. and Polishchuk, T. (2009). Secure multipath transport for legacy internet
applications. In 2009 Sixth International Conference on Broadband Communications,
Networks, and Systems, pages 1–8.
Imbrenda, C., Muscariello, L., and Rossi, D. (2014). Analyzing cacheable traffic in isp
access networks for micro cdn applications via content-centric networking. In Pro-
ceedings of the 1st international conference on Information-centric networking, pages
57–66. ACM.
Jacobson, V., Smetters, D. K., Thornton, J. D., Plass, M. F., Briggs, N. H., and Braynard,
R. L. (2009). Networking named content. In Proceedings of the 5th international
conference on Emerging networking experiments and technologies, pages 1–12. ACM.
Kerrouche, A., Senouci, M. R., and Mellouk, A. (2016). QoS-FS: A new forwarding
strategy with QoS for routing in Named Data Networking. In Communications (ICC),
2016 IEEE International Conference on, pages 1–7. IEEE.
Li, J., Wu, H., Liu, B., and Lu, J. (2012). Effective caching schemes for minimizing inter-
ISP traffic in named data networking. In Parallel and Distributed Systems (ICPADS),
2012 IEEE 18th International Conference on, pages 580–587. IEEE.
Li, M., Lukyanenko, A., Ou, Z., Yla-Jaaski, A., Tarkoma, S., Coudron, M., and Secci,
S. (2016). Multipath transmission for the internet: A survey. IEEE Communications
Surveys Tutorials, vol. PP, (99):1–41.
Mastorakis, S., Afanasyev, A., Moiseenko, I., and Zhang, L. (2015). ndnSIM 2.0: A new
version of the NDN simulator for NS-3. NDN, Technical Report NDN-0028.
NDN (2014). NSF Named Data Networking project. Available: http://www.named-data.net/ Last accessed: December 2016.
Nguyen, D., Fukushima, M., Sugiyama, K., and Tagami, A. (2015). Efficient multipath
forwarding and congestion control without route-labeling in ccn. In Communication
Workshop (ICCW), 2015 IEEE International Conference on, pages 1533–1538. IEEE.
O’neil, E. J., O’neil, P. E., and Weikum, G. (1993). The LRU-K page replacement algo-
rithm for database disk buffering. ACM SIGMOD Record, 22(2):297–306.
Paolini, M. (2011). An analysis of the total cost of ownership of point-to-point, point-
to-multipoint, and fibre options. White paper on crucial economics for mobile data
backhaul.
Rainer, B., Posch, D., and Hellwagner, H. (2016). Investigating the performance of pull-
based dynamic adaptive streaming in NDN. IEEE Journal on Selected Areas in Com-
munications, 34(8):2130–2140.
Skyfiber (2013). Breaking the Backhaul Bottleneck: Road to Profitable Backhaul. Tech-
nical Report.
Tortelli, M., Rossi, D., Boggia, G., and Grieco, L. A. (2016). ICN software tools: survey
and cross-comparison. Simulation Modelling Practice and Theory, 63:23–46.
Xylomenos, G., Ververidis, C. N., Siris, V. A., Fotiou, N., Tsilopoulos, C., Vasilakos, X.,
Katsaros, K. V., and Polyzos, G. C. (2014). A survey of information-centric networking
research. IEEE Communications Surveys & Tutorials, 16(2):1024–1049.
Yi, C., Afanasyev, A., Moiseenko, I., Wang, L., Zhang, B., and Zhang, L. (2013). A case
for stateful forwarding plane. Computer Communications, 36(7):779–791.
Zhang, L., Afanasyev, A., Burke, J., Jacobson, V., Crowley, P., Papadopoulos, C., Wang,
L., and Zhang, B. (2014). Named data networking. ACM SIGCOMM Computer Com-
munication Review, 44(3):66–73.