Container-Based Unified Testbed for Information-Centric Networking

This article appears in IEEE Network, Vol. 28, No. 6, November 2014. © 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Author 1: Hitoshi Asaeda (Email: asaeda@nict.go.jp), National Institute of Information and
Communications Technology (NICT), Tokyo, Japan
Author 2: Ruidong Li, National Institute of Information and Communications Technology (NICT),
Tokyo, Japan
Author 3: Nakjung Choi, Bell Labs, Alcatel-Lucent, Seoul, South Korea
Abstract
To develop and evaluate diverse Information-Centric Networking (ICN) protocols and applications,
large-scale and extensible testbeds that facilitate realistic evaluations must be designed and deployed. We
propose a container-based unified testbed for ICN, called CUTEi, which employs the Linux container (LXC)
lightweight virtualization mechanism to build testbed nodes. CUTEi enables testbed users to run applications
and protocols for ICN in two experimentation modes using two different container designs: (1)
application-level experimentation using a “common container” and (2) network-level experimentation using a
“user container. CUTEi also implements an “on-filesystem cache” to allocate caching data on a UNIX
filesystem and share the cached data with multiple containers. Thus far, we have deployed the CUTEi testbed
on nine sites and performed experiments using CCNx components on the testbed. Consequently, we have
found that it is easy to coordinate experiments on CUTEi; further, a comparison of its data fetch performance
with that of PlanetLab indicates that CUTEi is more stable for ICN experiments than PlanetLab.
Keywords: Testbed, ICN, CCN, Linux container, CUTEi
I. INTRODUCTION
Information retrieval is at the core of future Internet technology, and Information-Centric Networking (ICN) [1–3] therefore holds significant advantages in terms of information-sharing efficiency and robustness. It enables users to obtain content based on content identifiers (e.g., content names) rather than host addresses [1–3]. For instance, Content-Centric Networking (CCN) [1] focuses on content dissemination in the network;
data receivers retrieve content by name, using an “interest” packet, and content owners provide content through
the network, using “data” (called content object) packets. In CCN, a router forwards the interest packet by
looking up its Forwarding Information Base (FIB), which is populated by name-based routing protocols.
Owing to the in-network cache functionality, if a router receiving an interest has the requested named data in its cache, it forwards the data toward the receiver without forwarding the interest to the source host. As such, implementing this new communication technology requires innovative ideas and diverse research work in areas such as “naming,” “routing,” “caching,” and “content discovery.”
To evaluate novel architectures and protocols, researchers validate their proposals via simulations, emulations, or testbeds. Simulations and emulations provide valuable information about general trends. However, they lack the technical details that can make a real difference; hence the need for testbeds, which are relatively small in scale by definition but offer information about actual implementations, such as CCN’s prototype implementation, CCNx [4], running in complex networking environments.
In this article, we describe the design and implementation of the Container-based Unified TEstbed for iCN
(CUTEi). CUTEi employs the Linux container (LXC) [5] lightweight virtualization mechanism for node design
and enables testbed users to evaluate ICN applications and protocols. It offers two experimentation modes
thanks to different container designs: (1) an application-level experimentation mode that uses a “common
container,” and (2) a network-level experimentation mode that uses a “user container.” In the application-level
experimentation mode, users can simply use pre-installed common ICN applications, implementations, and
libraries in the testbed. In the network-level experimentation mode, users can use pre-installed software, install
and modify ICN software on top of their own user containers, and create their own individual network
topologies using their user containers. CUTEi also implements a content store manager (csmgr) that activates
an “on-filesystem cache” for the CCNx forwarding daemon (ccnd) to locate caching data on a UNIX filesystem on cheaper disk space. The on-filesystem cache also enables multiple instances of ccnd running on a common
container and/or user container(s) to share the cached contents.
As a first step, we installed testbed nodes on nine sites and conducted experiments using the CCNx
components on CUTEi. We also conducted experiments in which we compared the performance measured for
similar scenarios on both CUTEi and PlanetLab [6].
II. RELATED WORK
To evaluate novel ICN architectures and protocols, researchers validate their designs using simulations,
emulations, or testbeds. Simulations and emulations facilitate analysis of the basic behaviors and expected performance of proposed mechanisms. Such experimentation is repeatable, which is good for debugging and
useful for the protocol design phase. However, experiments conducted via simulations and emulations merely
approximate logical network environments, and it is difficult to obtain realistic datasets. For instance,
Mini-CCNx [7], a lightweight implementation that can emulate CCN-based communications on a few hundred
nodes on a laptop computer, is useful for emulating CCNx-based applications. However, it does not carry actual traffic, and everything is coordinated through user-defined scenarios, which do not always reflect real-world conditions.
There are global or application-specific testbeds deployed on the Internet that can be used to test proposed solutions. PlanetLab [6] is one such testbed, designed for evaluating networking and distributed systems.
However, PlanetLab requires extra work for users to set up test scenarios and experimental environments,
especially to construct a network or lower layer test environments, because of its inherent architectural
limitations. Moreover, the primary concern associated with PlanetLab is its stability. PlanetLab nodes are
usually heavily loaded and sometimes unstable; consequently, it is difficult to measure the achievable performance in the experiments being conducted. Several ICN research projects such as CCN, NDN [8], and PURSUIT
[9] also provide testbeds; however, they are dedicated to evaluating their own implementations, CCNx [4],
NDNx [8], and Blackadder [10], with the required components. Evaluation of alternative implementations on
top of these project-oriented testbeds is not facilitated.
III. REQUIREMENTS
To enable a potentially large number of users to run and evaluate various implementations on it, a global
ICN testbed should satisfy the following requirements:
Reality: Behavior that is as close as possible to that which would actually be observed in real deployments.
Virtualization: Virtualized network and resources to support a large number of users.
Resource and process isolation: Isolation of resource usage and traffic for different testbed users.
Robustness: Robustness of testbed nodes to increase the usability of users’ experiments.
Usability and extensibility: Facilitate experimental environment configuration and setup at a low cost to
test novel protocols, from the network layer to the upper application layer.
ICN-friendly: Accommodate tests for the special features of ICN.
In testbeds, the load on machines and on networks varies on every time scale and, in general, experiments conducted there are not repeatable. Testbeds should nevertheless satisfy the reality requirement outlined above, presenting plausible phenomena and enabling users to observe the actual behaviors exhibited by real network environments.
Computing and network resources in testbeds are shared among testbed users. Virtualization technology, however, enables each user to transparently own server and network resources and to create his/her own experimental environment, including user-dependent network topologies, in the testbed.
Resource and process isolation complements virtualization. Separating the usage of components such as CPU,
disk I/O, network bandwidth, memory, and namespace among testbed users reduces possible interference from
heavy users. Employing resource reservation in a testbed further facilitates high-performance experiments.
Users’ experiments should not be repeatedly hampered by testbed instability. Robustness is a key
consideration for testbed deployment because operational difficulties are significant and have a detrimental
effect on experiments. Further, changing the experimental sites or nodes incurs substantial costs.
For maximum usability, an ICN testbed should support a unified method of constructing various existing or
emerging ICN architectures such as CCN, NDN, and PURSUIT. It should also reduce the workload involved in
setting up applications, libraries, and scenarios that testbed users utilize. In addition, the testbed should
facilitate the execution and evaluation of network-layer protocols or any proposed routing mechanisms.
The ICN testbed should also accommodate ICN-specific features such as in-network caching, name-based forwarding, and diverse protocols. For instance, in-network caching needs special consideration in a testbed implementation, especially because the simultaneous use of a large amount of cache by several testbed users may affect the memory usage of the testbed node, which reduces the fidelity of the experimental results.
IV. CUTEI: CONTAINER-BASED TESTBED
We designed the Container-based Unified TEstbed for iCN, CUTEi, as an ICN testbed for evaluating
implementations of ICN architectures, protocols, and applications. This testbed satisfies the requirements
outlined in Section III. It is composed of CUTEi nodes set up in multiple sites. Each CUTEi node runs the
Linux operating system (Ubuntu Desktop 12.04 LTS 64-bit version is used in our environment), and utilizes a
Linux container (LXC) [5] for node virtualization.
A. Architectural Overview
The CUTEi architecture comprises two levels, the OS level and the container level, as depicted in Fig. 1. At
the OS level, a CUTEi node is implemented on a physical machine or on a VM on VMware vSphere/ESXi or VMware Player, providing resources for the CUTEi testbed. (In this article, we focus on the case where CUTEi nodes are implemented on VMs.) CUTEi is composed of multiple CUTEi nodes at the OS level.
At the container level, LXC containers are installed on a CUTEi node. CUTEi node resources, such as CPU
and bandwidth, are isolated and allocated to different users.
The core concept underlying the CUTEi architecture is the implementation of the single “common space,”
which is shared by all testbed users, and the “user space,” which comprises the LXC containers assigned to one
testbed user. These two spaces facilitate two experimentation modes such that users can easily conduct their
experiments: application-level experimentation in the “common space” and network-level experimentation in
the “user space.”
The application-level experimentation mode is used in scenarios where the goal is to test major or common ICN applications using the existing ICN networking protocols. The network-level experimentation mode can
be used to test newly developed ICN networking protocols. These two modes are differentiated primarily for
the convenience of testbed users. Through this design, users intending to conduct experiments at the application
level do not need to perform any additional network-level configuration, while users intending to evaluate networking protocols can also achieve their goals easily.
Fig. 1 illustrates the operation of both modes at the different levels, using CCNx’s ccnd as an example. The
common space shown in the lower right portion of Fig. 1 facilitates application-level experimentation, whereas
the two user spaces depicted in the upper right portion of Fig. 1 illustrate the network-level experimentation
mode. Common containers with ccnd form an experimental network environment. On the basis of the common
space assigned by the central administrators (see Section IV.C), a testbed user configures the neighbors of the ccnd running on his/her PC according to his/her experimental scenario. After configuring the ccnd neighbors, s/he runs the application on the common container from his/her PC(s) to publish/subscribe content through this common space.
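As an illustrative sketch only (not the article's own configuration), the neighbor setup and publish/subscribe step described above can be scripted with the standard CCNx utilities; the container addresses, name prefix, and file names below are hypothetical.

```python
# Sketch: register FIB entries on a local ccnd toward the common space and
# publish/fetch a file by name. Addresses, the name prefix, and file names
# are hypothetical examples; they are not taken from the CUTEi deployment.
import subprocess

COMMON_CONTAINERS = ["203.0.113.10", "203.0.113.20"]  # assumed global IPs
PREFIX = "ccnx:/cutei/demo"                           # assumed name prefix

# Forward interests matching PREFIX to the common containers.
for neighbor in COMMON_CONTAINERS:
    subprocess.check_call(["ccndc", "add", PREFIX, "udp", neighbor])

# Publish a local file into the repository (assumes ccnd and ccnr are running),
# then fetch it back by name to confirm it is reachable over the common space.
subprocess.check_call(["ccnputfile", PREFIX + "/sample.dat", "sample.dat"])
with open("fetched.dat", "wb") as out:
    subprocess.check_call(["ccncat", PREFIX + "/sample.dat"], stdout=out)
```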
In Fig. 1, User1 and User2 desire to evaluate their own versions of networking protocols or software on
user-defined CCN topologies. After they acquire the user containers from the site administrators (see Section
IV.C), they form a networking environment specifically for their experiments. As depicted in the upper right
portion of Fig. 1, two networking topologies are deployed separately for User1 and User2. Experiments
conducted at the network level enable users to install and run software or applications on their user containers.
A routing topology can be individually formed based on each user’s demand. This flexibility of free topology construction in the user space is realized through coordination between pre-established Generic Routing Encapsulation (GRE) [11] tunnels and private-address routing through the CUTEi nodes, as sketched below.
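The following is a minimal sketch of how such a pre-established GRE tunnel and the associated private-address route might be set up between two CUTEi nodes; the public addresses, tunnel name, and user subnet are hypothetical, and the actual CUTEi configuration scripts may differ.

```python
# Sketch: create a GRE tunnel between two CUTEi nodes and route a user-space
# private subnet over it (run as root). All names/addresses are hypothetical.
import subprocess

LOCAL_PUBLIC = "198.51.100.1"        # this CUTEi node (assumed)
REMOTE_PUBLIC = "198.51.100.2"       # peer CUTEi node (assumed)
TUNNEL = "gre-cutei1"
REMOTE_USER_SUBNET = "10.0.2.0/24"   # user containers on the peer node

def run(*cmd):
    subprocess.check_call(cmd)

run("ip", "tunnel", "add", TUNNEL, "mode", "gre",
    "local", LOCAL_PUBLIC, "remote", REMOTE_PUBLIC, "ttl", "255")
run("ip", "link", "set", TUNNEL, "up")
# Route the peer's private user-container range through the tunnel so that
# user-space traffic stays inside the closed experimental topology.
run("ip", "route", "add", REMOTE_USER_SUBNET, "dev", TUNNEL)
```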
At the same time, software installation and configuration consume significant amounts of time. CUTEi
enables testbed users to share the major or common ICN software and applications on top of CUTEi nodes. In
addition to OS commands and libraries, CCNx [4], NDNx [8], Blackadder [10] with related components used
for PURSUIT [9], and other software and libraries are currently pre-installed in a Logical Volume (LV) of each
CUTEi node as basic ICN components. NEPI [12], a tool for automating users’ experiments (see the next section), is also installed in the LV. Both the common container and the user containers are created from snapshot-based LVs as copy-on-write clones (see the sketch below). If users want to run modified CCNx code on the testbed, they can install their own CCNx code in their user containers and test it in their user spaces.
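For illustration, snapshot-based, copy-on-write containers of this kind could be provisioned with the LXC command-line tools of that era (lxc-create/lxc-clone over an LVM backing store); the container, volume-group, and size values below are hypothetical and CUTEi's own provisioning scripts may differ.

```python
# Sketch: create a base container on an LVM backing store holding the
# pre-installed ICN software, then give a user a copy-on-write snapshot clone
# of it (run as root). Names and sizes are hypothetical examples.
import subprocess

def run(*cmd):
    subprocess.check_call(cmd)

# Base container with the pre-installed ICN components (CCNx, NDNx, ...).
run("lxc-create", "-n", "icn-base", "-t", "ubuntu",
    "-B", "lvm", "--vgname", "vg_cutei", "--fssize", "10G")

# Copy-on-write snapshot clone serving as one user container.
run("lxc-clone", "-s", "-o", "icn-base", "-n", "user1-container1")
run("lxc-start", "-n", "user1-container1", "-d")
```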
B. Experimental Setup in User Space
All common containers possess a global IP address to connect with each other or with outside networks, while each user container uses a private IP address (e.g., 10.0.1.2), and each user space is a closed networking environment. Nevertheless, a user can log in to his/her user containers using SSH with his/her certificate, or access them
from PCs connected to the Internet using TUN/TAP devices [13] and SSH. For instance, users can install their
implementation on their user containers and PCs, and create their own CCN topologies with their PCs. The
private address space and GRE tunnels used for its routing are coordinated by central administrators.
Users can easily set up their experimental environments in all their containers in CUTEi nodes using NEPI
[12]. NEPI is a tool for automating experiment description, distributing codes, and executing network
experiments on network evaluation platforms: The user first uses a Python-based language to describe the
experiment as a graph of interconnected components, including CCN nodes, FIB configuration, and CCN
applications. Then, s/he instructs NEPI how to deploy the experiment. In the CUTEi testbed, NEPI uses an
SSH-based backend to automate running CCN applications (e.g., ccncat) on user containers. During and after
execution of the experiment, the user can ask NEPI to download CCN logs and other collected results from
his/her user containers.
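As a rough, NEPI-independent illustration of what such an SSH backend automates (this is not the NEPI API), the sketch below runs ccncat on a set of user containers over SSH and then collects the logs; the container addresses, key path, and content name are hypothetical.

```python
# Sketch of SSH-driven experiment automation: start a CCN application on each
# user container and download the resulting logs afterwards. NOT the NEPI API;
# hostnames, the key path, and the content name are hypothetical.
import os
import subprocess

CONTAINERS = ["10.0.1.2", "10.0.2.2", "10.0.3.2"]   # assumed user containers
KEY = os.path.expanduser("~/.ssh/cutei_user_key")   # assumed SSH key
NAME = "ccnx:/cutei/experiment/video.bin"           # assumed content name

def ssh(host, command):
    subprocess.check_call(["ssh", "-i", KEY, "user1@" + host, command])

os.makedirs("logs", exist_ok=True)

for host in CONTAINERS:
    # Fetch the content by name; keep per-node logs for later analysis.
    ssh(host, f"ccncat {NAME} > /tmp/fetched.bin 2> /tmp/ccncat.log")

for host in CONTAINERS:
    subprocess.check_call(["scp", "-i", KEY,
                           f"user1@{host}:/tmp/ccncat.log",
                           f"logs/ccncat-{host}.log"])
```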
C. Operation and Resource Management
Another challenge of the CUTEi testbed is adjusting its operation and management to support easy setup and
extension for diverse experiments. The CUTEi testbed comprises multiple sites, each of which contributes
CUTEi nodes to the testbed construction. In this testbed, three key players are identified: (1) testbed users, (2)
site administrators, and (3) central administrators.
Testbed users are granted access privileges to the assigned containers in user space. The assigned containers
might be located in different sites, and thus, these key players coordinate to assign operation privileges. The
assignment of a set of user containers to testbed users to conduct experiments is made with the site
administrators’ approval and configuration, and the central administrators allocate a common space topology
and manage the resources of the overall testbed.
More specifically, the first step a user has to take in order to launch an experimental network is to apply for his/her user container from his/her site administrators. If the user wants to obtain more user containers at other (remote) sites, the user selects the remote sites on the basis of his/her experimental scenarios and asks for his/her user containers to be created there (after receiving the site administrators’ approval). The selected remote site
administrators then create the user containers for that user and install the user’s public key on their CUTEi
nodes.
The central administrators are eligible to configure the common space. The common space, for instance,
forms the CCN forwarding topology. They also construct point-to-multipoint topologies using GRE tunnels at all CUTEi nodes. All private address ranges with the appropriate GRE tunnels are specified in the routing tables of the CUTEi nodes, so that each user container’s traffic is routable in the user space. When a user sets up his/her own CCN forwarding topology for his/her user space, s/he selects ccnd neighbors and transparently selects the corresponding GRE tunnels through routing algorithms or configurations. Another role of the central administrators is to allocate and reserve hardware and network resources. Cgroups [14], which can limit, account for, and isolate such resource usage, supports this role (an illustrative configuration is sketched below).
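As a purely illustrative example of how per-container limits can be expressed with LXC's cgroup configuration keys, the snippet below appends limits to a container's configuration file; the container name and limit values are hypothetical, not the settings actually used in CUTEi.

```python
# Sketch: append cgroup-based resource limits to an LXC container's config
# file (LXC 1.x style keys). Container name and values are hypothetical.
CONTAINER = "user1-container1"
CONFIG_PATH = f"/var/lib/lxc/{CONTAINER}/config"

LIMITS = [
    "lxc.cgroup.cpu.shares = 512",              # relative CPU weight
    "lxc.cgroup.memory.limit_in_bytes = 2G",    # cap memory usage
    "lxc.cgroup.blkio.weight = 500",            # relative disk I/O weight
]

with open(CONFIG_PATH, "a") as config:
    config.write("\n# Resource limits assigned by the site administrator\n")
    config.write("\n".join(LIMITS) + "\n")
```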
In the current operation, site administrators are responsible for user account and user container creation, both
of which can be done via a given script. The above GRE tunnel configuration is also done via a script whenever
a new CUTEi node is set up. We plan to develop a Web interface through which user accounts and containers
can be created and user space configuration (analogous to the slice configuration on PlanetLab) can be done by
the users themselves in our next phase.
In addition, CUTEi provides “user container replication and failover” functions that enhance the robustness
of the testbed. In the testbed site, the site administrator can configure master and slave (i.e., standby) machines
such that when users add/delete/modify files on their user containers in the master machine, the updated files are automatically synchronized to their user containers in the slave machine. The slave machine provides failover for the master machine and inherits all user containers, including their IP addresses, whenever it detects that the master machine has gone down. Users can therefore continue to use their user containers without extended interruptions. A simplified sketch of this mechanism follows.
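The following is a deliberately simplified sketch of the replication/failover idea rather than the actual CUTEi implementation: a slave machine mirrors the user containers' filesystems with rsync and starts them locally when the master stops responding. Hostnames, paths, and intervals are hypothetical.

```python
# Sketch: periodically mirror user-container filesystems from the master and
# start them on this slave machine if the master becomes unreachable.
# Hostnames, paths, and intervals are hypothetical assumptions.
import subprocess
import time

MASTER = "master.site.example.org"
CONTAINERS = ["user1-container1", "user2-container1"]

def master_alive():
    return subprocess.call(["ping", "-c", "1", "-W", "2", MASTER]) == 0

while True:
    if master_alive():
        for name in CONTAINERS:
            # Keep the local replica in sync with the master's container rootfs.
            subprocess.call(["rsync", "-a", "--delete",
                             f"root@{MASTER}:/var/lib/lxc/{name}/",
                             f"/var/lib/lxc/{name}/"])
    else:
        for name in CONTAINERS:
            # Failover: start the replicated containers (keeping their IPs).
            subprocess.call(["lxc-start", "-n", name, "-d"])
        break
    time.sleep(60)
```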
D. On-Memory and On-Filesystem Cache
A significant feature of the ICN architecture, in contrast to existing routing architectures, is that ICN routers hold caches for the content they forward. To satisfy the requirement for ICN-friendliness, the testbed must accommodate this new feature efficiently and realistically. However, naively equipping each container with one large in-memory cache would strain the resource usage of the testbed.
To address this issue, CUTEi implements a content store manager (csmgr) that enables an “on-filesystem cache” for the CCNx forwarding daemon (ccnd) to locate caching data on a UNIX filesystem. The ccnd running on CUTEi can select the cache type, either local memory (called on-memory cache) or the filesystem (i.e., on-filesystem cache) of the node. The on-filesystem cache prevents caches created by testbed users from occupying the memory of the nodes, which would affect experimental performance. The on-filesystem cache system accommodates two kinds of caches: “individual cache” and “shared cache.” An individual cache is accessible to one dedicated router of an individual user, while a shared cache is accessible to a set of routers in the same group, avoiding duplicated caching in the neighborhood and enabling cooperative caching. Testbed users can therefore make multiple CCN routers running on a common container and/or user container(s) share the cached content among these routers.
To implement the shared cache, csmgr provides two sub-functions, “cache expiration control” and “cache write control.” Cache expiration control enables csmgr, instead of ccnd, to expire (and discard) cached content according to its expiration time. This is necessary for the shared cache because the expiration time of content stored by one router may be updated by another router without the other routers being able to detect it. Cache write control prevents routers from writing duplicate content (chunks) into the shared cache when the same content is already stored.
In addition, csmgr has “cache buffer control,” which temporarily keeps content in memory rather than writing it directly, and writes it to the on-filesystem cache once the temporary memory space is filled (or when the received content reaches its end, or after a certain timeout). Cache buffer control reduces frequent disk I/O calls in order to improve caching performance. A simplified sketch of this write-handling logic follows.
V. DEPLOYMENT AND EXPERIMENTATION
A. Testbed Deployment
We currently have the CUTEi testbed installed at the nine sites shown in Fig. 2. Some of these sites have multiple CUTEi nodes installed, and a total of 15 common containers form the common space. As an example, the machine at the NICT site has the following specification: an Intel Xeon E5-2690 2.90 GHz CPU, 64 GB RAM, and a 4 TB HDD. Five CUTEi nodes (VMs) are launched on top of VMware ESXi installed on this machine.
B. Experiments
1) On-Filesystem Cache
We compared the on-filesystem cache with the on-memory cache by first measuring the performance of RAM, HDD, and SSD on our local PCs. The PCs are equipped with an Intel Core i5-3470T 2.9 GHz CPU, 3 MB L3 cache, and 8 GB RAM (DDR3, 1600 MHz), with either a 1 TB HDD (7200 rpm) or a 512 GB SSD (19 nm NAND flash), and run the same OS and CCNx as installed on a CUTEi node. The sustainable
memory bandwidth measured via the STREAM [15] benchmark with the Copy function was found to be
7767.8 MB/s. Measuring the I/O speeds of the HDD and SSD using Linux’s dd command, we obtained average
read/write speeds for the HDD and SSD of 130/110 MB/s and 510/310 MB/s, respectively.
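For reference, a dd-based measurement of this kind can be reproduced roughly as follows; the target path, block size, and count are hypothetical, and oflag=direct/iflag=direct are used so that the page cache does not mask the raw device speed.

```python
# Sketch: rough sequential write/read throughput measurement with dd,
# bypassing the page cache via O_DIRECT. Path and sizes are hypothetical;
# the target file should reside on the device under test.
import subprocess

TESTFILE = "/mnt/target/dd_testfile"   # assumed mount point of the device

# Sequential write: 1 GiB in 1 MiB blocks; dd reports throughput on stderr.
subprocess.run(["dd", "if=/dev/zero", f"of={TESTFILE}",
                "bs=1M", "count=1024", "oflag=direct"], check=True)

# Sequential read of the same file.
subprocess.run(["dd", f"if={TESTFILE}", "of=/dev/null",
                "bs=1M", "iflag=direct"], check=True)
```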
Next, we measured the performance of both caching mechanisms using the standard ccnd and a modified ccnd cooperating with csmgr on the above PCs. For the cache “read” performance of the caching mechanisms,
we measured the total time spent (1) receiving content requests (i.e., “interest” packets), (2) searching for the
content name and object in the cache hash table, (3) retrieving the matching content object from the cache, and
(4) inserting it into the send queue. For the cache “write” performance of the caching mechanisms, we
measured the total time spent (1) receiving content object, (2) searching for the content name and object in the
cache hash table, (3) writing the matching content object to the cache, and (4) inserting the matching content
object into the send queue. As seen in Fig. 3, cache writes are faster than cache reads. This is because these cache implementations perform content object writing to the cache (i.e., (3)
in cache write) and packet forwarding (i.e., (4) in cache write) in parallel. In addition, while the standard ccnd
accesses the memory cache whenever it receives data (content objects), csmgr reduces the frequency of I/O
processes, and hence minimizes the differences in the performance of HDD and SSD with on-filesystem cache.
Further, as seen in Fig. 3(a), regardless of the device types, the read speeds of the on-memory cache and the
on-filesystem cache are virtually the same for content sizes smaller than 20 MB. For a content size of 30 MB,
the on-memory cache exhibits a time of approximately 1750 ms, while the time exhibited for the on-filesystem
cache is approximately 2300 ms. In CCN, however, this marginal performance reduction does not result in any
significant transmission delay, as will be seen in the next section. For the write speed (Fig. 3(b)), the
performance differences on different devices are virtually negligible.
2) ccncatchunks2 on PlanetLab and CUTEi
To examine the CCN characteristics in the CUTEi testbed, we conducted experiments in our user space. As
shown in Fig. 2, we selected five PlanetLab nodes and five CUTEi nodes collocated in the same domains
(although they might be in different physical networks), and measured the data fetch performance using
ccncatchunks2 [4] on Path-A, Path-B, and Path-C. On these nodes, we installed CCNx 0.7.2 and ran the
standard CCNx components (with on-memory cache). ccncatchunks2 is a tool for retrieving content from the
origin or closest cache node. To get chunks using ccncatchunks2, the content origin (i.e., publisher) runs a pair
of ccnr (CCNx’s content repository daemon) and ccnputfile (putting 4 kB chunks into the repository). For each
test, we measured the average data fetch time for different data sizes (1, 10, 20, and 30 MB), a total of 30 separate times on different days. A sketch of the publisher and consumer setup for each run is given below.
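As a sketch only, the publisher and consumer sides of one run can be set up along the following lines; the name prefix, file, and neighbor address are hypothetical, and ccndstart/ccndc are assumed to be available in the CCNx 0.7.x tool set alongside ccnr, ccnputfile, and ccncatchunks2, which the article names explicitly.

```python
# Sketch of one publisher/consumer test run with CCNx 0.7.x tools. The name
# prefix, file, and neighbor address are hypothetical; ccndstart and ccndc
# are assumed to accompany the tools named in the article.
import subprocess
import time

NAME = "ccnx:/cutei/test/content-10MB"   # assumed content name

def publisher():
    subprocess.check_call(["ccndstart"])   # start the forwarding daemon
    subprocess.Popen(["ccnr"])             # start the content repository
    time.sleep(1)
    # Put the test file into the repository (4 kB chunks, per the article).
    subprocess.check_call(["ccnputfile", NAME, "content-10MB.bin"])

def consumer(neighbor_ip):
    subprocess.check_call(["ccndstart"])
    # Point the default route toward the next-hop CCN node on the path.
    subprocess.check_call(["ccndc", "add", "ccnx:/", "udp", neighbor_ip])
    start = time.time()
    with open("fetched.bin", "wb") as out:
        subprocess.check_call(["ccncatchunks2", NAME], stdout=out)
    print(f"data fetch time: {time.time() - start:.2f} s")
```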
We investigated two scenarios for these three paths: First, the consumer (e.g., Waseda on Path-A) initially
retrieved the data using ccncatchunks2 from the publisher (e.g., Tsinghua on Path-A). Second, the consumer
flushed its cache and again invoked ccncatchunks2 (i.e., second fetch) to retrieve the same data, but this time,
the data came from the in-network cache (e.g., NICT on Path-A).
As can be seen by comparing Figs. 4 and 5, the data fetch of ccncatchunks2 for Path-A on PlanetLab
fluctuated moderately, while that on CUTEi was stable. In addition, ccncatchunks2 on PlanetLab took much
more time than it did on CUTEi. On the other hand, for Path-B, both results obtained indicate that it is
important to define the caching location in order to improve the data fetch performance in CCN, as an
in-network cache that is close to the publisher but far from the consumer will have virtually no effect (i.e.,
NICT node for the Tsinghua node). Another interesting observation, made in both testbeds, was that for Path-C, the second fetch from the in-network cache node took a little longer than the first fetch from the origin. This requires further investigation; one possible explanation is that ccncatchunks2 is not yet optimized for a network with a longer Round-Trip Time (between the NICT node and the INRIA node).
VI. CONCLUSIONS AND OUTLOOK
In this article, we described our large-scale ICN testbed called CUTEi that employs lightweight LXC
containers and facilitates experimentation on ICN protocols/applications. Experimental scenarios are
performed on CUTEi using its two experimentation modes: (1) an application-level experimentation mode using a “common container,” or (2) a network-level experimentation mode using a “user container.” To support a large number of users, CUTEi also implements a content store manager (csmgr) that enables an on-filesystem cache and lets the CCNx forwarding daemon (ccnd) select either the on-memory or the on-filesystem cache. The on-filesystem cache also allows multiple CCN routers to share cached content.
Thus far, we have installed nine CUTEi sites and conducted experiments on the testbed. The on-filesystem
cache we have developed operates efficiently and has no major performance drawbacks. The results obtained from experiments conducted on both PlanetLab and CUTEi indicate that CUTEi is stable and yields reliable data sets. Furthermore, among the difficulties experienced while conducting our experiments on
PlanetLab was the fact that we could not use the same sites or nodes during experiments taking place over a
time period of a few days or more because they were not always stable or available. CUTEi addresses this
issue through its robustness and makes ICN experiments much easier.
VII. REFERENCES
[1] V. Jacobson, D. K. Smetters, J. D. Thornton, M. F. Plass, N. H. Briggs, and R. L. Braynard, “Networking Named Content,” Proc.
ACM CoNEXT 2009, December 2009.
[2] D. Trossen, M. Sarela, and K. Sollins, “Arguments for an Information-Centric Internetworking Architecture,” ACM SIGCOMM
CCR, vol. 40, no. 2, April 2010, pp. 26-33.
[3] B. Ahlgren, C. Dannewitz, C. Imbrenda, D. Kutscher, and B. Ohlman, “A Survey of Information-Centric Networking,” IEEE
Commun. Mag., July 2012, pp. 27-36.
[4] CCNx, available at: http://www.ccnx.org/.
[5] LXC - Linux Containers, available at: http://linuxcontainers.org/.
[6] PlanetLab, available at: http://www.planet-lab.org/.
[7] C. Cabral, C. E. Rothenberg, and M. Magalhaes, “Mini-CCNx Fast Prototyping for Named Data Networking,” Proc. ACM
SIGCOMM ICN'13 Workshop, Hong Kong, August 2013.
[8] Named Data Networking, available at: http://named-data.net/.
[9] PURSUIT, available at: http://www.fp7-pursuit.eu/PursuitWeb/.
[10] Blackadder node implementation, available at: https://github.com/fp7-pursuit/blackadder.
[11] D. Farinacci, T. Li, S. Hanks, D. Meyer, and P. Traina, “Generic Routing Encapsulation (GRE),” IETF RFC 2784, March 2000.
[12] A. Quereilhac, M. Lacage, C. Freire, T. Turletti, and W. Dabbous, “NEPI: An integration framework for Network
Experimentation,” Proc. SoftCOM 2011, Dalmatia, September 2011.
[13] Universal TUN/TAP device driver, available at: https://www.kernel.org/doc/Documentation/networking/tuntap.txt.
[14] Cgroups, available at: https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt.
[15] J. D. McCalpin, “STREAM: Sustainable Memory Bandwidth in High Performance Computers,” available at:
http://www.cs.virginia.edu/stream/.
Fig. 1. Common space composed of common containers and user-dependent user space composed of user
containers.
Fig. 2. Potential CCN topology in CUTEi testbed
Fig. 3. Performance comparison with on-memory cache and on-filesystem cache (HDD and SSD)
Fig. 4. Content object download average time using ccncatchunks2 measured on PlanetLab
Fig. 5. Content object download average time using ccncatchunks2 measured on CUTEi
Biographies:
Hitoshi Asaeda is a planning manager at Network Research Headquarters, NICT, and a
principal investigator (PI) of the ICN project in NICT. He was with IBM Japan, Ltd.
from 1991 to 2001. From 2001 to 2004, he was a research engineer specialist at INRIA,
France. He was a project associate professor at the Graduate School of Media and
Governance, Keio University, where he worked from 2005 to 2012. He chairs the ICN
WG of Asia Future Internet Forum (AsiaFI). He holds a Ph.D. from Keio University.
Ruidong Li is a researcher at Network Architecture Laboratory, NICT. He received a
B.E. from Zhejiang University, China, in 2001, and a master’s and doctorate of engineering from the University of Tsukuba in 2005 and 2008, respectively. Since 2008, he has been a member of the AKARI architecture design project at NICT. His current research
interests include information-centric networking, Internet of things, security/secure
architectures of future networks, and regional platform network.
Nakjung Choi received his B.S. and Ph.D. degrees in computer science and engineering
from Seoul National University (SNU), Seoul, Korea, in 2002 and 2009, respectively.
From September 2009 to April 2010, he was a postdoctoral research fellow in the
Network Convergence and Security Laboratory, SNU. He has been a member of technical staff at Bell Labs, Alcatel-Lucent, Korea since April 2010, and since January 2014 he has been leading the Network System and Service (NSS) team in the Network Algorithm,
Protocol, and Security (NAPS) research program.