Figure 1 - uploaded by Lorenzo Bracciale
ICN forwarding engine model and packets

Source publication
Article
Full-text available
To support content-oriented services, the routers in Information Centric Networks (ICN) have to provide packet processing functions that are more complex than those required by IP standards, making it harder to attain high forwarding rates. The ICN community is working to overcome this issue, designing new software and hardware routers for setting higher th...

Contexts in source publication

Context 1
... node uses the forwarding engine shown in figure 1 and is connected to other nodes through channels, called faces, which can be based on different transport technologies such as Ethernet, TCP/IP sockets, etc. ...
Context 2
... download a named object, a consumer issues an Interest packet, which contains the object name and is forwarded towards the producer. The forwarding process is called routing-by-name since the output face is selected through a name-based prefix matching over a Forwarding Information Base (FIB) containing name prefixes (such as video/foo and a in figure 1) and the corresponding output faces (or upstream faces). The FIB is usually configured by routing protocols, which advertise name prefixes rather than IP subnetworks [7]. ... (a minimal sketch of this prefix matching follows the contexts below)
Context 3
... due to the consumer flow control mechanism, in the uniform case IR1 and IR2 act as bottlenecks and IR3 is underutilized. On the contrary, as shown in figure 10, introducing a proper non-uniform traffic dis- ...
Context 4
... what concerns multicast, we consider the scenario depicted in figure 5(b), in which 5 consumers concurrently download the same content provided by a single producer. In figure 11 we show the throughput measured on consumers, producer and internal-routers (towards consumers). As we can see, the producer traffic is equal to that of a single consumer, confirming that the CSR preserves the ICN multicast functionality. ...
Context 5
... the test starts, the caches of the internal-nodes are empty. In figure 12 we show how the throughput varies on the consumer and on the producer versus time. As we can see, content is transmitted only once from the producer, since the second request is completely served by the caches of internal-routers, thus demonstrating that the CSR preserves caching functionality. ...
Context 6
... internal-nodes are empty. In figure 12 we show how the throughput varies on the consumer and on the producer versus time. As we can see, content is transmitted only once from the producer, since the second request is completely served by the caches of internal-routers, thus demonstrating that the CSR preserves caching functionality. Fig. 13 reports the cache size of all internal-routers. During the ...
Context 7
... is disabled on all nodes. Figure 14 shows the aggregated throughput as seen by all the consumers. In the left-hand part of the plot we perform a scale-out operation by varying the number of internal-routers from one to three, making a change every 20 seconds. ...
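As a minimal illustration of the routing-by-name step described in Context 2, the Python sketch below performs longest-prefix matching of a hierarchical content name against a FIB table. The FIB entries reuse the video/foo and a prefixes of figure 1, but the face identifiers and the lookup_fib function are illustrative assumptions, not code from the paper.

```python
# Minimal sketch of routing-by-name: longest-prefix matching of a hierarchical
# content name against a FIB mapping name prefixes to output faces.
# FIB entries and face identifiers are illustrative only.

def lookup_fib(fib, name):
    """Return the output face for the longest matching name prefix, or None."""
    components = name.split("/")
    # Try progressively shorter prefixes: video/foo/chunk -> video/foo -> video
    for length in range(len(components), 0, -1):
        prefix = "/".join(components[:length])
        if prefix in fib:
            return fib[prefix]
    return None  # no matching route for this Interest

fib = {"video/foo": "face-2", "a": "face-1"}  # prefixes as in figure 1
print(lookup_fib(fib, "video/foo/chunk/17"))  # -> face-2
print(lookup_fib(fib, "a/object"))            # -> face-1
print(lookup_fib(fib, "news/today"))          # -> None
```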

Similar publications

Article
Full-text available
In-network caching as a core function in ICN can greatly improve the content distribution efficiency and reduce redundant network traffic. It is a challenge to design caching strategies in ICN to improve the performance of in-network caching. Traditional lightweight strategies are inefficient due to problems such as edge caching pollution and slow...

Citations

... NDN routers have the ability to cache data while ensuring adequate bandwidth to meet the users' demands. This makes the technology highly efficient and reliable in managing and delivering content to its users [16]. Name-based communication relies heavily on naming, as it directly influences routing, forwarding, and caching mechanisms. ...
Article
Full-text available
Internet of things (IoT) has emerged as a quintessential paradigm of communication systems. Current literature introduces the notion of a named data network for IoT (NDN-IoT), optimizing IoT communication by employing name-based networking. However, the advancements introduced by this approach are inadequate when dealing with URL-based naming and forwarding. For instance, the length of and ambiguities in content names are still open challenges. In addition, the intelligent exploration of content names to discern a forwarding clue is a significant research gap. To achieve intelligent communication, understanding the interest name and acquiring a forwarding clue is crucial. Focusing on this gap, an intelligent naming scheme called INF-NDN IoT is proposed, coupled with a corresponding forwarding mechanism. The proposed INF-NDN IoT improves the NDN naming schemas by utilizing natural language processing (NLP) techniques and selecting supernodes and ordinary nodes in the network. INF-NDN IoT assigns (forwarding clue) semantic tags to content names as well as to supernodes, which in turn perform the semantic forwarding. Experimental results have shown that INF-NDN IoT outperforms existing work, with better results in terms of name length, name memory utilization, interest satisfaction rate, retrieval time, hop count, and energy consumption.
... NDN is impacted by a scaling issue with routing table size. Additionally, when providers relocate to a new location, the name system presents major scalability issues [14]. ...
Article
Full-text available
The internet’s future architecture, known as Named Data Networking (NDN), is a creative way to offer content-based services. NDN is more appropriate for content distribution because of its special characteristics, such as naming conventions for packets and methods for in-network caching. Mobility is one of the main study areas for this innovative internet architecture. The software-defined networking (SDN) method, which is employed to provide mobility management in NDN, is one of the feasible strategies. Decoupling the network control plane from the data plane creates an improved programmable platform and makes it possible for outside applications to specify how a network behaves. SDN yields a straightforward and scalable network due to its key characteristics, including programmability, flexibility, and decentralized control. To address the problem of consumer mobility, we propose an efficient SDPCACM (software-defined proactive caching architecture for consumer mobility) in NDN that extends the SDN model to allow mobility control for the NDN architecture (NDNA), through which the MC (mobile consumer) receives data proactively after handover while the MC is moving. When an MC watching a real-time video moves from one attachment point to another, the SDN controllers preserve the network layout, topology, and link metrics in order to install updated routes when the handoff or handover occurs, and through the proactive caching mechanism the previous access router proactively sends the desired packets to the newly connected routers. Furthermore, the intra-domain and inter-domain handover processing situations in the SDPCACM for NDNA are described here in detail. Moreover, we conduct a simulation of the proposed SDPCACM for NDN that offers an illustrative methodology and parameter configuration for virtual machines (VMs), OpenFlow switches, and an ODL controller. The simulation results demonstrate that the proposed scheme yields significant improvements in terms of CPU usage, delay, jitter, throughput, and packet loss ratio.
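To make the proactive caching step concrete, here is a toy Python sketch (not code from the cited work): on a handover event, the previous access router pushes the Data packets it already holds for the mobile consumer toward the new access router, so the content is waiting when the consumer re-attaches. The AccessRouter class, the handover function, and the content names are hypothetical placeholders.

```python
# Toy illustration of proactive caching on handover: the previous access
# router (PAR) pushes the mobile consumer's pending Data to the new access
# router (NAR). Classes and names are placeholders, not an API of the paper.

class AccessRouter:
    def __init__(self, name):
        self.name = name
        self.cache = {}  # content name -> Data packet

    def receive(self, content_name, data):
        self.cache[content_name] = data

def handover(prev_router, new_router, pending_names):
    """Proactively transfer the consumer's pending content to the new router."""
    for content_name in pending_names:
        if content_name in prev_router.cache:
            new_router.receive(content_name, prev_router.cache[content_name])

par, nar = AccessRouter("PAR"), AccessRouter("NAR")
par.receive("video/live/seg/42", b"data-42")
handover(par, nar, ["video/live/seg/42", "video/live/seg/43"])
print(list(nar.cache))  # seg/42 is already waiting at the new access router
```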
... NDN is identified as meeting the requirements of newly developed Internet applications [23]. NDN routers cache data while satisfying bandwidth demands [24]. NDN employs a hop-by-hop transmission method as an alternative to the usual paradigm. ...
Article
Full-text available
Named Data Networking (NDN) is developed to accommodate future internet traffic. In recent years, NDN’s popularity has grown due to the evolution of the Internet of Things, Artificial Intelligence, Cloud Services, and Blockchain. As part of Future Internet Architecture, the NDN architecture enables dynamic content management, mobility, privacy, and trustworthiness. However, there has yet to be a survey manuscript exploring NDN in the context of these contemporary technologies and applications. Hence, this manuscript comprehensively highlights these motivations and promises. Overall, Blockchain and 5G technology have dominated the trend with 73% of research interest, while transportation applications dominate 64% of the research interest. These significant insights show the emergence of NDN and its potential to help revolutionize future internet traffic.
... However, the scheme does not consider the content retrieval pattern during content placement, which results in poor utilization of caching resources, as rarely accessed contents are also cached by the on-path routers. The scheme in [53] designed a caching strategy based on the Cluster-based Scalable Router (CSR). Each CSR consists of several routers that are externally seen as a single router and internally deploy a load-balancing strategy among them. ...
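As a rough illustration of how a cluster of routers can appear externally as one router while internally spreading load, the sketch below hashes each content name to one internal router, so all requests for the same name land on the same node. The hash-based policy and the router names are assumptions made only for illustration; the actual CSR load-balancing strategy is the one defined in [53].

```python
# Illustrative name-based dispatching inside a cluster that is externally seen
# as a single router. The hash policy is an assumption, not the CSR algorithm.
import hashlib

def dispatch(name, internal_routers):
    """Map a content name to one internal router, so requests for the same
    name always reach the same router and its cache stays effective."""
    digest = hashlib.sha1(name.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(internal_routers)
    return internal_routers[index]

routers = ["IR1", "IR2", "IR3"]
print(dispatch("video/foo/chunk/17", routers))
print(dispatch("video/foo/chunk/17", routers))  # same name -> same router
```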
... Moreover, caching in intermediate routers helps in delivering contents to consumers quickly [7][8][9]. This improves network utilization and reduces end-to-end delay but imposes numerous challenges in terms of system design and deployment. Due to limited storage in intermediate routers, it is not possible to store each and every chunk of data. ...
Article
Full-text available
Information-centric network (ICN) emphasizes content retrieval without much concern about the location of its actual producer. This novel networking paradigm makes content retrieval faster and less expensive by shifting data provisioning to the content holder rather than the content owner. Caching is the feature of ICN that makes content serving possible from any intermediate device. Efficient caching is one of the primary requirements for effective deployment of ICN. In this paper, a caching approach with balanced content distribution among network devices is proposed. The selection of contents to be cached is determined through a universal popularity measure computed using Zipf's law. The dynamic change in popularity of contents is also considered when making caching decisions. For balancing the cached content across the network, every router keeps track of its neighbors' cache status. Three parameters, the proportionate distance of the router from the client (pd), the router congestion (rc), and the cache status (cs), are considered to select a router for caching contents. The new caching approach is evaluated in a simulated environment using ndnSIM-2.0. Three state-of-the-art approaches, Leave Copy Everywhere (LCE), the centrality measures-based algorithm (CMBA), and probability-based caching (probCache), are considered for comparison. The proposed caching method shows better performance than the other three protocols used in the comparison.
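The two ingredients named in the abstract can be sketched as follows: a Zipf's-law popularity ranking to decide which contents are worth caching, and a per-router score combining pd, rc, and cs to decide where to cache them. The multiplicative combination and the sign conventions below are assumptions made only for illustration; the cited paper defines the actual decision rule.

```python
# Sketch of the two ingredients described above: Zipf's-law content popularity
# and a per-router score built from pd, rc and cs. The equal, multiplicative
# weighting is an assumption, not the paper's actual rule.

def zipf_popularity(num_contents, alpha=1.0):
    """Request probability of the k-th most popular content (k = 1..N)."""
    weights = [1.0 / (k ** alpha) for k in range(1, num_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def router_score(pd, rc, cs):
    """Combine proportionate distance (pd), congestion (rc) and cache status
    (cs), each assumed normalized to [0, 1]. Assumption: a smaller distance
    from the client, lower congestion and more spare cache score higher."""
    return (1.0 - pd) * (1.0 - rc) * (1.0 - cs)

popularity = zipf_popularity(num_contents=1000, alpha=0.8)
print(popularity[:3])                          # the head of the Zipf curve
print(router_score(pd=0.3, rc=0.2, cs=0.4))    # score of one candidate router
```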
... As a potential future architecture, the ICN opens up ample opportunities for research, as none of its protocols is mature enough yet. Out of many such areas of research, naming (Mangili, Martignon, and Capone 2015), caching (Abdullahi, Arif, and Hassan 2015; Mick, Tourani, and Misra 2016; Nguyen et al. 2019), routing (Banerjee, Kulkarni, and Seetharam 2018; Coulom 2007; Detti et al. 2018; Modesto and Boukerche 2018; Wang et al. 2012) and security (Goleman, Boyatzis, and Mckee 2019) are a few of the topics that are drawing the attention of many researchers. The work discussed in this paper is focused on ICN routing in general and forwarding Interest packets in particular. ...
Article
Full-text available
A consumer in an Information Centric Network (ICN) generates an Interest packet by specifying the name of the required content. As the network emphasizes content retrieval without much concern about who serves it (a cache location or the actual producer), every Content Router (CR) either provides the requested content back to the requester (if it exists in its cache) or forwards the Interest packet to the nearest CR. While forwarding an Interest packet, ICN routing by default does not provide any mechanism to predict the probable location of the searched content. However, having a predictive model before forwarding may significantly improve content retrieval performance. In this paper, a machine learning (ML) algorithm, specifically a Support Vector Machine (SVM), is used to forecast the success of the Interest packet. A CR can then send an Interest packet on the outgoing interface that is forecast to be successful. The objective is to maximize the success rate, which in turn minimizes content search time and maximizes throughput. The dataset used is generated from a simulation topology designed in ndnSim and comprises 10 K data points with 10 features. The linear, RBF, and polynomial (degree 3) kernels are used to analyze the dataset. The polynomial kernel shows the best behavior with 98% accuracy. A comparison of retrieval time with and without ML demonstrates around 10% improvement with SVM-enabled forwarding compared to normal ICN forwarding.
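A minimal scikit-learn sketch of the forecasting step is shown below. The synthetic features and labels only stand in for the 10-feature ndnSim dataset mentioned in the abstract, while the pipeline (feature scaling plus a degree-3 polynomial-kernel SVC) mirrors the kernel choice reported there.

```python
# Sketch of SVM-based forecasting of Interest success with scikit-learn.
# The data here is synthetic; the cited work uses a 10-feature ndnSim dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((10_000, 10))               # 10 K samples, 10 features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # placeholder "Interest satisfied" label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# At forwarding time, a CR would send the Interest only on an interface whose
# feature vector the model predicts as successful:
# model.predict(interface_features.reshape(1, -1))
```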
... To alleviate load-imbalance issues and reduce excessive caching operations, several cluster-based caching schemes have also been proposed in the CCN [35][36][37][38]. The Hierarchical Cluster-based Caching (HCC) scheme [35] partitioned the network routers into core routers and edge routers. ...
... The scheme performs caching operations using the partition information and the content popularity in the network. A cluster-based scalable scheme is suggested in [38] that groups physical routers together so that they are seen as a single unit by outside nodes, while internally the traffic load is distributed among the physical routers. ...
Article
Full-text available
Content-Centric Networking (CCN) has emerged as a potential Internet architecture that supports a name-based content retrieval mechanism, in contrast to the current host-location-oriented IP architecture. The in-network caching capability of CCN ensures higher content availability and lower network delay, and leads to server load reduction. It was observed that caching the contents on each intermediate node does not use the network resources efficiently. Hence, efficient content caching decisions are crucial to improve the Quality-of-Service (QoS) for the end-user devices and to improve network performance. Towards this, a novel content caching scheme is proposed in this paper. The proposed scheme first clusters the network nodes based on the hop count and bandwidth parameters to reduce content redundancy and caching operations. Then, the scheme takes content placement decisions using the cluster information, content popularity, and the hop count parameters, where the caching probability improves as the content traverses toward the requester. Hence, using the proposed heuristics, the popular contents are placed near the edges of the network to achieve a high cache hit ratio. Once the cache becomes full, the scheme implements the Least-Frequently-Used (LFU) replacement scheme to substitute the least accessed content in the network routers. Extensive simulations are conducted and the performance of the proposed scheme is investigated under different network parameters, demonstrating the superiority of the proposed strategy w.r.t. the peer competing strategies.
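Since the scheme falls back on LFU replacement once a cache is full, a minimal LFU cache sketch is shown below; the capacity, content names, and class interface are illustrative and not part of the cited scheme.

```python
# Minimal Least-Frequently-Used (LFU) cache: when full, evict the content
# with the fewest accesses. Capacity and names are illustrative only.
from collections import Counter

class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}        # content name -> Data
        self.freq = Counter()  # content name -> access count

    def get(self, name):
        if name in self.store:
            self.freq[name] += 1
            return self.store[name]
        return None

    def put(self, name, data):
        if name not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda k: self.freq[k])  # least accessed
            del self.store[victim]
            del self.freq[victim]
        self.store[name] = data
        self.freq[name] += 1

cache = LFUCache(capacity=2)
cache.put("video/foo/1", b"d1")
cache.get("video/foo/1")         # bump the access count of chunk 1
cache.put("video/foo/2", b"d2")
cache.put("video/foo/3", b"d3")  # evicts chunk 2, the least-frequently used
print(list(cache.store))         # ['video/foo/1', 'video/foo/3']
```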
... To reduce content redundancy and network traffic, several network-partitioning-based collaborative caching schemes have been proposed in various research papers, such as Yan et al. (2017) and Detti et al. (2018). A two-level network planning approach (Ma et al. 2014) has been recommended in Ma et al. (2017) for load balancing. ...
... The partitioning scheme discussed in Detti et al. (2018) provides a load-balancing mechanism within the partitions, while the partitions are visible as a single ICN router to external partitions. In Hasan and Jeong (2018), the authors suggest performing network partitioning based on the number of hops among routers, where each partition contains those devices that are one hop away from each other. ...
Article
Full-text available
Internet of Things (IoT) has emerged as a novel paradigm that focuses on connecting a large number of devices with the Internet infrastructure. To address the performance requirements of IoT devices, Content-Centric Networking (CCN) has become an encouraging future Internet architecture that emphasizes name-based content access instead of searching for the host location in the network. In-network content caching is an essential characteristic for rapid information dissemination and efficient content delivery in the CCN. To this end, a novel content caching scheme has been proposed for comprehensive utilization of the available caching resources. The proposed scheme partitions the CCN-enabled IoT networks hierarchically to reduce content redundancy and excessive cache replacement operations. For content caching decisions, the proposed caching strategy considers normalized distance-based metrics along with dynamic threshold heuristics to reduce content retrieval delay by placing the contents near the IoT devices. Extensive simulation analysis on realistic network configurations demonstrates that the proposed caching scheme outperforms the existing competing content placement strategies on performance parameters such as network cache hit-ratio, hop-count, delay, and average network traffic. Thus, the proposed caching scheme becomes more promising for CCN-based IoT applications.
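A very small sketch of a normalized-distance caching test against a threshold is given below; the normalization (hops from the requester divided by path length) and the fixed threshold value are assumptions made for illustration, whereas the cited paper defines its own metrics and dynamic threshold heuristics.

```python
# Illustrative caching test: cache a content only on routers whose normalized
# distance from the requester falls below a threshold, which biases copies
# toward the IoT devices. The threshold here is fixed; the cited scheme
# adapts it dynamically.

def should_cache(hops_from_requester, path_length, threshold=0.4):
    """True if this router is close enough to the requester to cache."""
    normalized_distance = hops_from_requester / path_length  # 0 = at requester
    return normalized_distance <= threshold

# On a 5-hop path from requester to producer, only the first two routers cache.
for hop in range(1, 6):
    print(hop, should_cache(hop, 5))
```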
... The first phase is interest routing, which forwards the interest request to the content provider with the help of the CS, PIT, and FIB; the second phase is data routing, which distributes the requested content to the interest requester with the help of the CS and PIT. For the first phase, interest routing, the previous strategies [6,7] are usually based on heuristic interest forwarding that looks for a relatively optimal outgoing interface, which makes it very difficult to guarantee routing efficiency and stability. Therefore, this paper uses Reinforcement Learning (RL) and a Neural Network (NN) to optimize the first-phase interest routing. ...
... The proposed RLNN is implemented over NDNSIM, which is a professional simulator for ICN. In order to verify the feasibility and efficiency of RLNN, references [6] and [13] are selected as the baselines, abbreviated as ACS and ACOIR respectively, with routing success rate and running time used as the two performance evaluation metrics. For 10 groups of different interest requests, ranging from 100 to 1000, the experimental results on routing success rate and running time are shown in Figures 1 and 2, respectively. ...
Article
Information-Centric Networking (ICN) provides network infrastructure services to distribute and retrieve content in a more efficient way, in which the content is the only abstract entity and can take an arbitrary form. In particular, ICN routing plays an important role in supporting interest requests and content distribution. Different from previous routing proposals, this paper presents a Reinforcement Learning (RL) and Neural Network (NN) based ICN routing strategy to improve distribution efficiency and stability. RL is used to obtain stable routing, while the NN is used to predict network delay and enhance routing efficiency. The simulation experiments are driven by NDNSIM, and the results show that the proposed ICN routing strategy is feasible and efficient.
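To illustrate the RL side of such a strategy, the sketch below keeps a bandit-style value estimate for each (name prefix, outgoing face) pair and updates it from observed delays. The epsilon-greedy policy, learning rate, and face names are assumptions, and the neural-network delay predictor of the cited work is deliberately omitted.

```python
# Bandit-style value learning for choosing an outgoing face per name prefix,
# using negative observed delay as the reward. The NN delay predictor used by
# the cited strategy is omitted; policy and parameters are assumptions.
import random
from collections import defaultdict

Q = defaultdict(float)        # (prefix, face) -> estimated value
ALPHA, EPSILON = 0.1, 0.1
FACES = ["face-1", "face-2", "face-3"]

def choose_face(prefix):
    if random.random() < EPSILON:                    # explore
        return random.choice(FACES)
    return max(FACES, key=lambda f: Q[(prefix, f)])  # exploit the best face

def update(prefix, face, delay):
    reward = -delay                                  # lower delay = higher reward
    Q[(prefix, face)] += ALPHA * (reward - Q[(prefix, face)])

# Toy loop: face-2 has the lowest delay, so learning converges to it.
random.seed(0)
for _ in range(2000):
    face = choose_face("video/foo")
    delay = {"face-1": 30.0, "face-2": 10.0, "face-3": 20.0}[face]
    update("video/foo", face, delay + random.gauss(0, 1))
print(max(FACES, key=lambda f: Q[("video/foo", f)]))  # -> face-2
```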
... Thus, for saving energy in an elastic cluster, only some nodes keep working, and the others are maintained in a low-power state. In general, a traditional elastic cluster system [10]-[12] can be considered as a two-layer structure, where the first layer is a manager (or several managers for large-scale systems), and the second layer is the working nodes. Because the manager and working nodes can maintain their respective queues to accumulate the requests that form the system workload, the requests are generally queued and served in the mode of N-N service queues (called N-N queues for short). ...
Article
Full-text available
The clusters in a blockchain computing system can be constructed to be elastic, thus supporting scalable computing and improving energy efficiency. To form an elastic cluster, the service nodes are dynamically divided into working nodes and reserved nodes. Specifically, the working nodes are active to meet the computing requirements of workloads, while the reserved nodes are switched to a low-power state for energy saving. Traditionally, workloads are distributed to working nodes in the mode of N-N service queues. But in this mode, the Quality of Service (QoS) of different working nodes may be diverse, because the requirements of the requests accumulated at different working nodes vary. As a result, the overall system capability is not sufficiently utilized, and the overall system QoS is dragged down. In this paper, we propose an N-1 queueing and on-demand resource provisioning method to process workloads in the mode of N-1 service queues. Different from N-N service queues, N-1 service queues prohibit the accumulation of requests in working nodes. Thus, once there are idle working nodes, waiting requests can immediately be delivered to them. As a result, all the working nodes are sufficiently utilized, and the overall QoS is improved. Accordingly, after using the N-1 service queues, fewer working nodes are enough to meet the same Service Level Agreement (SLA) on the same workloads. In addition, by using a resource demand monitor module, our method dynamically readjusts the number of working nodes to match workload demand. Finally, the energy efficiency of an elastic cluster can be measurably improved, because fewer working nodes are powered on while the same SLA is still met.
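A toy simulation of the N-1 idea, a single shared queue at the manager with requests handed out only as working nodes become idle, so that no node accumulates a private backlog, is sketched below. The arrival and service rates, the node count, and the simulate_n1 function are arbitrary assumptions used only for illustration.

```python
# Toy simulation of an N-1 service queue: one shared FCFS queue, requests
# dispatched to whichever working node becomes idle first. Rates are arbitrary.
import heapq
import random

def simulate_n1(num_nodes, num_requests, arrival_rate=5.0, service_rate=4.0, seed=0):
    random.seed(seed)
    idle_at = [0.0] * num_nodes          # time at which each node becomes idle
    heapq.heapify(idle_at)
    arrival, sojourn = 0.0, 0.0
    for _ in range(num_requests):
        arrival += random.expovariate(arrival_rate)   # next request arrives
        start = max(arrival, heapq.heappop(idle_at))  # wait for the first idle node
        finish = start + random.expovariate(service_rate)
        heapq.heappush(idle_at, finish)
        sojourn += finish - arrival
    return sojourn / num_requests                     # mean response time

print("mean response time:", simulate_n1(num_nodes=4, num_requests=10_000))
```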