Article

Abstract

The Service Function Chaining (SFC) paradigm improves network capabilities thanks to its support for application-driven networking, which is realized through the invocation of an ordered set of Service Functions (SFs). The programmability and flexibility provided by emerging technologies such as Software-Defined Networking (SDN) and Network Function Virtualization (NFV) are ideal features for efficiently managing the lifecycle of SFCs. However, the limited Ternary Content Addressable Memory (TCAM) of SDN nodes in SDN-based SFC scenarios can lead to network performance degradation when the SFC classifier is unable to install new classification rules. To tackle the Dynamic Chain Request Classification Offloading (D-CRCO) problem presented in this paper, a hybrid eviction and split-and-distribute approach is proposed, where i) the dynamic behaviour of SFC requests is exploited by removing the corresponding idle rules from the flow tables when necessary; and ii) SFC classification is not forced to be carried out by the ingress node, but can be performed by any transit node in the domain. An Integer Linear Programming (ILP) formulation and a heuristic are provided to solve D-CRCO, with the goal of maximizing the number of SFC requests that can be served while respecting TCAM size, link capacity, and SF availability constraints. Simulation and emulation results over two real topologies show that the proposed solution significantly increases the number of served SFC requests with a negligible impact on network performance.
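The hybrid eviction and split-and-distribute idea can be sketched as a greedy walk over a request's path: install the classification rule at the first node with a free TCAM slot, evicting an idle rule if that frees one, and reject the request only when no node on the path can host the rule. The data model and names below are our own minimal illustration, not the paper's ILP or heuristic:

```python
def place_classification_rule(path, tables, capacity, idle):
    """Return the node on `path` that should host the new classification
    rule, or None if the request must be rejected.

    path     -- ordered list of nodes (ingress first, then transit nodes)
    tables   -- dict: node -> set of installed rules
    capacity -- dict: node -> TCAM size of that node
    idle     -- dict: node -> set of rules currently idle (evictable)
    """
    for node in path:
        # Split-and-distribute: any node on the path may classify.
        if len(tables[node]) < capacity[node]:
            return node  # free TCAM slot available here
        if idle[node]:
            # Eviction: remove one idle rule to make room.
            victim = next(iter(idle[node]))
            tables[node].discard(victim)
            idle[node].discard(victim)
            return node
    return None  # every node on the path is full of active rules
```

The caller would then install the rule at the returned node; the real D-CRCO formulation additionally accounts for link capacity and SF availability constraints.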


Article
With the growing popularity of immersive interaction applications, e.g., industrial teleoperation and remote surgery, the service demand on communication networks is shifting from packet delivery to remote control-based communication. The Tactile Internet (TI) is a promising paradigm of remote control-based wireless communication service, which enables tactile users to perceive, manipulate, or control real and virtual objects in perceived real time. To support TI, ultra-reliable and low-latency communication service is required. However, the multi-tactile to multi-teleoperator interactive property, in-network computing demand, ordered service function chaining (SFC) requirement, and other features of TI challenge ultra-low latency provisioning. This paper studies the joint wireless resource allocation and SFC routing and scheduling problem for end-to-end delay reduction. We first formulate the problem as an end-to-end delay minimization problem subject to SFC and wireless resource constraints. Then, a distributed and cooperative scheme, which consists of the min–max (MM) wireless resource allocation algorithm and the delay-aware scheduling (DAS) of SFC algorithm, hence called MM-DAS, is proposed to address the problem. In MM-DAS, we use the MM algorithm for uplink/downlink communication at the wireless edge, while DAS solves the virtual network function (VNF) mapping and scheduling at the wireless core, with the goal of providing low end-to-end delay to tactile–teleoperator pairs. Simulation results illustrate the efficiency of the proposal for low end-to-end delay provisioning in TI environments.
Article
Service chaining is attracting attention as a promising technology for providing a variety of network services by applying virtual network functions (VNFs) that can be instantiated on commercial off-the-shelf servers. The data transmission for each service chain has to satisfy the quality of service (QoS) requirements in terms of the loss probability and transmission delay, and hence the amount of resources for each VNF is expected to be sufficient for satisfying the QoS. However, increasing the amount of VNF resources results in a high cost for improving the QoS. To reduce the cost of utilizing a VNF, sharing VNF instances among multiple service chains is an effective approach. However, the number of packets arriving at the shared VNF instance increases, resulting in QoS degradation. It is therefore important to select which VNF instances are shared by multiple service chains and to determine the amount of resources for the selected VNFs. In this paper, we propose a cost-effective service chain construction with a VNF sharing model. In the proposed method, each VNF is modeled as an M/M/1/K queueing system to evaluate the relationship between the amount of resources and the loss probability. The proposed method determines the VNF sharing, the VNF placement, the amount of resources for each VNF, and the transmission route of each service chain; these decisions are computed by our proposed heuristic algorithm for the optimization problem. We evaluate the performance of the proposed method through simulation, and the numerical examples show its effectiveness under certain network topologies.
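The core quantity in the M/M/1/K model above is the blocking (loss) probability, i.e. the chance that an arriving packet finds the K-sized system full. A small helper using the standard closed-form expression (our illustration, not the paper's code):

```python
def mm1k_loss_probability(arrival_rate: float, service_rate: float, k: int) -> float:
    """Blocking probability of an M/M/1/K queue: the probability that an
    arriving packet is dropped because the system already holds K packets.
    Allocating more resources to a VNF raises service_rate and lowers the loss."""
    rho = arrival_rate / service_rate  # offered load
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (k + 1)           # limit of the formula as rho -> 1
    return (1.0 - rho) * rho ** k / (1.0 - rho ** (k + 1))
```

For example, `mm1k_loss_probability(1.0, 2.0, 2)` evaluates to 1/7: at load 0.5 with room for only two packets, about 14% of arrivals are lost, which quantifies the QoS penalty of sharing an instance among chains.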
Article
Full-text available
Cloud computing has been developed as a means to allocate resources efficiently while maintaining service-level agreements by providing on-demand resource allocation. As reactive strategies cause delays in the allocation of resources, proactive approaches that use predictions are necessary. However, due to high variance of cloud host load compared to that of grid computing, providing accurate predictions is still a challenge. Thus, in this paper we have proposed a prediction method based on Long Short-Term Memory Encoder–Decoder (LSTM-ED) to predict both mean load over consecutive intervals and actual load multi-step ahead. Our LSTM-ED-based approach improves the memory capability of LSTM, which is used in the recent previous work, by building an internal representation of time series data. In order to evaluate our approach, we have conducted experiments using a 1-month trace of a Google data centre with more than twelve thousand machines. Our experimental results show that while multi-layer LSTM causes overfitting and decrease in accuracy compared to single-layer LSTM, which was used in the previous work, our LSTM-ED-based approach successfully achieves higher accuracy than other previous models, including the recent LSTM one.
Article
Full-text available
Network function virtualization (NFV) enables flexible deployment of virtual network functions (VNFs) in 5G mobile communication networks. Due to the inherent dynamics of network flows, fluctuating resources are required to embed VNFs. VNF migration has become a critical issue because of these time-varying resource requirements. In this paper, we propose a real-time VNF migration algorithm based on the deep belief network (DBN) to predict future resource requirements, which resolves the lack of effective prediction in existing methods. Firstly, we propose optimizing bandwidth utilization and migration overhead simultaneously in VNF migration. Then, to model the resource utilization that evolves over time, we adopt online learning with the assistance of offline training in the prediction mechanism, and further introduce multi-task learning (MTL) into our deep architecture in order to improve the prediction accuracy. Moreover, we utilize an adaptive learning rate to speed up the convergence of the DBN. For the migration, we design a topology-aware greedy algorithm with the goal of optimizing system cost by taking full advantage of the prediction result. In addition, the proposed migration mechanism is further optimized based on tabu search. Simulation results show that the proposed scheme achieves good performance in reducing system cost and improving service-level agreement (SLA) compliance.
Article
Full-text available
We propose an algorithm for cloud and bandwidth resource allocation in multi-provider NFV environments. The resources are allocated so as to take into account the different costs charged by the cloud Infrastructure Providers (InPs). The effectiveness of the proposed algorithm is confirmed by comparison with the results of the optimal formulation. Its application in medium and large networks has shown that it can lead to cost savings as high as 65% with respect to algorithms that allocate resources without taking into account the cost differences charged by the InPs.
Article
Full-text available
The Network Function Virtualization (NFV) technology aims at virtualizing network services by executing the individual service components in Virtual Machines activated on Commercial-off-the-shelf (COTS) servers. Any service is represented by a Service Function Chain (SFC), that is, a set of VNFs to be executed according to a given order. Running VNFs requires the instantiation of VNF instances (VNFIs), which in general are software components executed on Virtual Machines. In this paper we address the routing and resource dimensioning problem in NFV architectures. We formulate the optimization problem and, due to its NP-hard complexity, propose heuristics for both the offline and online traffic demand cases. We show that the heuristics work correctly by guaranteeing a uniform occupancy of the server processing capacity and the network link bandwidth. A consolidation algorithm for power consumption minimization is also proposed. The application of the consolidation algorithm allows for high power consumption savings, which however come at the price of an increase in SFC blocking probability.
Article
Full-text available
Software-Defined Networking (SDN) abstracts low-level network functionalities to simplify network management and reduce costs. The OpenFlow protocol implements the SDN concept by abstracting network communications as flows to be processed by network elements. In OpenFlow, high-level policies are translated into network primitives called rules that are distributed over the network. While the abstraction offered by OpenFlow makes it possible to implement virtually any policy, it raises the new question of how to define the rules and where to place them in the network while respecting all technical and administrative requirements. In this paper, we propose a comprehensive study of the so-called OpenFlow rules placement problem with a survey of the various proposals intending to solve it. Our study is multi-fold. First, we define the problem and its challenges. Second, we overview the large number of solutions proposed, with a clear distinction between solutions focusing on memory management and those proposing to reduce signaling traffic to ensure scalability. Finally, we discuss potential research directions around the OpenFlow rules placement problem.
Article
Full-text available
Network Function Virtualization (NFV) has drawn significant attention from both industry and academia as an important shift in telecommunication service provisioning. By decoupling Network Functions (NFs) from the physical devices on which they run, NFV has the potential to lead to significant reductions in Operating Expenses (OPEX) and Capital Expenses (CAPEX) and facilitate the deployment of new services with increased agility and faster time-to-value. The NFV paradigm is still in its infancy and there is a large spectrum of opportunities for the research community to develop new architectures, systems and applications, and to evaluate alternatives and trade-offs in developing technologies for its successful deployment. In this paper, after discussing NFV and its relationship with complementary fields of Software Defined Networking (SDN) and cloud computing, we survey the state-of-the-art in NFV, and identify promising research directions in this area. We also overview key NFV projects, standardization efforts, early implementations, use cases and commercial products.
Article
Full-text available
Software-Defined Networking (SDN) enables fine-grained policies for firewalls, load balancers, routers, traffic monitoring, and other functionality. While Ternary Content Addressable Memory (TCAM) enables OpenFlow switches to process packets at high speed based on multiple header fields, today's commodity switches support just thousands to tens of thousands of rules. To realize the potential of SDN on this hardware, we need efficient ways to support the abstraction of a switch with arbitrarily large rule tables. To do so, we define a hardware-software hybrid switch design that relies on rule caching to provide large rule tables at low cost. Unlike traditional caching solutions, we neither cache individual rules (to respect rule dependencies) nor compress rules (to preserve the per-rule traffic counts). Instead we "splice" long dependency chains to cache smaller groups of rules while preserving the semantics of the network policy. Our design satisfies four core criteria: (1) elasticity (combining the best of hardware and software switches), (2) transparency (faithfully supporting native OpenFlow semantics, including traffic counters), (3) fine-grained rule caching (placing popular rules in the TCAM, despite dependencies on less-popular rules), and (4) adaptability (to enable incremental changes to the rule caching as the policy changes).
Article
Network function virtualization (NFV) has emerged as a new technology to reduce the cost of hardware deployment. It is an architecture in which services are realized by virtualized functions running on virtual machines instead of dedicated hardware. Although NFV brings more opportunities to enhance the flexibility and efficiency of the network, resource allocation problems must be carefully considered. In this paper, we investigate the virtual network function (VNF) resource allocation problem to minimize the network operation cost for different services. Both setting the VNF instances for each virtual machine and allocating the traffic volume in the network are considered. The problem is formulated as a mixed integer programming problem. Although it can be solved in a centralized fashion, which requires a central controller to collect information from all virtual machines, this is not practical for large-scale networks. Thus, we propose a distributed iterative algorithm to achieve the optimal solution. The proposed algorithm framework is developed based on the joint Benders decomposition and alternating direction method of multipliers (ADMM), which allows us to deal with integer variables and decompose the original problem into multiple subproblems, one for each virtual machine. Furthermore, we describe the detailed implementation of our algorithm running on a computer cluster using the Hadoop MapReduce software framework. Finally, the simulation results indicate the effectiveness of the algorithm.
Article
Network function virtualization (NFV) refers to the deployment of software functions running on commodity servers instead of hardware middleboxes. It is an inevitable technology for agile service provisioning in next-generation telecommunication networks. A service is defined as a chain of software functions, named virtual network functions (VNFs), where each VNF can be placed on a different host server. The task of assigning the VNFs to the host servers is called service placement. A significant challenge in service placement is meeting the reliability requirement of a service. In the literature, the problems of service placement and of providing the required reliability level are considered separately: first, the main server is selected, and then the backup servers are deployed to meet the reliability requirement of the service. In this paper, we consolidate these two steps and perform them jointly and simultaneously. We consider a multi-Infrastructure network Provider (InP) environment where InPs offer general-purpose commodity servers with different reliability levels. We then formulate an optimization problem for joint main and backup server selection that minimizes the cost of InP resources while maximizing the reliability of the service. We reformulate this problem as a mixed integer convex programming (MICP) problem. Since MICPs are known to be NP-hard in general, we propose a polynomial-time sub-optimal algorithm named Viterbi-based Reliable Service Placement (VRSP). Using numerical evaluations, we investigate the performance of the proposed algorithm compared to the optimal solution resulting from the MICP model and also to three heuristic algorithms.
Article
Service function chaining, along with network function virtualization, enables flexible and rapid provisioning of network services to meet increasing demand for short-lived services with diverse requirements. In this paradigm, the main question to be answered is how to deploy the requested services by creating virtual network function (VNF) instances and routing the traffic between them, according to the service specifications. In this paper, we define the energy-aware service deployment problem and present its ILP formulation, considering the limited traffic processing capacity of VNF instances and management concerns. We apply the Benders decomposition technique to decompose the problem into two smaller problems: a master problem and a sub-problem. As it is NP-hard to find a non-trivial solution to the ILP master problem, we resort to the relaxed LP version of the problem. Then, we design methods based on the feasibility pump and the duality theorem to rapidly calculate a near-optimal integer solution. Extensive simulation results show that even in a network with 24 switches and 40 servers, our algorithm can deploy 35 requests in less than 3 seconds while the total power consumption is only about 1.3 times that of the optimal solution obtained by the exhaustive exact approach. Moreover, it significantly outperforms the prominent SFC deployment algorithms in the fat-tree topology.
Article
The introduction of Network Function Virtualization (NFV) leads to a new business model in which the Telecommunication Service Provider needs to rent cloud resources from Infrastructure Providers (InPs) at prices as low as possible. The lowest prices can be achieved if the cloud resources are rented in advance by allocating long-term Virtual Machines (VMs), in contrast with short-term VMs, which are rented on demand and have higher costs. For this reason we propose a proactive solution in which the cloud resource rental is planned in advance based on peak traffic knowledge. We illustrate the problem of determining the cloud resources in Cloud Infrastructures managed by different InPs so as to minimize the sum of cloud resource, bandwidth and deployment costs. We formulate an Integer Linear Program (ILP) and, due to its complexity, introduce an efficient heuristic approach allowing for a remarkable reduction in computational complexity. We compare our solution to a reactive solution in which the cloud resources are rented on demand and dimensioned according to the current traffic. Though the proposed proactive solution needs more cloud and bandwidth resources due to its peak allocation, its total resource cost may be lower than the one achieved by a reactive solution, as a consequence of the higher cost of short-term VMs. For instance, when a reactive solution is applied with traffic variation times of ten minutes, our proactive solution allows for lower total costs when the long-term VM rental price is lower than the short-term one by 33%.
Article
Packet classification is the key mechanism for enabling many networking and security services. Ternary content addressable memory (TCAM) has been the industrial standard for implementing high-speed packet classification because of its constant classification time. However, TCAM chips have small capacity, high power consumption, high heat generation, and a large die area. This paper focuses on the TCAM-based classifier compression problem: given a classifier C, we want to construct the smallest possible list of TCAM entries T that implements C. We propose the ternary unification framework (TUF) for this compression problem and three concrete compression algorithms within this framework. The framework allows us to find more optimization opportunities and design new TCAM-based classifier compression algorithms. Our experimental results show that the TUF can speed up the prior algorithm TCAM Razor by 20 times or more and leads to new algorithms that improve compression performance over prior algorithms by an average of 13.7% on our largest real-life classifiers. The experimental results show that our algorithms can improve both the runtime and the compression ratio over prior work.
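One elementary optimization opportunity such compression frameworks exploit is merging two TCAM entries with the same action that differ in a single specified bit into one entry with a don't-care ('*') at that position. A minimal sketch of this merge step (our illustration; TUF and TCAM Razor are considerably more sophisticated):

```python
def try_merge(a: str, b: str):
    """Merge two equal-length ternary strings (characters '0', '1', '*')
    that differ in exactly one fully specified bit; return the merged
    entry, or None if they cannot be merged this way."""
    if len(a) != len(b):
        return None
    diffs = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diffs) != 1:
        return None
    i = diffs[0]
    if a[i] == '*' or b[i] == '*':
        return None  # only two concrete bits can collapse to a don't-care
    return a[:i] + '*' + a[i + 1:]
```

For instance, the entries 1010 and 1000 collapse into 10*0, halving the TCAM footprint of that pair; repeated merging over a whole rule list is one source of the compression ratios reported above.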
Article
Software-Defined Networking (SDN) and OpenFlow are actively being standardized and deployed. These deployments rely on switches that come from various vendors and differ in terms of performance and available features. Understanding these differences and performance characteristics is essential for ensuring successful and safe deployments. We propose a systematic methodology for SDN switch performance analysis and devise a series of experiments based on it. The methodology relies on sending a stream of rule updates while observing the control plane view as reported by the switch and probing the data plane state, determining switch characteristics by comparing these two views. We measure, report and explain the performance characteristics of flow table updates in six hardware OpenFlow switches. Our results on rule update rates can help SDN designers make their controllers efficient. Further, we also highlight differences between the OpenFlow specification and its implementations that, if ignored, pose a serious threat to network security and correctness.
Article
In this paper we consider carrier networks using only OpenFlow switches instead of IP routers. Accommodating the full forwarding information base (FIB) of IP routers in the switches is difficult because the BGP routing tables in the default-free zone currently contain about 500,000 entries, while switches have only little capacity in their fast and expensive TCAM memory. The objective of this paper is the compression of the FIB in acceptable time to minimize the TCAM requirements of switches. The benchmark is simple prefix aggregation as it is common in IP networks where longest-prefix matching is applied. In contrast, OpenFlow-based switches can match general wildcard expressions with priorities. Starting from a minimum-size prefix-based FIB, we further compress that FIB by allowing general wildcard expressions, utilizing the Espresso heuristic that is commonly used for logic minimization. As the computation time of Espresso is challenging for large inputs, we provide means to trade computation time against compression efficiency. Our results show that today's FIB sizes can be reduced by 17%, saving up to 40,000 entries, and that the compression time can be limited to 1–2 s while sacrificing only 1%–2% in compression ratio.
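The prefix-aggregation benchmark mentioned above can be illustrated with a tiny routine that repeatedly collapses sibling prefixes sharing the same next hop into their parent. Prefixes are written as bit strings; this is our own sketch of plain prefix aggregation, not the Espresso-based wildcard method of the paper:

```python
def aggregate(fib):
    """Repeatedly merge sibling prefixes with identical next hops.
    fib: dict mapping a prefix bit-string (e.g. '0110') to a next hop.
    Two siblings '...0' and '...1' with the same next hop become '...'.
    Assumes the parent prefix is not already present with another hop."""
    fib = dict(fib)
    changed = True
    while changed:
        changed = False
        for p in sorted(fib, key=len, reverse=True):  # longest prefixes first
            if p not in fib or not p:
                continue
            sibling = p[:-1] + ('1' if p[-1] == '0' else '0')
            if fib.get(sibling) == fib[p]:
                hop = fib.pop(p)
                fib.pop(sibling)
                fib[p[:-1]] = hop  # both children replaced by the parent
                changed = True
    return fib
```

Four sibling entries with one next hop collapse all the way down to the default route (the empty prefix); real FIB compression additionally has to respect longest-prefix-match semantics when next hops differ, which is where general wildcards and priorities buy the extra 17%.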
Conference Paper
Software-Defined Networking (SDN) is enjoying increasing popularity among various networking deployment schemes. The main advantage of SDN lies in its flexibility in controlling network traffic, which arises from decoupling the control plane and the data plane of network devices. However, despite the valuable merits of SDN, outages in data forwarding should not be ignored. This drawback mainly results from the length limitation of switch flow tables, the congestion between controllers and switches, and the limited controller capacity, all of which are closely related to the timeout mechanism of the flow table. In our study, these deficiencies of the SDN system are analyzed explicitly. Moreover, we propose an adaptive control mechanism aimed at optimizing the idle timeout reset of flow tables and the cooperation between controllers and switches. Our mechanism achieves a higher matching ratio in flow tables and alleviates congestion in the control channel, enabling more fluent data forwarding in SDN.
Conference Paper
High performance switches employ extremely low latency memory subsystems in an effort to reap the lowest feasible end-to-end flow-level latencies. Their capacities are extremely valuable, as the size of these memories is limited due to several architectural constraints such as power and silicon area. This scarcity is further exacerbated with the emergence of Software Defined Networks (SDN), where fine-grained flow definitions lead to an explosion in the number of flow entries. In this paper, we propose FlowMaster, a speculative mechanism to update the flow table by predicting when an entry becomes stale and evicting it early to accommodate new entries. We combine the observations from predictors into a Markov-based learning predictor that predicts whether a flow is still valuable. Our experiments confirm that FlowMaster enables efficient usage of flow tables, reducing the discard rate from the flow table by orders of magnitude and, in some cases, eliminating discards completely.
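The flavour of such a Markov-based staleness predictor can be conveyed with a toy two-state model: per flow, count transitions out of the idle state, and declare the flow stale once the learned probability of remaining idle crosses a threshold. This is our own simplified sketch, not FlowMaster itself:

```python
from collections import defaultdict

class FlowStalenessPredictor:
    """Toy two-state Markov predictor. Each observation interval a flow is
    either ACTIVE (packet seen) or IDLE; we learn, per flow, the probability
    of staying idle given that it was idle, and predict staleness when that
    probability exceeds a threshold."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.counts = defaultdict(lambda: {"idle_idle": 0, "idle_active": 0})
        self.last_idle = defaultdict(bool)  # was the flow idle last interval?

    def observe(self, flow, active: bool):
        if self.last_idle[flow]:
            key = "idle_active" if active else "idle_idle"
            self.counts[flow][key] += 1  # record the transition out of IDLE
        self.last_idle[flow] = not active

    def predict_stale(self, flow) -> bool:
        c = self.counts[flow]
        total = c["idle_idle"] + c["idle_active"]
        if total == 0:
            return False  # no evidence yet: keep the entry
        return c["idle_idle"] / total >= self.threshold
```

A switch (or controller) would evict the entries of flows for which `predict_stale` returns True, freeing slots before the hard timeout fires.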
Article
Software Defined Networks (SDN) such as OpenFlow provide better network management for data centers by decoupling the control plane from the data plane. Current OpenFlow controllers install flow rules with a fixed timeout after which the switch automatically removes the rules from its flow table. However, this fixed timeout has many disadvantages. For flows with short packet intervals, the timeout may be too large, so that flow rules stay in the flow table for too long and unnecessarily occupy flow table space; for flows with long packet intervals or periodic flows, the timeout may be too short, producing too many packet-in events and overloading the controller. In this paper, we propose the Intelligent Timeout Master, which assigns suitable timeouts to different flows according to their characteristics and conducts feedback control to adjust the maximum timeout value according to the current flow table occupation, in an effort to avoid flow table overflow. In our experiments, we use a real traffic trace, and the results confirm that our Intelligent Timeout Master performs well in reducing both the number of packet-in events and the flow table occupation.
Conference Paper
In this paper, we address the communication overhead problem between OpenFlow controllers and OpenFlow switches caused by table misses in a flow table. Table misses create overhead between controllers and switches because a switch has to send a packet-in message to a controller to process each table-missed flow. We propose a simple flow entry management scheme for reducing the controller overhead by increasing the flow entry matching ratio. By using an LRU caching algorithm, a switch can keep as many flow entries in its flow table as possible, thereby increasing the flow entry matching ratio.
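The LRU policy sketched above maps naturally onto an ordered dictionary: a hit refreshes the entry's recency, and installing into a full table evicts the least recently used entry. A minimal sketch of the idea (our illustration, not the paper's implementation):

```python
from collections import OrderedDict

class LruFlowTable:
    """Flow table of fixed capacity with LRU eviction: a matched entry
    moves to the most-recently-used position; installing into a full
    table evicts the least-recently-used entry."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()  # match fields -> action

    def lookup(self, match):
        if match in self.entries:
            self.entries.move_to_end(match)  # refresh recency on a hit
            return self.entries[match]
        return None  # table miss: the switch would send a packet-in

    def install(self, match, action):
        if match in self.entries:
            self.entries.move_to_end(match)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the LRU entry
        self.entries[match] = action
```

A `None` from `lookup` is exactly where the packet-in to the controller would be raised; keeping recently matched entries resident is what raises the matching ratio.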
Article
OpenFlow networks require installation of flow rules in a limited-capacity switch memory (Ternary Content Addressable Memory, or TCAM, in particular) from a logically centralized controller. A controller can manage the switch memory in an OpenFlow network through events that are generated by the switch at discrete time intervals. Recent studies have shown that data centers can have up to 10,000 network flows per second per server rack today. Increasing the TCAM size to accommodate this large number of flow rules is not a viable solution since TCAM is costly and power hungry. Current OpenFlow controllers handle this issue by installing flow rules with a default idle timeout after which the switch automatically evicts the rule from its TCAM. This results in inefficient usage of switch memory for short-lived flows when the timeout is too high, and in increased controller workload for frequent flows when the timeout is too low. In this context, we present SmartTime, an OpenFlow controller system that combines an adaptive timeout heuristic to compute efficient idle timeouts with proactive eviction of flow rules, which results in effective utilization of TCAM space while ensuring that TCAM misses (and hence controller load) do not increase. To the best of our knowledge, SmartTime is the first real implementation of an intelligent flow management strategy in an OpenFlow controller that can be deployed in current OpenFlow networks. In our experiments using multiple real data center packet traces and cache sizes, the SmartTime adaptive policy consistently outperformed the best-performing static idle timeout policy and the random eviction policy by up to 58% in terms of total cost.
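The adaptive idea behind such timeout heuristics can be caricatured in a few lines: grow a flow's idle timeout when its rule was evicted prematurely (the flow came back and caused a TCAM miss), and shrink it otherwise so the entry frees TCAM space sooner. Constants and names below are illustrative, not SmartTime's actual policy:

```python
class AdaptiveIdleTimeout:
    """Toy adaptive idle-timeout policy: multiplicative increase when a
    rule was evicted too early, multiplicative decrease toward a floor
    otherwise. All constants are illustrative choices of ours."""

    def __init__(self, t_min: float = 1.0, t_max: float = 60.0):
        self.t_min, self.t_max = t_min, t_max
        self.timeout = {}  # flow -> current idle timeout (seconds)

    def on_flow_arrival(self, flow, premature_eviction: bool) -> float:
        t = self.timeout.get(flow, self.t_min)
        if premature_eviction:
            t = min(t * 2, self.t_max)  # rule was evicted too early: back off
        else:
            t = max(t / 2, self.t_min)  # timeout can safely shrink
        self.timeout[flow] = t
        return t
```

The returned value would be set as the rule's idle timeout at installation time; the multiplicative scheme converges quickly for both bursty and long-gap flows.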