Conference Paper

Dynamic Hardware-Acceleration of VNFs in NFV Environments

... The use of this component is typically isolated and oriented toward specific applications, producing a set of heterogeneous, and at times conflicting, implementations of it. Specific uses of the EMS appear, for example, in resource allocation for 5G networks [Hawilo et al. 2014], the implementation of multimedia systems in NFV [Carella et al. 2014], and the acceleration of network functions [Sharma et al. 2019]. Furthermore, EMS solutions are not developed against a reference architecture. ...
Conference Paper
Full-text available
The development of efficient management solutions is a fundamental step towards the broad adoption of Network Function Virtualization (NFV) technology. The Element Management System (EMS) is a key component of the NFV reference management architecture defined by the European Telecommunications Standards Institute (ETSI). The EMS acts directly on Virtualized Network Functions (VNFs), executing operations issued by the VNF Manager (VNFM) and the Operations and Business Support Systems (OSS/BSS). In recent works, however, the EMS has been either (i) completely ignored or (ii) employed in limited settings, typically for executing simple operations in the context of a particular application. In this work, we propose a comprehensive EMS architecture that is compliant with the NFV reference architecture. This architecture enables the creation of a holistic EMS to manage VNF instances while abstracting communication protocols and technologies, allowing interoperability among different VNF platforms and OSS/BSS. A prototype was implemented, and evaluation results are presented for the overhead of including the EMS in the execution of typical management operations.
Article
In recent years, telecom operators have been migrating towards network architectures based on Network Function Virtualization in order to reduce their high Capital Expenditure (CAPEX) and Operational Expenditure (OPEX). However, virtualization of some network functions is accompanied by a significant degradation of Virtual Network Function (VNF) performance in terms of throughput or energy consumption. To address these challenges, the use of hardware accelerators, e.g. FPGAs and GPUs, to offload CPU-intensive operations from performance-critical VNFs has been proposed. Allocation of NFV infrastructure (NFVi) resources for VNF placement and chaining (VNF-PC) has been a major area of research recently. A variety of resource allocation models have been proposed to achieve various operator objectives, e.g., minimizing CAPEX, OPEX, or latency. However, the VNF-PC resource allocation problem for the case when the NFVi incorporates hardware accelerators remains unaddressed. Ignoring hardware accelerators in the NFVi while performing resource allocation for VNF chains can nullify the advantages resulting from their use. Therefore, accurate models and techniques for accelerator-aware VNF-PC (VNF-AAPC) are needed in order to achieve overall efficient utilization of all NFVi resources, including hardware accelerators. This paper investigates the problem of VNF-AAPC, i.e., how to allocate usual NFVi resources along with hardware accelerators to VNF chains in a cost-efficient manner. In particular, we propose two methods to tackle the VNF-AAPC problem. The first approach is based on Integer Linear Programming (ILP), which jointly optimizes VNF placement, chaining and accelerator allocation while conforming to all NFVi constraints. The second approach is a heuristic-based method that addresses the scalability issue of the ILP approach by following a two-step algorithm.
The experimental evaluations indicate that incorporating accelerator-awareness in VNF-PC strategies can help operators achieve additional cost savings from the efficient allocation of hardware-accelerator resources.
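The accelerator-aware placement idea above can be illustrated with a minimal greedy sketch: for each VNF in a chain, pick the cheapest feasible option, accelerated or software-only, across the available nodes. This is only an illustration of the trade-off the paper models; the names (`Node`, `Vnf`, `place_chain`) and the cost model are hypothetical, and the paper's actual ILP formulation and two-step heuristic are not reproduced here.

```python
# Hypothetical greedy sketch of accelerator-aware VNF placement.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu: int                 # free CPU cores
    acc: int                 # free accelerator slots (e.g. FPGA regions)
    cost_cpu: float = 1.0    # cost per core
    cost_acc: float = 4.0    # cost per accelerator slot

@dataclass
class Vnf:
    name: str
    cpu: int                 # cores needed when running purely in software
    cpu_acc: int             # (smaller) core count when accelerated
    acceleratable: bool

def place_chain(chain, nodes):
    """For each VNF, choose the cheapest feasible (node, accelerated?) option."""
    placement = {}
    for vnf in chain:
        best = None  # (cost, node, use_accelerator)
        for node in nodes:
            # Option A: offload to an accelerator slot, if supported and free
            if vnf.acceleratable and node.acc >= 1 and node.cpu >= vnf.cpu_acc:
                cost = node.cost_acc + vnf.cpu_acc * node.cost_cpu
                if best is None or cost < best[0]:
                    best = (cost, node, True)
            # Option B: run entirely in software
            if node.cpu >= vnf.cpu:
                cost = vnf.cpu * node.cost_cpu
                if best is None or cost < best[0]:
                    best = (cost, node, False)
        if best is None:
            raise RuntimeError(f"no feasible node for {vnf.name}")
        _, node, use_acc = best
        if use_acc:
            node.acc -= 1
            node.cpu -= vnf.cpu_acc
        else:
            node.cpu -= vnf.cpu
        placement[vnf.name] = (node.name, use_acc)
    return placement
```

Unlike the ILP, which optimizes the whole chain jointly, this greedy pass commits to one VNF at a time, which is exactly the scalability-versus-optimality trade-off the paper's heuristic is designed around.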
Article
Full-text available
Network Function Virtualization (NFV) has drawn significant attention from both industry and academia as an important shift in telecommunication service provisioning. By decoupling Network Functions (NFs) from the physical devices on which they run, NFV has the potential to lead to significant reductions in Operating Expenses (OPEX) and Capital Expenses (CAPEX) and facilitate the deployment of new services with increased agility and faster time-to-value. The NFV paradigm is still in its infancy and there is a large spectrum of opportunities for the research community to develop new architectures, systems and applications, and to evaluate alternatives and trade-offs in developing technologies for its successful deployment. In this paper, after discussing NFV and its relationship with complementary fields of Software Defined Networking (SDN) and cloud computing, we survey the state-of-the-art in NFV, and identify promising research directions in this area. We also overview key NFV projects, standardization efforts, early implementations, use cases and commercial products.
Article
Full-text available
Network Function Virtualization (NFV) refers to the use of commodity hardware resources as the basic platform to perform specialized network functions, as opposed to specialized hardware devices. Currently, NFV is mainly implemented based on general-purpose processors or general-purpose network processors. In this paper we propose the use of FPGAs as an ideal platform for NFV, one that can provide both the flexibility of virtualization and the high performance of specialized hardware. We present early attempts at using FPGA dynamic reconfiguration in network processing applications to provide flexible network functions, and we discuss the opportunities for an FPGA-based NFV platform.
Conference Paper
Full-text available
In data communication via the Internet, security is becoming one of the most influential aspects. One way to support it is by classifying and filtering Ethernet packets within network devices. Packet classification is a fundamental task for network devices such as routers, firewalls, and intrusion detection systems. In this paper we present the architecture of a fast and reconfigurable Packet Classification Engine (PCE). This engine is used in an FPGA-based firewall. Our PCE inspects the multi-dimensional fields of the packet header sequentially, based on a tree-based algorithm. This algorithm simplifies the overall system to a lower scale and leads to a more secure system. The PCE works with an adaptation of a single-cycle processor architecture. An Ethernet packet is examined by the PCE based on the Source IP Address, Destination IP Address, Source Port, Destination Port, and Protocol fields of the packet header; these basic fields indicate whether a packet is dangerous or normal before its content is inspected. Using an implementation of the tree-based algorithm in the architecture, firewall rules are rebuilt into 24-bit sub-rules, which are read as processor instructions during inspection. The inspection process compares one sub-rule with an input header field every clock cycle. The proposed PCE achieves a 91 MHz clock frequency in a Cyclone II EP2C70F896C6, with an average throughput of 13 clock cycles from input to output generation. The use of the tree-based algorithm simplifies multidimensional packet inspection and yields a reconfigurable as well as scalable system. The architecture is fast, reliable, and adaptable, and exploits the advantages of the algorithm well. Although the PCE has a high frequency and a small cycle count, the filtering speed of a firewall also depends on the other components, such as the packet FIFO buffer; a fast and reliable FIFO buffer is needed to support the PCE. The PCE also does not yet include a rule-update mechanism.
This proposed PCE is tested as a component of an FPGA-based firewall filtering Ethernet packets on an FPGA DE2 board using the NIOS II platform.
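The field-per-cycle inspection described above can be sketched in software as a sequential match over the five header fields, with first-match firewall semantics. This is a behavioural illustration only, under assumed names (`match_rule`, `classify`); it does not model the engine's 24-bit sub-rule encoding or its tree structure.

```python
# Illustrative software model of sequential 5-tuple rule matching.
FIELDS = ("src_ip", "dst_ip", "src_port", "dst_port", "protocol")

def match_rule(packet, rule):
    """Compare one field per 'cycle'; None in a rule field acts as a wildcard."""
    for f in FIELDS:                      # one comparison per clock cycle
        if rule[f] is not None and rule[f] != packet[f]:
            return False
    return True

def classify(packet, rules, default="ACCEPT"):
    """First-match semantics, as in a typical firewall rule table."""
    for rule in rules:
        if match_rule(packet, rule):
            return rule["action"]
    return default
```

In the hardware engine the same sequence of comparisons is driven by sub-rules fetched like processor instructions, which is what bounds the per-packet latency to a small, fixed number of cycles.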
Article
Full-text available
There is a class of packet processing applications that inspect packets deeper than the protocol headers to analyze content. For instance, network security applications must drop packets containing certain malicious Internet worms or computer viruses carried in a packet payload. Content forwarding applications look at the Hypertext Transfer Protocol (HTTP) headers and distribute requests among servers for load balancing. Packet inspection applications, when deployed at router ports, must operate at wire speeds. With networking speeds doubling every year, it is becoming increasingly difficult for software-based packet monitors to keep up with line rates. We describe a hardware-based technique using Bloom filters, which can detect strings in streaming data without degrading network throughput. A Bloom filter is a data structure that stores a set of signatures compactly by computing multiple hash functions on each member of the set. The technique queries a database of strings to check for the membership of a particular string. The answer to this query can be a false positive but never a false negative. An important property of this data structure is that the computation time involved in performing the query is independent of the number of strings in the database, provided the memory used by the data structure scales linearly with the number of strings stored in it. Furthermore, the amount of storage required by the Bloom filter for each string is independent of its length.
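The properties listed above (compact storage via k hash functions, possible false positives, no false negatives) can be seen in a minimal software Bloom filter. This is a sketch in the spirit of the hardware design, with hash functions derived from `hashlib` rather than the paper's hardware hash units; the class name and parameters are illustrative.

```python
# Minimal Bloom filter for string signatures (illustrative sketch).
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m = m_bits          # size of the bit array
        self.k = k_hashes        # number of hash functions
        self.bits = 0            # Python int used as a bit array

    def _positions(self, item: bytes):
        # Derive k independent positions by salting the hash with the index.
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item: bytes):
        # May return a false positive, never a false negative.
        return all(self.bits >> p & 1 for p in self._positions(item))
```

A payload scanner would slide a window over the byte stream and test each window for membership, forwarding only confirmed hits to a slower exact-match stage, which is how such filters avoid degrading line-rate throughput.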
Conference Paper
Network Functions Virtualization (NFV) is a new paradigm to move network tasks currently running on dedicated, vendor-specific hardware to elastic, virtualized environments, similar to IaaS cloud computing. A major challenge of NFV is to reach the performance known from dedicated hardware appliances, which often leverage ASIC, FPGA or NPU-based hardware acceleration to increase throughput and reduce delay. In this paper, a framework for elastic provisioning, programming and configuration of acceleration hardware for virtual network functions (VNFs) is proposed. Using this framework, VNFs can offload selected parts of their workload to heterogeneous acceleration processors, which may be shared among all VNF instances in the framework for improved resource utilization.
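The sharing of acceleration processors among VNF instances described above can be caricatured as a pool of accelerator slots that a VNF tries to lease per unit of work, falling back to software when the pool is exhausted. All names here (`AcceleratorPool`, `process`) are hypothetical and do not come from the framework in the paper; this only sketches the offload-or-fallback decision.

```python
# Hypothetical sketch of a shared accelerator pool with software fallback.
import threading

class AcceleratorPool:
    def __init__(self, slots: int):
        # Semaphore counts free accelerator slots shared by all VNF instances.
        self._sem = threading.Semaphore(slots)

    def try_acquire(self) -> bool:
        return self._sem.acquire(blocking=False)

    def release(self) -> None:
        self._sem.release()

def process(packet, pool, accel_fn, sw_fn):
    """Offload to an accelerator slot if one is free, else run in software."""
    if pool.try_acquire():
        try:
            return accel_fn(packet)
        finally:
            pool.release()
    return sw_fn(packet)
```

The non-blocking acquire is the key design choice: a VNF never stalls waiting for hardware, it simply loses the acceleration benefit under contention, which matches the elastic, best-effort sharing the framework aims for.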
Conference Paper
We present a new approach for integrating virtualized FPGA-based hardware accelerators into commercial-scale cloud computing systems, with minimal virtualization overhead. Partially reconfigurable regions across multiple FPGAs are offered as generic cloud resources through OpenStack (open-source cloud software), thereby allowing users to "boot" custom-designed or predefined network-connected hardware accelerators with the same commands they would use to boot a regular Virtual Machine. We propose a hardware and software framework to enable this virtualization. This is a first attempt at closely fitting FPGAs into existing cloud computing models, where resources are virtualized, flexible, and have the illusion of infinite scalability. Our system can set up and tear down virtual accelerators in approximately 2.6 seconds on average, much faster than regular virtual machines. The static virtualization hardware on the physical FPGAs causes only a three-cycle latency increase and a one-cycle pipeline stall per packet in accelerators when compared to a non-virtualized system. We present a case study analyzing the design and performance of an application-level load balancer using a fully implemented prototype of our system. Our study shows that FPGA cloud compute resources can easily outperform virtual machines, while the system's virtualization and abstraction significantly reduce design iteration time and design complexity.
Article
The resources of dedicated accelerators (e.g. FPGAs) are still required to bridge the gap between software-based Middleboxes (MBs) and commodity hardware. To consolidate various hardware resources in an elastic, programmable and reconfigurable manner, we design and build a flexible and consolidated framework, OpenANFV, to support virtualized accelerators for MBs in the cloud environment. OpenANFV integrates seamlessly and efficiently into OpenStack to provide high performance on top of commodity hardware and to cope with various virtual function requirements. OpenANFV works as an independent component that manages and virtualizes the acceleration resources (just as Cinder manages block storage resources and Nova manages computing resources). OpenANFV has the following three main features. (1) Automated management. Provisioning for multiple Virtualized Network Functions (VNFs) is automated to meet the dynamic requirements of the NFV environment. Such automation alleviates the time pressure of complicated provisioning and configuration, and reduces the probability of manually induced configuration errors. (2) Elasticity. VNFs are created, migrated, and destroyed on demand in real time. The reconfigurable hardware resources in the pool can rapidly and flexibly offload the corresponding services to the accelerator platform in the dynamic NFV environment. (3) Coordination with OpenStack. The design and implementation of the OpenANFV APIs coordinate with the mechanisms in OpenStack to support the virtualized MBs required by multiple tenants.