Thesis

Resource Allocation and Bandwidth Optimization in SDN-Based Cellular Network


Abstract

In recent years, Fifth Generation (5G) communication has emerged as a leading networking paradigm that entails a huge set of capabilities and opportunities. 5G aims at hyper-connected devices and omnipresent connectivity among everything, with immense data rates and ultra-low latency. Such hyperactive networking requires a flexible network architecture that accommodates diversified user demands on the fly. Flexibility is therefore the main building block of 5G architectures, and designing such an architecture is a major challenge. Toward 5G realization, Software Defined Networking (SDN) and Network Function Virtualization (NFV) are contemplated as the main enablers for on-demand deployment of network services in a short period. The core idea of SDN is the separation of the control plane from the data/forwarding plane, so that network orchestration is performed from a logically (or physically) centralized controller. Network services and applications run on the controller through open Application Programming Interfaces (APIs), which facilitates the rapid introduction of new services and applications on commodity hardware. SDN integrated with NFV provides end-to-end resource provisioning and service orchestration for flexible deployment of 5G services. Wireless resources are scarce by nature and must be allocated fairly to satisfy network customers; efficient resource allocation in a growing network is a major challenge. This dissertation focuses on fair resource allocation and bandwidth management in an SDN-based cellular network. A framework for dynamic resource allocation and bandwidth management is presented that leverages SDN and NFV in the cellular network. The proposed Novel Policy framework for Resource Allocation (NPRA) exploits virtualization for resource orchestration in the SDN-based cellular network.
A hierarchical virtualization scheme for resource allocation and bandwidth management is presented in the core and Radio Access Network (RAN) by running multiple logical networks called slices. These slices are centrally abstracted on the SDN controller in the core network, and a wireless virtualizer module allocates wireless resources to the respective slices. The thesis consists of two parts. The first part gives a detailed overview of SDN and SDN-based cellular architecture and reviews the extensive literature. The second part presents the NPRA architecture in detail along with its key components: traffic load balancing, resource optimization, and traffic flow classification. The NPRA framework provides optimal resource allocation in terms of bandwidth management to fulfill the 5G data rate requirement with the help of SDN and NFV, delivering optimized and efficient resource allocation for virtualized networks based on their Quality of Service (QoS) requirements. An integrated SDN and NFV framework is proposed in which traffic engineering is performed on in-band data traffic and resource allocation is formulated as a non-linear optimization problem. The experimental results show that, when network flow classification is performed with machine learning algorithms, a classification accuracy of 99.63% is achieved with Long Short-Term Memory (LSTM), compared with about 95% for a Convolutional Neural Network (CNN) and 92.55% for a Deep Neural Network (DNN).
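The fair bandwidth sharing that NPRA targets can be illustrated with the classic progressive-filling (water-filling) scheme for max-min fairness. This is a minimal sketch under simplifying assumptions (a single shared link, known per-slice demands), not the NPRA optimizer itself; the function name and units are illustrative:

```python
def max_min_fair(capacity, demands):
    """Progressive-filling (water-filling) max-min fair allocation.

    capacity: total bandwidth to share (e.g., Mbps)
    demands:  per-slice requested bandwidth
    Returns one allocation per demand.
    """
    alloc = [0.0] * len(demands)
    active = [i for i, d in enumerate(demands) if d > 0]
    remaining = float(capacity)
    while active and remaining > 1e-9:
        share = remaining / len(active)
        # Slices whose residual demand fits in an equal share are satisfied.
        satisfied = [i for i in active if demands[i] - alloc[i] <= share]
        if not satisfied:
            # Nobody saturates: split the rest equally and stop.
            for i in active:
                alloc[i] += share
            remaining = 0.0
        else:
            for i in satisfied:
                remaining -= demands[i] - alloc[i]
                alloc[i] = float(demands[i])
                active.remove(i)
    return alloc
```

For example, sharing 10 Mbps among demands of 2, 8, and 8 Mbps yields allocations of 2, 4, and 4 Mbps: the small demand is fully satisfied and the remainder is split equally.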
References
Along with the development of 5G, network slicing (NS) plays an important role in enabling mobile networks to meet all kinds of personalized requirements. Under the NS concept, network operators can vertically split a physical network into multiple logically separate networks to flexibly meet QoS requirements, mainly higher bandwidth and lower latency. This article proposes a novel QoS framework for NS in 5G and beyond networks, based on SDN and NFV, to guarantee key QoS indicators for different application scenarios such as enhanced Mobile Broadband (eMBB), massive Machine-Type Communications (mMTC), and Ultra-Reliable Low-Latency Communications (URLLC). In this framework, a 5G network is divided into three parts, the RAN, Transport Network (TN), and Core Network (CN), forming three types of NS with different network resource allocation algorithms. Performance evaluation in a Mininet simulation environment shows that the proposed framework can steer different flows into different Open vSwitch (OVS) queues, schedule network resources for the various NS types, and provide reliable end-to-end (E2E) QoS for users according to preconfigured QoS requirements.
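The slice-to-queue steering described above can be sketched as a simple lookup from a flow's slice type to a queue id and QoS budget; the profile values and names below are illustrative assumptions, not figures from the article:

```python
# Hypothetical per-slice QoS profiles (rates in Mbps, latency budget in ms).
SLICE_PROFILES = {
    "eMBB":  {"queue": 1, "min_rate": 100.0, "latency_ms": 20},
    "URLLC": {"queue": 2, "min_rate": 10.0,  "latency_ms": 1},
    "mMTC":  {"queue": 3, "min_rate": 1.0,   "latency_ms": 100},
}

def queue_for_flow(slice_type):
    """Map a flow's slice type to its queue id and QoS budget."""
    profile = SLICE_PROFILES.get(slice_type)
    if profile is None:
        raise ValueError("unknown slice type: " + slice_type)
    return profile["queue"], profile
```

In a real deployment these profiles would be installed as OVS QoS queues (e.g., min-rate/max-rate settings) and flows steered to them by controller-installed rules; the dictionary here only mimics that mapping.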
Providing scalable user- and application-aware resource allocation for heterogeneous applications sharing an enterprise network is still an unresolved problem. The main challenges are: (i) how to define user- and application-aware shares of resources; (ii) how to determine an allocation of those shares of network resources to applications; and (iii) how to allocate the shares per application in heterogeneous networks at scale. This article proposes solutions to the three challenges and introduces a system design for enterprise deployment. Defining the necessary resource share per application is hard, as the intended use case, the user's environment (e.g., a big or small display), and the user's preferences all influence resource demand. The challenge is tackled by associating application flows with utility functions derived from subjective user-experience models, selected Key Performance Indicators, and measurements. The utility functions then enable a mapping of network resources, in terms of throughput and latency budget, onto a common user-level utility scale. A sensible distribution of the resources is determined by formulating a multi-objective mixed-integer linear program that solves the throughput- and delay-aware embedding of each utility function in the network under a max-min fairness criterion. Resource allocation in traditional networks with policing and scheduling cannot distinguish large numbers of classes and interacts badly with congestion-control algorithms. The article therefore proposes a resource allocation system design for enterprise networks based on Software-Defined Networking principles, achieving delay-constrained routing in the network and application pacing at the end hosts. The design is evaluated against best-effort networks in a proof-of-concept setup with an increasing number of parallel applications competing for the throughput of a constrained link. The competing applications belong to five application classes: web browsing, file download, remote terminal work, video streaming, and Voice-over-IP. The results show that the proposed methodology improves the minimum and total utility, minimizes packet loss and queuing delay at bottlenecks, establishes fairness in terms of utility between applications, and achieves predictable application performance at high link utilization.
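The core idea above, mapping throughput to a common utility scale and maximizing the minimum utility, can be sketched with piecewise-linear utility curves and a binary search for the highest utility level the link can support for every application at once. This equal-utility simplification stands in for the article's multi-objective MILP; the curves and normalization to [0, 1] are hypothetical:

```python
import bisect

def utility(points, x):
    """Piecewise-linear utility of throughput x.
    points: sorted (throughput, utility) breakpoints."""
    xs = [p[0] for p in points]
    if x <= xs[0]:
        return points[0][1]
    if x >= xs[-1]:
        return points[-1][1]
    j = bisect.bisect_right(xs, x)
    (x0, u0), (x1, u1) = points[j - 1], points[j]
    return u0 + (u1 - u0) * (x - x0) / (x1 - x0)

def throughput_for(points, u):
    """Inverse: minimum throughput achieving utility u (nondecreasing curve)."""
    us = [p[1] for p in points]
    if u <= us[0]:
        return points[0][0]
    if u >= us[-1]:
        return points[-1][0]
    j = bisect.bisect_left(us, u)
    (x0, u0), (x1, u1) = points[j - 1], points[j]
    if u1 == u0:
        return x0
    return x0 + (x1 - x0) * (u - u0) / (u1 - u0)

def equal_utility_allocation(curves, capacity, iters=60):
    """Binary-search the highest common utility level the link can support,
    assuming utilities are normalized to [0, 1]."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        need = sum(throughput_for(c, mid) for c in curves)
        if need <= capacity:
            lo = mid
        else:
            hi = mid
    return lo, [throughput_for(c, lo) for c in curves]
```

With two identical curves rising from utility 0 at 0 Mbps to 1 at 10 Mbps and a 10 Mbps link, the search settles at utility 0.5 with 5 Mbps each.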
We present the design, implementation, and evaluation of an API for applications to control a software-defined network (SDN). Our API is implemented by an OpenFlow controller that delegates read and write authority from the network's administrators to end users, or applications and devices acting on their behalf. Users can then work with the network, rather than around it, to achieve better performance, security, or predictable behavior. Our API serves well as the next layer atop current SDN stacks. Our design addresses the two key challenges: how to safely decompose control and visibility of the network, and how to resolve conflicts between untrusted users and across requests, while maintaining baseline levels of fairness and security. Using a real OpenFlow testbed, we demonstrate our API's feasibility through microbenchmarks, and its usefulness by experiments with four real applications modified to take advantage of it.
With the development of new-generation mobile communication technology, the mobile core network also needs to be upgraded with new network technologies such as SDN and NFV. With NFV, virtual network functions (VNFs) can be deployed on commodity devices to support various network function requirements, and several VNFs are usually chained into a service function chain (SFC) to serve a given flow. One fundamental challenge is how to embed an SFC for each flow on the shared infrastructure so as to minimize the flow completion time. Furthermore, multiple flows compete for resources on the shared devices, so an efficient scheduling mechanism is needed to minimize the total completion time of all flows. This paper jointly considers VNF placement and flow scheduling, first formulating the problem as an integer linear program (ILP) and proving that it is NP-hard in the general case. A PDG method is then designed to find the optimal solution in the single-flow case, and an LRD method to achieve a high-quality feasible solution in the multiple-flow case. Extensive experimental results indicate that the LRD method reduces the total completion time by 22.04% and 60.99% against the two compared methods, respectively.
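For intuition on why ordering matters for total completion time, consider the single-device relaxation: serving flows shortest-chain-first (the classic SPT rule) minimizes the sum of completion times on one machine. A toy sketch, not the paper's PDG or LRD method:

```python
def total_completion_time(chain_times):
    """Total completion time when flows share one device and each flow's
    whole SFC runs to completion before the next starts.
    Shortest-processing-time-first (SPT) is optimal for this relaxation."""
    finished, clock = 0.0, 0.0
    for t in sorted(chain_times):
        clock += t          # this flow finishes at the current clock
        finished += clock   # accumulate its completion time
    return finished
```

Chain times of 3, 1, and 2 served shortest-first complete at 1, 3, and 6, for a total of 10; any other order gives a larger total.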
Mobile edge computing and network function virtualization (NFV) enable new flexibility in deploying extreme low-latency services for Internet-of-Things (IoT) applications within the proximity of their users. However, this poses great challenges in finding optimal placements of virtualized network functions (VNFs) for the data-processing requests of IoT applications in a multi-tier cloud network, which consists of many small- or medium-scale servers, clusters, or cloudlets deployed near IoT nodes and a few large-scale remote data centers with abundant computing and storage resources. In particular, it is challenging to jointly consider VNF instance placement and routing-path planning for user requests, as they are not only delay sensitive but also resource hungry. This article considers admission of NFV-enabled requests of IoT applications in a multi-tier cloud network, where users request network services by issuing requests with service chain requirements; the service chain forces the data traffic of the request to pass through the VNFs in the chain one by one until it reaches its destination. The throughput maximization problem is first formulated with the aim of maximizing system throughput. An integer linear program solution is proposed when the problem size is small; otherwise, an efficient heuristic is devised that jointly takes into account VNF placement on both cloudlets and data centers and routing-path finding for each request. For a special case of the problem with a set of service chains, an approximation algorithm with a provable approximation ratio is proposed. Efficient learning-based heuristics for VNF provisioning for IoT applications are also devised, incorporating the mobility and energy conservation features of IoT devices. Finally, the performance of the proposed algorithms is evaluated by simulation, and the results show that it is promising.
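A greedy admission heuristic in the spirit sketched above (try a nearby cloudlet first, fall back to the remote data center, reject if the delay budget cannot be met) might look as follows; all field names, capacities, and delays are hypothetical:

```python
def admit_request(request, cloudlets, datacenter):
    """Greedy admission: place the request's VNFs on the nearest cloudlet
    that has capacity and meets the delay budget; otherwise fall back to
    the remote data center if its larger delay still fits; else reject."""
    cpu = request["cpu"]
    budget = request["delay_budget_ms"]
    for c in sorted(cloudlets, key=lambda c: c["delay_ms"]):
        if c["free_cpu"] >= cpu and c["delay_ms"] <= budget:
            c["free_cpu"] -= cpu
            return c["name"]
    if datacenter["free_cpu"] >= cpu and datacenter["delay_ms"] <= budget:
        datacenter["free_cpu"] -= cpu
        return datacenter["name"]
    return None  # request rejected
```

A delay-tolerant but CPU-hungry request thus lands in the data center, while a tight delay budget that no tier can meet leads to rejection, mirroring the admission-control flavor of the problem.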
Software defined networking (SDN) and network function virtualization (NFV) are the key enabling technologies for service customization in next generation networks to support various applications. In such a circumstance, virtual network function (VNF) scheduling plays an essential role in enhancing resource utilization and achieving better quality-of-service (QoS). In this paper, the VNF scheduling problem is investigated to minimize the makespan (i.e., overall completion time) of all services, while satisfying their different end-to-end (E2E) delay requirements. The problem is formulated as a mixed integer linear program (MILP) which is NP-hard with exponentially increasing computational complexity as the network size expands. To solve the MILP with high efficiency and accuracy, the original problem is reformulated as a Markov decision process (MDP) problem with variable action set. Then, a reinforcement learning (RL) algorithm is developed to learn the best scheduling policy by continuously interacting with the network environment. The proposed learning algorithm determines the variable action set at each decision-making state and captures different execution time of the actions. The reward function in the proposed algorithm is carefully designed to realize delay-aware VNF scheduling. Simulation results are presented to demonstrate the convergence and high accuracy of the proposed approach against other benchmark algorithms.
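As a point of reference for delay-aware scheduling, a simple earliest-deadline-first (EDF) baseline on a single NFV node can be sketched; this is not the paper's MDP/RL formulation, merely the kind of benchmark such a learned policy would be compared against:

```python
def edf_schedule(jobs):
    """Earliest-deadline-first on one NFV node.
    jobs: list of (name, processing_time, deadline).
    Returns (makespan, missed) where missed lists jobs finishing late."""
    clock, missed = 0.0, []
    for name, proc, deadline in sorted(jobs, key=lambda j: j[2]):
        clock += proc
        if clock > deadline:
            missed.append(name)
    return clock, missed
```

For jobs ("a", 2, 2), ("c", 3, 4), ("b", 1, 5), EDF runs a, then c, then b: the makespan is 6, and both "c" and "b" miss their deadlines, which illustrates why smarter (e.g., learned) policies are worth pursuing.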
Conference Paper
In this paper, we present a 5G trace dataset collected from a major Irish mobile operator. The dataset is generated from two mobility patterns (static and car) and across two application patterns (video streaming and file download). It is composed of client-side cellular key performance indicators (KPIs) comprising channel-related, context-related, and cell-related metrics as well as throughput information. These metrics are generated from a well-known non-rooted Android network monitoring application, G-NetTrack Pro. To the best of our knowledge, this is the first publicly available dataset that contains throughput, channel, and context information for 5G networks. To supplement our real-time 5G production network dataset, we also provide a large-scale multi-cell ns-3 simulation framework for 5G. The availability of the mmWave module for the ns-3 network simulator provides an opportunity to improve our understanding of the dynamic reasoning of adaptive clients in 5G multi-cell wireless scenarios. The purpose of our framework is to provide additional information (such as competing metrics for users connected to the same cell), thus exposing otherwise unavailable information about the eNodeB environment and scheduling principle to the end user. Our framework permits other researchers to investigate this interaction through the generation of their own synthetic datasets.
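Analysis of such traces typically begins with simple aggregation of a KPI column. Below is a sketch that averages a throughput column from a G-NetTrack-style CSV export; the column name `DL_bitrate` is an assumption about the actual trace header and should be adjusted to the real file:

```python
import csv
import io

def mean_throughput(csv_text, column="DL_bitrate"):
    """Average a throughput KPI column from a CSV trace.
    Skips rows where the column is missing, empty, or a '-' placeholder."""
    rows = csv.DictReader(io.StringIO(csv_text))
    vals = [float(r[column])
            for r in rows
            if r.get(column) not in (None, "", "-")]
    return sum(vals) / len(vals) if vals else 0.0
```

The same pattern extends to any numeric KPI (RSRP, SINR, cell id changes) by swapping the column name.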