Steven Van Rossem’s research while affiliated with Ghent University and other places


Publications (24)


Fig. 4: The curve-fit model exemplified in two example VNF subsets. For each plot, the left y-axis shows the saturating resource usage and the right y-axis the increasing KPI value. The boundary between the (non-)saturated regions is where the workload covariance with the KPI becomes larger than with the resource usage.
The profiling framework can use an infrastructure similar to the operational one, as part of a DevOps workflow.
Variation in the ΔRMSE, to be used as stop criterion for the sampling.
(a) The sampling workflow, with (b) a simplified representation of the sampled configuration space.
Optimized sampling strategies compared to uniform sampling: fewer samples are needed to achieve a stable and high model accuracy.


Optimized Sampling Strategies to Model the Performance of Virtualized Network Functions
  • Article

October 2020 · 114 Reads · 13 Citations · Journal of Network and Systems Management

Steven Van Rossem · [...]

Modern network services make increasing use of virtualized compute and network resources. This is enabled by the growing availability of softwarized network functions, which take on major roles in the total traffic flow (such as caching, routing, or firewalling). To ensure reliable operation of its services, the service provider needs a good understanding of the performance of the deployed softwarized network functions. Ideally, the service performance should be predictable, given a certain input workload and a set of allocated (virtualized) resources (such as vCPUs and bandwidth). This helps to estimate more accurately how many resources are needed to operate the service within its performance specifications. To predict its performance, the network function should be profiled over the whole range of possible input workloads and resource configurations. However, this input can span a large space of multiple parameters and many combinations to test, resulting in an expensive and overextended measurement period. To mitigate this, we present a profiling framework and a sampling heuristic that help select which workload and resource configurations to test. Additionally, we compare several machine-learning-based methods, in combination with the sampling heuristic, to find the best prediction accuracy. As a result, we obtain a reduced dataset which can still model the performance of the network functions with adequate accuracy, while requiring less profiling time. Compared to uniform sampling, our tests show that the heuristic achieves the same modeling accuracy with up to five times fewer samples.
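As a rough illustration of the stop criterion mentioned in the figure captions above (not the paper's actual implementation), the sketch below grows a training set one profiled configuration at a time and stops once the change in model error (ΔRMSE) between iterations falls below a threshold. The profiler function `measure_kpi`, the configuration grid, the hold-out set, and the threshold are all hypothetical placeholders; the paper's heuristic would also pick the next configuration more cleverly than at random.

```python
# Minimal sketch of an adaptive profiling loop with a delta-RMSE stop criterion.
# All names and values here are illustrative assumptions, not the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

def measure_kpi(config):
    """Hypothetical profiling run: (workload, vCPUs, bandwidth) -> measured KPI."""
    workload, vcpus, bw = config
    return min(workload, 1000.0 * vcpus, 10.0 * bw) + rng.normal(0, 5)

# Full configuration space the profiler could test.
space = np.array([(w, c, b) for w in range(100, 2100, 100)
                            for c in (1, 2, 4)
                            for b in (50, 100, 200)], dtype=float)

# Hold-out set to track model accuracy (in practice a cross-validation split
# of the already-sampled data would be used instead of extra measurements).
holdout = space[rng.choice(len(space), 40, replace=False)]
y_holdout = np.array([measure_kpi(c) for c in holdout])

sampled, kpis, prev_rmse = [], [], None
for _ in range(len(space)):
    cfg = space[rng.integers(len(space))]   # here: random pick; the heuristic
    sampled.append(cfg)                     # would choose the next sample smartly
    kpis.append(measure_kpi(cfg))
    if len(sampled) < 10:
        continue
    model = RandomForestRegressor(n_estimators=50).fit(np.array(sampled), kpis)
    rmse = mean_squared_error(y_holdout, model.predict(holdout)) ** 0.5
    if prev_rmse is not None and abs(prev_rmse - rmse) < 1.0:   # delta-RMSE stop
        break
    prev_rmse = rmse

print(f"stopped after {len(sampled)} samples, holdout RMSE ≈ {rmse:.1f}")
```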


VNF Performance Modelling: from Stand-alone to Chained Topologies

July 2020 · 49 Reads · 14 Citations · Computer Networks

One of the main incentives for deploying network functions on a virtualized or cloud-based infrastructure is the ability for on-demand orchestration and elastic resource scaling following the workload demand. This can also be combined with a multi-party service creation cycle: the service provider sources various network functions from different vendors or developers and combines them into a modular network service. This way, multiple virtual network functions (VNFs) are connected into more complex topologies called service chains. Deployment speed is important here, so it is beneficial if the service provider can limit extra validation testing of the combined service chain and rely on the profiling results provided for the individual VNFs. Our research shows, however, that it is not always straightforward to accurately predict the performance of a complete service chain from the isolated benchmark or profiling tests of its constituent network functions. To mitigate this, we propose a two-step deployment workflow: first, a general trend estimate of the chain performance is derived from the stand-alone VNF profiling results, together with an initial resource allocation. This information then optimizes the second phase, in which online monitored data of the service chain is used to quickly adjust the estimated performance model where needed. Our tests show that this can lead to a more efficient VNF chain deployment, needing fewer scaling iterations to meet the chain performance specification, while avoiding the need for a complete proactive and time-consuming VNF chain validation.
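A minimal sketch of the two-step idea, under our own assumptions (the bottleneck rule and the linear correction below are illustrative stand-ins, not the paper's exact models): phase one estimates the chain throughput from the stand-alone VNF profiles, and phase two fits a small correction on a few online measurements of the deployed chain.

```python
# Sketch: stand-alone chain estimate refined with online chain measurements.
# All models and numbers are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical stand-alone profiles: predicted throughput per VNF as f(vCPUs).
vnf_models = [lambda v: 400.0 * v,
              lambda v: 250.0 * v + 100.0,
              lambda v: 600.0 * np.sqrt(v)]

def initial_chain_estimate(vcpus_per_vnf):
    """Phase 1: trend estimate = throughput of the slowest VNF in the chain."""
    return min(m(v) for m, v in zip(vnf_models, vcpus_per_vnf))

# Phase 2: a few online monitored samples of the deployed chain
# (per-VNF vCPU allocation -> measured chain throughput).
online_allocs = np.array([[1, 1, 1], [2, 1, 1], [2, 2, 2], [4, 2, 2]], dtype=float)
online_measured = np.array([210.0, 230.0, 450.0, 470.0])

# Fit a correction between the stand-alone estimate and the chained reality.
estimates = np.array([[initial_chain_estimate(a)] for a in online_allocs])
correction = LinearRegression().fit(estimates, online_measured)

candidate = [4, 4, 4]
print("stand-alone estimate:", initial_chain_estimate(candidate))
print("corrected chain estimate:",
      correction.predict([[initial_chain_estimate(candidate)]])[0])
```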



Profile-based Resource Allocation for Virtualized Network Functions

November 2019 · 35 Reads

The virtualization of compute and network resources enables unprecedented flexibility for deploying network services. A wide spectrum of emerging technologies allows an ever-growing range of orchestration possibilities in cloud-based environments. But in this context it remains challenging to reconcile dynamic cloud configurations with deterministic performance. The service operator must somehow map the performance specification in the Service Level Agreement (SLA) to an adequate resource allocation in the virtualized infrastructure. We propose the use of a VNF profile to alleviate this process. This is illustrated by profiling the performance of four example network functions (a virtual router, switch, firewall and cache server) under varying workloads and resource configurations. We then compare several methods to derive a model from the profiled datasets. We select the most accurate method to further train a model which predicts the service's performance as a function of incoming workload and allocated resources. Our method can offer the service operator a recommended resource allocation for the targeted service, as a function of the target performance and maximum workload specified in the SLA. This helps to deploy the softwarized service with an optimal amount of resources to meet the SLA requirements, thereby avoiding unnecessary scaling steps.


Profile-Based Resource Allocation for Virtualized Network Functions

September 2019 · 57 Reads · 54 Citations · IEEE Transactions on Network and Service Management

The virtualization of compute and network resources enables unprecedented flexibility for deploying network services. A wide spectrum of emerging technologies allows an ever-growing range of orchestration possibilities in cloud-based environments. But in this context it remains challenging to reconcile dynamic cloud configurations with deterministic performance. The service operator must somehow map the performance specification in the Service Level Agreement (SLA) to an adequate resource allocation in the virtualized infrastructure. We propose the use of a VNF profile to alleviate this process. This is illustrated by profiling the performance of four example network functions (a virtual router, switch, firewall and cache server) under varying workloads and resource configurations. We then compare several methods to derive a model from the profiled datasets. We select the most accurate method to further train a model which predicts the service's performance as a function of incoming workload and allocated resources. Our method can offer the service operator a recommended resource allocation for the targeted service, as a function of the target performance and maximum workload specified in the SLA. This helps to deploy the softwarized service with an optimal amount of resources to meet the SLA requirements, thereby avoiding unnecessary scaling steps.
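The workflow can be illustrated with a small sketch under our own assumptions (the profiled dataset, regression model, cost function, and candidate grid below are hypothetical): a model KPI = f(workload, resources) is trained on the profiling data and then used to pick the cheapest allocation whose predicted performance still meets the SLA at the maximum specified workload.

```python
# Sketch: profile-based resource recommendation against an SLA.
# Dataset, cost function, and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Hypothetical profiled dataset: (workload [req/s], vCPUs, bandwidth [Mbps]) -> latency [ms].
X = rng.uniform([100, 1, 50], [2000, 8, 500], size=(300, 3))
y = 2.0 + X[:, 0] / (80.0 * X[:, 1]) + 3000.0 / X[:, 2] + rng.normal(0, 0.5, 300)

model = GradientBoostingRegressor().fit(X, y)

sla_max_workload = 1500.0   # req/s, from the SLA
sla_max_latency = 20.0      # ms, from the SLA

def cost(vcpus, bw):
    """Placeholder resource cost used to rank candidate allocations."""
    return vcpus + bw / 100.0

candidates = [(v, b) for v in (1, 2, 4, 8) for b in (50, 100, 200, 500)]
feasible = [(v, b) for v, b in candidates
            if model.predict([[sla_max_workload, v, b]])[0] <= sla_max_latency]

if feasible:
    vcpus, bw = min(feasible, key=lambda c: cost(*c))
    print(f"recommended allocation: {vcpus} vCPUs, {bw} Mbps")
else:
    print("no candidate allocation meets the SLA at peak workload")
```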



The Next Generation Platform as A Service: Composition and Deployment of Platforms and Services

May 2019 · 436 Reads · 13 Citations

The emergence of widespread cloudification and virtualisation promises increased flexibility, scalability, and programmability for the deployment of services by Vertical Service Providers (VSPs). This cloudification also improves service and network management, reducing Capital and Operational Expenses (CAPEX, OPEX). A truly cloud-native approach is essential, since 5G will provide a diverse range of services, many requiring stringent performance guarantees, while maximising flexibility and agility despite the technological diversity. This paper proposes a workflow based on the principles of build-to-order, Build-Ship-Run, and automation, following the Next Generation Platform as a Service (NGPaaS) vision. Through the concept of Reusable Functional Blocks (RFBs), an enhancement to Virtual Network Functions, this methodology allows a VSP to deploy and manage platforms and services agnostic to the underlying technologies, protocols, and APIs. To validate the proposed workflow, a use case is also presented herein, which illustrates both the deployment of the underlying platform by the Telco operator and of the services that run on top of it. In this use case, the NGPaaS operator enables a VSP to provide Virtual Network Function as a Service (VNFaaS) capabilities to its end customers.
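As a loose illustration of the RFB concept (our own sketch, not the project's code or descriptor format), a platform or service can be thought of as a tree of reusable blocks that is deployed recursively, with each block mapped onto some execution environment.

```python
# Sketch: a "build-to-order" platform described as a tree of Reusable Functional
# Blocks (RFBs) and deployed recursively. Names and environments are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RFB:
    name: str
    execution_env: str = "kubernetes"      # e.g. kubernetes, VM, bare metal
    children: List["RFB"] = field(default_factory=list)

def deploy(rfb: RFB, indent: int = 0):
    """Recursively 'deploy' an RFB tree (here: just print the build order)."""
    print(" " * indent + f"deploy {rfb.name} on {rfb.execution_env}")
    for child in rfb.children:
        deploy(child, indent + 2)

# Hypothetical VNFaaS platform composed to order for a vertical service provider.
platform = RFB("vnfaas-platform", children=[
    RFB("monitoring-stack"),
    RFB("vnf-chain", children=[RFB("firewall-vnf"),
                               RFB("cache-vnf", execution_env="VM")]),
])
deploy(platform)
```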


Introducing Development Features for Virtualized Network Services

August 2018 · 1 Read

Network virtualization and the softwarization of network functions are trends aiming at higher network efficiency, cost reduction and agility. They are driven by the evolution in Software Defined Networking (SDN) and Network Function Virtualization (NFV). This shows that software will play an increasingly important role within telecommunication services, which were previously dominated by hardware appliances. Service providers can benefit from this, as it enables faster introduction of new telecom services, combined with an agile set of possibilities to optimize and fine-tune their operations. However, the provided telecom services can only evolve if adequate software tools are available. In this article, we explain how the development, deployment and maintenance of such an SDN/NFV-based telecom service puts specific requirements on the platform providing it. A Software Development Kit (SDK) is introduced, allowing service providers to adequately design, test and evaluate services before they are deployed in production, and also to update them during their lifetime. This continuous cycle between development and operations, a concept known as DevOps, is a well-known strategy in software development. To extend it to SDN/NFV-based services, however, the functionality provided by traditional cloud platforms is not yet sufficient. By giving an overview of the currently available tools and their limitations, the gaps in DevOps for SDN/NFV services are highlighted. The benefit of such an SDK is illustrated by a secure content delivery network service (enhanced with deep packet inspection and elastic routing capabilities). With this use case, the dynamics between developing and deploying a service are further illustrated.


Dev-for-Operations and Multi-sided Platform for Next Generation Platform as a Service

July 2018 · 214 Reads

This paper presents two new challenges for the Telco ecosystem transformation in the era of cloud-native microservice-based architectures. (1) Development-for-Operations (Dev-for-Operations) impacts not only the overall workflow for deploying a Platform as a Service (PaaS) in an open foundry environment, but also the Telco business and operational models, to achieve an economy of scope and an economy of scale. (2) For that purpose, we construct an integrative platform business model in the form of a Multi-Sided Platform (MSP) for building Telco PaaSes. The proposed MSP-based architecture enables a multi-organizational ecosystem with increased automation possibilities for Telco-grade service creation and operation. The paper describes how Dev-for-Operations and the MSP lift these constraints and offer an effective way to build next-generation PaaSes, while mutually reinforcing each other in the Next Generation Platform as a Service (NGPaaS) framework.


Fig. 2: Network Service (or PaaS) decomposed as a tree of RFBs.
Fig. 3: RFB Hierarchical Architecture Overview.
Fig. 4: ROBM (left) and ROBA (right) Internal Architecture. Fig. 4 shows the key internal components: the ROBA uses OSS and BSS APIs to communicate with upper and lower OSS/BSS components. A public shared interface is used for ROBM-ROBA communication, and ensures inter-ROBA communication and connectivity with external OSS/BSS. At the ROBA Domain level, a private API interconnects intra-Domain internal components and communicates with the EE (the EE proxy registers and connects to the local Execution Environment), while the analytics plugin provides an extension to external analytics microservices. In the ROBM, the "Inventory proxy" registers and connects to the Dynamic Inventory to manage real-time global resource and infrastructure usage; the ROBM then delegates local resource management to ROBAs. The "OSS/BSS proxy" component maintains connectivity with existing legacy OSS/BSS systems.
Re-Factored Operational Support Systems for the Next Generation Platform-as-a-Service (NGPaaS)

July 2018 · 238 Reads · 4 Citations

Platform-as-a-Service (PaaS) systems offer customers a rich environment in which to build, deploy, and run applications. Today's PaaS offerings are tailored mainly to the needs of web and mobile application developers, and involve a fairly rigid stack of components and features. The vision of the H2020 5GPPP Phase 2 Next Generation Platform-as-a-Service (NGPaaS) project is to enable "build-to-order" customized PaaSs, tailored to the needs of a wide range of use cases with telco-grade 5G characteristics. This paper sets out the salient and innovative features of NGPaaS and explores the impacts on Operational Support Systems and Business Support Systems (OSS/BSS), moving from fixed centralized stacks to a much more flexible and modular distributed architecture.


Citations (19)


... The authors in [3] discuss how the quality and quantity of the data gathered from monitoring and benchmarking systems influence the quality of the decision (QoD) for orchestration operations including LCM actions and respectively the performance of the provisioned services in the virtualized infrastructure. Taking into consideration the size of a VNFI and the scale of the resources and components required to be monitored results in a very high load that must be delivered periodically by the monitoring system, thereby leading to a high processing load and time in profiling due to large data processing and analysis activities [25]. To address this monitoring complexity and high load to exhaustively benchmark and test the performance of VNFs in all possible situations we must select a representative subset of infrastructure and workload configurations to profile the VNF [26]. ...

Reference:

Autonomous Intelligent VNF Profiling for Future Intelligent Network Orchestration
Automated monitoring and detection of resource-limited NFV-based services
  • Citing Conference Paper
  • July 2017

... However, research studies in Refs. [11] [12] [13] [14] [15] have shown that the performance of chained VNFs can differ from that of standalone VNFs. It is crucial to take this into account when profiling both individual VNFs and the SFC as a whole. ...

Adaptive & Learning-aware Orchestration of Content Delivery Services
  • Citing Conference Paper
  • June 2020

... However, these approaches may not consider optimal KPIs and pre-defined resource configurations. Last, other notable contributions include the NFV-Inspector [20] automated profiling and analysis platform, and the work of [19] utilising ML techniques such as Interpolation, Gaussian Process, ANN, and Linear Regression for VNF profiling. ...

VNF Performance Modelling: from Stand-alone to Chained Topologies
  • Citing Article
  • July 2020

Computer Networks

... In this complex scene, Machine Learning (ML) based tools can lead to optimal management of the available resources [4][5][6][7][8]. Assuming that any type of service has specific characteristics (i.e., different injection patterns), the service profile can be automatically derived via ML techniques and thus, the system can be equipped with the ability of "learning" new service profiles as they emerge. ...

Optimized Sampling Strategies to Model the Performance of Virtualized Network Functions

Journal of Network and Systems Management

... Firstly, it includes managing enormous quantities of raw measurement data acquired via profiling [11]. Secondly, this relationship often exhibits non-linear features [12], potentially requiring complex models to adequately fit the measured data. In this context, machine learning is an excellent candidate for modeling this relationship. ...

Profile-Based Resource Allocation for Virtualized Network Functions
  • Citing Article
  • September 2019

IEEE Transactions on Network and Service Management

... For more detailed information, readers are referred to other comprehensive works on the subject. [479][480][481] ML algorithms are sophisticated programs that learn patterns from existing data using logic and mathematics. They extract features from complex output signals and are instrumental in prediction, classification, and clustering, thereby enhancing the accuracy of nanosensor detection. ...

The Next Generation Platform as A Service: Composition and Deployment of Platforms and Services

... erted from human touch to machine touch. Automation also helps in reducing defects as human interactions are limited. There are many advances in technologies that help an organization automate tasks. Automation can be done in design as well as for the entire remaining lifecycle of a product. Infrastructure plays a vital role in enabling automation (Van Rossem et al. 2018). With the evolution of technology in electronics, computers, information technology, and internet technologies, many tasks can be automated and remotely monitored with very little human touch. With the help of the right infrastructure, continuous development of products with the best quality and the shortest time to market can be easy ( ...

A Vision for the Next Generation Platform-as-a-Service
  • Citing Conference Paper
  • July 2018

... Responsible for inventory registration, global supervision, and deployment of services and platforms on their execution environments. In our prototype, the BSS/OSS role [26] is fulfilled by the RDCL 3D tool. ...

Re-Factored Operational Support Systems for the Next Generation Platform-as-a-Service (NGPaaS)

... A number of interventions of latest technologies have already begun to get incorporated in this regard and started to benefit many applications in different fields, such as, healthcare, agriculture, elderly emergency service, transportation, smart city, and industry [10][11][12][13][14][15][16][17][18][19][20][21]. 5G, next generation internet, tactile internet, and fog computing have been merged with the stated aspect to leverage futuristic network service provisioning. ...

The Next Generation Platform as a Service Cloudifying Service Deployments in Telco-Operators Infrastructure

... This solution is suitable only for particular use cases, and it cannot be easily adapted to other real-world situations. Furthermore, relevant related contributions are proposed by Guija and Siddiqui [27], Xu et al. [24] and Soenen et al. [28], who describe 5G-oriented software platforms that include the required authentication and authorisation mechanisms. ...

Insights from SONATA: Implementing and integrating a microservice-based NFV service platform with a DevOps methodology
  • Citing Conference Paper
  • April 2018