Journal of Network and Systems Management

Published by Springer Nature
Online ISSN: 1573-7705
Recent publications
Article
In 5G network slicing environments, if insufficient resources are allocated to resource-intensive service slices, their quality of service (QoS) can be considerably degraded. In this paper, we propose a priority-based dynamic resource allocation scheme (PDRAS), in which a resource management agent maintains slicing information such as priorities, demand profiles, and average resource adjustment time to adjust the resources allocated to slices. To maximize the QoS of slices while keeping the total amount of allocated resources below a certain level, a constrained Markov decision process problem is formulated and the optimal allocation policy is obtained using linear programming. Extensive evaluation results demonstrate that PDRAS with the optimal policy outperforms other schemes in terms of QoS and resource usage efficiency.
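For readers unfamiliar with the technique, the sketch below shows a standard way a small constrained MDP can be solved by linear programming over discounted state-action occupancy measures. It is a generic illustration with an invented toy instance (random transitions, rewards, costs, and a loose budget), not the slicing model or formulation used in the paper.

```python
"""Generic sketch: solving a small constrained MDP by linear programming over
discounted state-action occupancy measures. Toy instance only; not the
slicing model used in the paper."""
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S, A, gamma = 4, 3, 0.95                     # hypothetical slice states / allocation actions
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a, s']: transition probabilities
r = rng.random((S, A))                       # QoS reward per (state, action)
c = rng.random((S, A))                       # resource cost per (state, action)
budget = 0.7                                 # cap on expected discounted cost (kept loose)
mu0 = np.full(S, 1.0 / S)                    # uniform initial state distribution

# Variables: occupancy measure d(s, a), flattened to length S*A.
# Flow constraints: sum_a d(s',a) - gamma * sum_{s,a} P[s,a,s'] d(s,a) = (1-gamma) * mu0(s')
A_eq = np.zeros((S, S * A))
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = (1.0 if s == sp else 0.0) - gamma * P[s, a, sp]
b_eq = (1 - gamma) * mu0

# Maximize expected reward (negate for linprog) subject to the cost budget.
res = linprog(c=-r.ravel(),
              A_ub=c.ravel()[None, :], b_ub=[budget],
              A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (S * A))
assert res.success, "toy LP happened to be infeasible; loosen the budget"

d = res.x.reshape(S, A)
policy = d / d.sum(axis=1, keepdims=True)    # randomized optimal policy
print("expected discounted QoS reward:", round(-res.fun, 4))
print("policy (rows = states):\n", policy.round(3))
```

The occupancy-measure view is what turns the constrained control problem into a single LP; the randomized policy is recovered by normalizing the optimal occupancy per state.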
 
Article
Stream Control Transmission Protocol (SCTP) exploits multiple network interfaces to provide multi-streaming and data chunk ordering within a stream. An extended feature of SCTP, Concurrent Multi-path Transfer (CMT), enables concurrent data transmission in a multi-path data transfer environment and provides bandwidth aggregation, load sharing, robustness, and reliability. In such an environment, the paths usually have distinct characteristics (i.e., delay, Packet Loss Rate (PLR), and bandwidth), so data chunks arrive out of order at the destination. As a result, CMT causes excessive receiver buffer blocking and unnecessary congestion window (cwnd) reductions. Also, when selecting the retransmission destination path (to resend a lost data chunk), CMT does not take into account vital Quality of Service (QoS) parameters such as the PLR of the path under consideration. This paper introduces a new Delay-Based Concurrent Multi-path Transfer (DB-CMT) approach that transmits data on multiple paths according to their delay. In this scheme, we present a Delay-Based Data chunk Scheduling Policy (DB-DSP), a Retransmission Path Selection Policy (RTX-CL), and a new Delay-Based Fast Retransmission Policy (DB-FRP). The simulation results show that the DB-CMT RTX-CL policy performs better than the well-known RTX-CWND and RTX-LOSSRATE retransmission schemes. Overall, DB-CMT achieves improved throughput, fewer timeouts, and reduced File Transfer Time (FTT).
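To make the two decisions concrete, here is a minimal, hypothetical sender-side sketch: chunks are scheduled on the lowest-delay path with congestion-window space, and the retransmission path is chosen by a combined delay-and-loss score. The Path fields, the weight alpha, and the example values are all illustrative; this is not the DB-CMT implementation.

```python
"""Hypothetical sketch (not the authors' implementation) of delay-based chunk
scheduling and a loss-aware retransmission path choice in a CMT-style sender."""
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    srtt_ms: float      # smoothed round-trip time estimate
    plr: float          # measured packet loss rate
    cwnd: int           # congestion window (chunks allowed in flight)
    in_flight: int = 0

def schedule_chunk(paths):
    """Pick the path expected to deliver the next chunk soonest:
    lowest delay among paths with available cwnd space."""
    ready = [p for p in paths if p.in_flight < p.cwnd]
    return min(ready, key=lambda p: p.srtt_ms) if ready else None

def retransmission_path(paths, alpha=0.5):
    """Choose the retransmission destination by a combined score of
    delay and loss rate instead of cwnd alone (alpha is a tunable weight)."""
    max_rtt = max(p.srtt_ms for p in paths)
    return min(paths, key=lambda p: alpha * p.srtt_ms / max_rtt + (1 - alpha) * p.plr)

paths = [Path("wifi", srtt_ms=30, plr=0.02, cwnd=10),
         Path("lte",  srtt_ms=80, plr=0.005, cwnd=10)]
print("send next chunk on:", schedule_chunk(paths).name)
print("retransmit lost chunk on:", retransmission_path(paths).name)
```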
 
Article
Enforcing network slice isolation in 5G Radio Access Networks (RANs) can degrade a network slicing solution's ability to efficiently support different types of traffic. In addition, competing virtual network operators may share one network infrastructure, which forces network slicing solutions to treat them fairly. This work presents Flexible Priority Scheduling (FPS), a RAN network slicing solution that provides isolated treatment to slices with different traffic types while exploiting their needs and flexibility to achieve a more efficient solution. It does so by defining a contract interface that allows a flexible representation of different types of traffic, from heavy-throughput services to low-latency and bursty traffic. Furthermore, the presented contract representation allows fair treatment of tenants' traffic in the Medium Access Control (MAC) scheduler that enforces this slicing solution, the Priority Adaptation Slice Scheduler (PASS).
 
Article
Public blockchains, like Ethereum, rely on an underlying peer-to-peer (P2P) network to disseminate transactions and blocks between nodes. With the rise of blockchain applications and cryptocurrency values, these networks have become critical infrastructures that still lack comprehensive study. In this paper, we investigate the reliability of the Ethereum P2P network. We developed our own dependable crawler to collect information about the peers composing the network. Our analysis of the geographical distribution of peers and the churn rate shows good network properties overall, although the network can exhibit sudden, major increases in size and peers are highly concentrated in a few ASes. We then investigate suspicious patterns that can denote a Sybil attack. We find that many nodes hold numerous identities in the network and could become a threat. To mitigate future Sybil attacks, we propose an architecture to detect suspicious nodes and revoke them. It is based on a monitoring system, a smart contract to propagate the information, and an external revocation tool to help clients remove their connections to suspicious peers. Our experiments on Ethereum's test network show that our solution is effective.
 
Article
Multi-access edge computing (MEC) is a key enabler to fulfill the promises of a new generation of immersive and low-latency services in 5G and Beyond networks. MEC represents a defining function of 5G, offering significant computational power at reduced latency and making it possible to augment the capabilities of user equipment while preserving battery life. However, the demands generated by a plethora of innovative and concurrent IT services requiring high quality of service and quality of experience will likely overwhelm the—albeit considerable—resources available in 5G and Beyond scenarios. To take full advantage of its potential, MEC needs to be paired with innovative resource management solutions capable of effectively addressing the highly dynamic aspects of the scenario and of properly considering the heterogeneous and ever-changing nature of next-generation IT services, prioritizing the assignment of resources in a highly dynamic and contextual fashion. This calls for the adoption of Artificial Intelligence based tools, implementing self-* approaches capable of learning the best resource management strategy and adapting to ever-changing conditions. In this paper, we present MECForge, a novel solution based on deep reinforcement learning that considers the maximization of the total value-of-information delivered to end-users as a coherent and comprehensive resource management criterion. The experimental evaluation we conducted in a simulated but realistic environment shows how the Deep Q-Network based algorithm implemented by MECForge is capable of learning effective autonomous resource management policies that allocate service components to maximize the overall value delivered to end-users.
 
Article
Event tickets sold in electronic form are subject to counterfeiting, profiteering, and black markets. Therefore, suitable service management mechanisms are required to overcome such deficits. This work designs, develops, and evaluates a Decentralized Ticketing platform—called DeTi—for managing the distribution of electronic event tickets and “regulating” the aftermarket. DeTi offers dedicated service management functionality by operating through Ethereum Smart Contracts, such that users can verify tickets’ validity for a given event. In addition, a new mechanism for users to detect fraudulent events is introduced. The evaluation performed indicates that DeTi invalidates or validates tickets efficiently via its decentralized and blockchain-based service management approach. By technically securing a set of underlying processes, DeTi obviates forging, replication, and scalping of tickets, allowing for a well-managed ticket resale ecosystem based on and limited to the organizers’ initial pricing.
 
Article
The 5G cellular network is becoming a preferred technology for communication in Internet of Things (IoT) deployments. However, the 5G cellular network is essentially designed for cellular communication, so there are several areas of IoT over 5G where there is scope for further improvement. One such area is the development of efficient security mechanisms for authenticating IoT devices. The authentication protocol currently used in the 5G cellular network maintains the security credentials of devices at the home network in a centralised manner, so every time a device needs to be authenticated, the home network has to be contacted. In an IoT scenario with large-scale deployment of devices, such frequent communication with the home network may result in increased communication latency. In this paper, we propose an authentication scheme in which the security credentials of IoT devices are stored in a decentralized way using blockchain technology. The scheme is shown to be secure through an informal analysis and a formal security analysis using the Scyther tool. The smart contracts used in the scheme, when deployed on the Ethereum test network, are also found to be efficient. Through experimental performance analysis, the scheme is shown to achieve lower communication latency than existing protocols.
 
Article
As the host-centric TCP/IP network struggles to fulfill new network requirements, Information-Centric Networking (ICN) has emerged. The communication mode of ICN increases the volatility and complexity of traffic, so efficient traffic scheduling in this complex network has become a core problem in ICN. Delay and throughput are indispensable indicators of network performance; however, no existing work optimizes and balances delay and throughput simultaneously. To solve this problem, building on the multi-objective genetic algorithm Non-dominated Sorting Genetic Algorithm II (NSGA-II), we propose the Density Sorting-Based Evolutive Algorithm (DSEA) to optimize delay and throughput at the same time. The simulation results show that, compared with existing multi-objective genetic algorithms, DSEA performs better on the Generational Distance, Inverted Generational Distance, Hypervolume, and Pareto front evaluation indexes, reducing delay by 30.01% and improving throughput by 43.37% while maintaining a good balance between the two indicators.
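As a concrete illustration of the machinery such algorithms share, the sketch below performs plain non-dominated (Pareto) sorting of a small population against the two objectives discussed here, minimizing delay and maximizing throughput. The population values are invented, and the density-based sorting specific to DSEA is not reproduced.

```python
"""Illustrative sketch of the non-dominated sorting step that NSGA-II-style
multi-objective genetic algorithms rely on, for two objectives:
minimize delay and maximize throughput. Values are hypothetical."""

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and strictly better
    in at least one (delay is minimized, throughput is maximized)."""
    no_worse = a["delay"] <= b["delay"] and a["throughput"] >= b["throughput"]
    better = a["delay"] < b["delay"] or a["throughput"] > b["throughput"]
    return no_worse and better

def non_dominated_fronts(pop):
    fronts, remaining = [], list(pop)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

population = [{"id": 1, "delay": 12.0, "throughput": 90.0},
              {"id": 2, "delay": 15.0, "throughput": 120.0},
              {"id": 3, "delay": 14.0, "throughput": 85.0},   # dominated by 1
              {"id": 4, "delay": 10.0, "throughput": 70.0}]

for rank, front in enumerate(non_dominated_fronts(population)):
    print(f"front {rank}:", [p["id"] for p in front])
```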
 
Article
The IEEE 802.15.4 Time Slotted Channel Hopping (TSCH) communication mode is a key standard in the Industrial Internet of Things (IIoT). To schedule communications, TSCH uses deterministic transmissions to deal with latency requirements and channel hopping to cope with interference in IIoT environments. Nonetheless, the latter might not be sufficient to ensure reliable delivery of critical data, since industrial networks are prone to severe external interference, which impacts the quality of wireless channels. In this paper, we propose an effective local Channel Selection approach for Reliable communication Scheduling in TSCH networks, dubbed CSRS. CSRS leans on effective assessment metrics to estimate the quality of available communication channels and stateless local exchange of bad-channel blacklists. CSRS is schedule-independent; hence it can be combined with any TSCH schedule, including the standardized Minimal Scheduling Function (MSF), to reduce the negative impact of bad channels. CSRS integration with MSF is implemented in Contiki and validated through extensive realistic trace-based simulations and public testbed experiments. The obtained results demonstrate the efficiency of our proposal in terms of reliability, latency, and energy consumption when compared with state-of-the-art solutions.
 
Article
Network function virtualization (NFV) decouples network functions from dedicated hardware by executing them as virtual network functions (VNFs), realizing network functions in a more flexible, programmable manner and thereby reducing the pressure of resource allocation on the underlying network. A service function chain (SFC) is composed of a set of VNFs in a fixed order. These VNFs need to be deployed on appropriate physical nodes to meet user function requirements, i.e., the placement of the SFC. Traditional solutions mostly use mathematical models or heuristic methods, which are not applicable in the context of large-scale networks. Moreover, existing methods do not integrate intelligent learning algorithms into the service function chain placement (SFCP) problem, which limits the possibility of obtaining better solutions. This paper presents a multi-objective optimization service function chain placement (MOO-SFCP) algorithm based on reinforcement learning (RL). The goal of the algorithm is to optimize the resource allocation, covering several performance indexes such as underlying resource consumption revenue, revenue-to-cost ratio, VNF acceptance rate, and network latency. We model SFCP as a Markov decision process (MDP) and use a two-layer policy network as the intelligent agent. In the RL training stage, the agent comprehensively considers the optimization objectives and formulates the optimal physical node mapping strategy for VNF requests. In the test phase, the whole SFCP is completed according to the trained node mapping strategy. Simulation results show that the proposed algorithm performs well in terms of underlying resource allocation revenue, VNF acceptance rate, and other metrics. In addition, we show that the algorithm is flexible by varying the delay constraint.
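To make the placement decision the RL agent has to learn more tangible, here is a deliberately simple greedy baseline (not the paper's MOO-SFCP algorithm): each VNF in the chain is mapped, in order, to a substrate node with enough CPU, trading off node cost against the latency of the connecting link. The topology, resource figures, and the w_latency weight are all invented.

```python
"""A plain greedy baseline for SFC placement, shown only to make the decision
concrete; it is not the RL-based algorithm of the paper. All numbers invented."""

substrate = {                    # node -> remaining CPU and unit cost
    "n1": {"cpu": 8, "cost": 1.0}, "n2": {"cpu": 4, "cost": 0.5},
    "n3": {"cpu": 16, "cost": 2.0},
}
latency = {("n1", "n2"): 2, ("n2", "n1"): 2, ("n1", "n3"): 5, ("n3", "n1"): 5,
           ("n2", "n3"): 4, ("n3", "n2"): 4}
sfc = [{"vnf": "fw", "cpu": 3}, {"vnf": "nat", "cpu": 2}, {"vnf": "dpi", "cpu": 6}]

def place_chain(sfc, substrate, latency, w_latency=0.3):
    placement, prev = [], None
    for vnf in sfc:
        feasible = [n for n, r in substrate.items() if r["cpu"] >= vnf["cpu"]]
        if not feasible:
            return None                       # request rejected
        def score(n):
            hop = 0 if prev in (None, n) else latency[(prev, n)]
            return substrate[n]["cost"] * vnf["cpu"] + w_latency * hop
        best = min(feasible, key=score)
        substrate[best]["cpu"] -= vnf["cpu"]  # consume the node's CPU
        placement.append((vnf["vnf"], best))
        prev = best
    return placement

print(place_chain(sfc, substrate, latency))
```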
 
Article
The emergence of blockchain technology and cryptocurrencies opened the possibility of building novel peer-to-peer (P2P) resource allocation and sharing models. However, the trustless nature of these P2P models creates the need for reliable and effective trust and reputation mechanisms to minimize the risk of accessing or interacting with malicious peers. Blockchain technology, which is renowned for ensuring trust in trustless environments, provides new mechanisms to overcome the weaknesses of existing reputation and trust management protocols. This paper proposes BTrust, an innovative decentralized and modular trust management system based on blockchain technology for evaluating trust in large-scale P2P networks. To quantify and assess the trustworthiness of peers and identify malicious peers, BTrust introduces a multi-dimensional trust and reputation model that represents trust and reputation scores as a single value derived from multiple parameters with appropriate weightings. Other contributions of this paper include the combination of recommendation and evidence-based approaches into a single system to provide a reliable and versatile way to compute trust in the network, an optimized trustless bootstrapping process to select trustworthy peers among neighbour peers, and an incentive mechanism to encourage truthful feedback. We implement and evaluate the BTrust protocol using simulations and show that BTrust is highly resilient to failures and robust against malicious nodes.
 
Article
As one of the significant features of cache-enabled networks, in-network caching improves the efficiency of content dissemination by offloading content from the remote content provider onto the network closer to the users, and it creates an opportunity for multicast. Since the multicast paradigm is a promising method for sending data to multiple users while saving bandwidth, this paper exploits the multi-rate multicast paradigm to address the caching problem in the cache-enabled environment, jointly considering content caching and transmission rate allocation. The pure caching design developed in most existing works can only achieve limited performance; therefore, we argue that caching should be designed jointly with rate allocation. Our joint model can accommodate the diverse requirements of users by serving them with content from the nearby cache at different transmission rates, in contrast to the single-rate multicast paradigm where users in the same multicast group must share the same rate. We prove that the proposed maximization problem is a biconvex optimization problem. To solve this problem, we exploit the decomposable structure of the joint maximization to develop a heuristic solution that consists of caching decision and rate allocation algorithms. We reduce the search space of the heuristic algorithm to further reduce computation and communication complexity. We carry out an extensive packet-level simulation to evaluate the performance of our proposal against several benchmark schemes. Simulation results show that the proposed heuristic algorithm performs well compared with the benchmarks.
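The biconvex structure mentioned above is what makes an alternating (block-coordinate) scheme natural: fix one block of variables, solve a convex subproblem in the other, and repeat. The toy below illustrates only that pattern on rank-1 matrix factorization, where both subproblems have closed-form least-squares solutions; it is not the paper's caching/rate-allocation model.

```python
"""Sketch of alternating optimization for a biconvex problem: the same
structural idea as splitting caching decisions from rate allocation, shown
here on the toy objective min_{x,y} ||M - x y^T||^2 (convex in each block)."""
import numpy as np

rng = np.random.default_rng(0)
M = np.outer(rng.random(5), rng.random(4)) + 0.01 * rng.random((5, 4))

x, y = rng.random(5), rng.random(4)
for _ in range(50):
    x = M @ y / (y @ y)      # exact minimizer of the x-subproblem (least squares)
    y = M.T @ x / (x @ x)    # exact minimizer of the y-subproblem
    loss = float(np.linalg.norm(M - np.outer(x, y)) ** 2)
print("final biconvex objective:", round(loss, 6))
```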
 
Article
IoT applications have become a pillar for enhancing the quality of life. However, the increasing amount of data generated by IoT devices places pressure on the resources of traditional cloud data centers. This prevents cloud data centers from fulfilling the requirements of IoT applications, particularly delay-sensitive applications. Fog computing is a relatively recent computing paradigm that extends cloud resources to the edge of the network. However, task scheduling in this computing paradigm is still a challenge. In this study, a semidynamic real-time task scheduling algorithm is proposed for bag-of-tasks applications in the cloud–fog environment. The proposed scheduling algorithm formulates task scheduling as a permutation-based optimization problem. A modified version of the genetic algorithm is used to provide different permutations for the arrived tasks at each scheduling round. The tasks are then assigned, in the order defined by the best permutation, to a virtual machine that has sufficient resources and achieves the minimum expected execution time. An optimality study reveals that the proposed algorithm performs comparably to the optimal solution. Additionally, the proposed algorithm is compared with first fit, best fit, the genetic algorithm, and the bees life algorithm in terms of makespan, total execution time, failure rate, average delay time, and elapsed run time. The experimental results show the superiority of the proposed algorithm over the other algorithms. Moreover, the proposed algorithm achieves a good balance between the makespan and the total execution cost and minimizes the task failure rate compared to the other algorithms.
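The core of a permutation-based scheduler is the decoding step: given one chromosome (an ordering of tasks), place each task on a feasible machine with the smallest expected finish time and score the resulting schedule. The sketch below shows one plausible decoder of that kind with invented task and VM figures; memory is treated as a static capacity check, and the GA that evolves permutations is not included. It is an illustration, not the paper's algorithm.

```python
"""Hypothetical decoder for a permutation-based scheduler: walk tasks in the
order given by a permutation (e.g. a GA chromosome) and place each on the
feasible VM with the smallest expected finish time. Numbers are illustrative."""
import random

tasks = [{"id": i, "length": random.randint(5, 30), "mem": random.choice([1, 2, 4])}
         for i in range(8)]
vms = [{"id": v, "mips": m, "mem": 8, "ready": 0.0} for v, m in enumerate([10, 20, 40])]

def decode(permutation, tasks, vms):
    """Return (schedule, makespan) for one permutation (GA fitness = makespan)."""
    state = [dict(v) for v in vms]                 # copy so each decode is independent
    schedule = []
    for idx in permutation:
        t = tasks[idx]
        # memory is a static capacity check here (not consumed), to keep the sketch short
        feasible = [v for v in state if v["mem"] >= t["mem"]]
        # earliest expected finish time = VM ready time + task length / VM speed
        best = min(feasible, key=lambda v: v["ready"] + t["length"] / v["mips"])
        best["ready"] += t["length"] / best["mips"]
        schedule.append((t["id"], best["id"]))
    return schedule, max(v["ready"] for v in state)

perm = list(range(len(tasks)))
random.shuffle(perm)                               # a GA would evolve this ordering
schedule, makespan = decode(perm, tasks, vms)
print("assignment (task, vm):", schedule)
print("makespan:", round(makespan, 2))
```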
 
Article
Cloud service providers rely on bandwidth overprovisioning to avoid violating Service Level Agreements (SLAs) when allocating tenants’ resources in multitenant cloud environments. Tenants’ network usage is usually dynamic, but the shared resources are often allocated statically and in batches, causing resource idleness. This paper envisions an opportunity for optimizing cloud service networks. We propose an autonomous bandwidth allocation mechanism based on Fuzzy Reinforcement Learning (FRL) to reduce the idleness of cloud network resources. Our mechanism dynamically allocates resources, prioritizing tenants and allowing them to exceed the contracted bandwidth temporarily without violating the SLAs. We assess our mechanism by comparing FRL against a pure Fuzzy Inference System (FIS) and pure Reinforcement Learning (RL). The evaluation scenario is an emulation in which tenants share resources from a cloud provider and generate traffic based on real HTTP traffic. The results show that our mechanism increases tenants’ cloud network utilization by 30% compared to FIS while keeping the cloud traffic load within a healthy threshold and more stable than with RL.
 
Article
Border Gateway Protocol (BGP), the default inter-domain routing protocol on the Internet, lacks inherent mechanisms to validate the prefix ownership and integrity of inter-domain routes exchanged among multiple domains, resulting in BGP hijack attacks. Conventional security approaches such as RPKI and BGPSec are centralized and complex by nature, and require changes to the existing routing infrastructure. Recently, blockchain-based solutions have been proposed for validating the routing information exchanged across different domains in a decentralized manner. However, because of lower transaction throughput, longer confirmation time, and huge storage overhead, the existing solutions are not suitable for validating the routing information exchanged among domains, where a large number of prefix allocations and BGP route advertisements are recorded as transactions on the blockchain. This work proposes an Inter-domain Prefix and Route Validation (IPRV) framework for validating prefix ownership and inter-domain routes exchanged among the domains on the Internet. IPRV leverages (a) a Fast and Scalable Directed Acyclic Graph-based Distributed Ledger (FSD2L) to record transactions corresponding to the prefix allocations and BGP route advertisements made by different domains on the Internet, and (b) Route Validation Nodes (RVNs), which maintain the FSD2L to provide prefix and route validation services to the BGP routers within a domain. The IPRV framework is implemented and verified using Docker containers, and simulations performed on large inter-domain networks show that the proposed IPRV framework using RVNs and FSD2L achieves high transaction throughput while minimizing the storage consumption of the FSD2L.
 
Article
This paper introduces a data trading system based on a blockchain network, where a trusted data aggregator collects data from Internet of Things (IoT) device owners and sells them in the form of different packages to multiple buyers. We formulate infinitely repeated games between rational buyers that compete with each other to obtain the required data records. Buyers update their bidding strategies to maximize their profits based on the outcome of previous games. We validate the existence and uniqueness of the Nash equilibrium in the one-shot game and in finite and infinitely repeated games. To ensure data owners’ privacy, a novel trust mechanism design is used to prevent untruthful buyers from winning the game. To prevent the use of a third party such as an auctioneer, all of these methods are implemented as smart contracts on the Hyperledger blockchain. We provide extensive analysis to demonstrate that the proposed system satisfies the properties of completeness, soundness, computational efficiency, truthfulness, budget balance, and individual rationality. Lastly, we provide simulation experiments to demonstrate the performance of our blockchain network using different metrics, such as transaction throughput, latency, and resource consumption under different parameters.
 
Article
Blockchain (BC) and Software-Defined Networking (SDN) are leading technologies that have recently found applications in several network-related scenarios and have consequently experienced growing interest in the research community. Indeed, current networks connect a massive number of objects over the Internet, and in this complex scenario the utilization of BC and SDN has been successfully proposed to ensure security, privacy, confidentiality, and programmability. In this work, we provide a comprehensive survey of these two recent research trends and review the related state-of-the-art literature. We first describe the main features of each technology and discuss their most common and widely used variants. Furthermore, we envision the integration of the two technologies to take advantage of both efficiently. Indeed, we consider their joint utilization, named BC-SDN, motivated by the need for stronger security and privacy. Additionally, we cover the application fields of these technologies both individually and combined. Finally, we discuss the open issues of the reviewed research and describe potential directions for future avenues regarding the integration of BC and SDN. To summarize, the contribution of the present survey spans from an overview of the literature background on BC and SDN to a discussion of the benefits and limitations of BC-SDN integration in different fields, which also raises open challenges and possible future avenues examined herein. To the best of our knowledge, compared to existing surveys, this is the first work that analyzes the aforementioned aspects in light of a broad BC-SDN integration, with a specific focus on security and privacy issues in actual utilization scenarios.
 
Article
In spite of several promising attributes that motivate industry players to adopt visible light communication (VLC), its strong coverage limitation is a major disincentive. A reasonable way to overcome this bottleneck in indoor environments is to capitalize on the prevailing wireless local area network (WLAN) to boost reliable mobile access. To this end, a vertical handover (VHO) scheme is required to smoothly merge VLC networks and WLANs. Thus, this article presents an adaptive VHO scheme which considers the cause of a VLC link disruption and the types of applications running in a user device (UD) to make appropriate handover decisions. The proposed hybrid application-aware VHO (HA-VHO) technique combines three approaches (immediate handover, static dwell timing and dynamic dwell timing), in order to improve the overall handover outcome. Performance comparisons with other VHO designs (immediate VHO, dwell VHO and channel adaptive dwell VHO) reveal that the HA-VHO scheme provides higher data rates in most cases, while adaptively reducing the total signaling cost of inter-network handovers. What is more, it can provide relatively high quality of experience (QoE) for real-time applications.
 
Article
Software-defined networking (SDN) is a technique for designing and managing a network that allows dynamic, programmatic configuration of the network, aiming to improve performance and make monitoring closer to cloud computing than to traditional network management. SDNs comprise various switches and several controllers that forward the switches' data to a station or to other controllers. One of the main challenges in SDNs is finding an appropriate number of controllers and optimal locations for deploying them, known as the controller placement problem. Depending on the network requirements, various criteria (e.g., installation cost, latency, load balancing, etc.) have been proposed to find the best places to install the controllers. This problem, which has attracted researchers' attention, is formulated as a multi-objective optimization problem. In this paper, a novel multi-objective version of the Marine Predator Algorithm (MOMPA) is introduced and hybridized with the Non-dominated Sorting Genetic Algorithm-II. The proposed hybrid algorithm is then discretized with mutation and crossover operators, and the resulting hybrid discrete multi-objective algorithm is exploited to solve the controller placement problem. The proposed algorithm is applied to several real-world software-defined networks and compared with state-of-the-art algorithms regarding LC−S, LC−C, Imbalance, SP, and the obtained Pareto members. The results of the comparisons demonstrate the superiority of the proposed controller placement algorithm.
 
Article
Software-defined Data Center Network (SDDCN) architectures need flexibility, scalability, and improved analytics for reliable networks. However, as the traffic load of the network grows, a substantial collapse in network performance has been observed. A huge amount of network broadcasts, mainly Address Resolution Protocol (ARP) requests, is the dominant factor contributing to this performance degradation. To the best of our knowledge, existing solutions do not focus on reducing the redundant ARP messages processed by the controller, especially for loop topologies of large-scale SDDCNs. We propose a framework, ARP Overhead Reduction (ARP-OR), that not only reduces ARP broadcasts more effectively but also suppresses all redundant ARP messages before the control plane processes them. ARP-OR scales to tree, fat-tree, and diamond (a fat-tree variant) topologies. It is prototyped on the RYU controller, and experiments were conducted on the Mininet emulator. ARP-OR convincingly reduces ARP traffic compared to existing approaches.
 
Article
Distributed multi-controller deployment is a reliable method of achieving scalability in Software-Defined Networking (SDN). With the explosive growth of network traffic, distributed SDN networks face the problem of load imbalance among multiple controllers due to dynamic changes in network traffic. Existing approaches solve this problem by switch migration considering only controller load and delay, which leads to high migration cost and low migration efficiency. Therefore, in this paper, a Fuzzy Satisfaction-based Switch Migration (FSSM) strategy is proposed for load balancing of distributed controllers. First, to monitor the controller load, a balancing judgment matrix and a switch selection degree are introduced to select the emigration domain and the switches to migrate. Second, migration cost and load balancing rate are considered the main factors of load balancing. They are transformed into a migration competition model, which is used to compete for optimization rights. Third, the model is quickly solved by using an improved ant colony algorithm to select the immigration domain. Finally, simulation results show that, compared with existing migration strategies, the FSSM strategy not only ensures the load balancing rate but also reduces the migration cost by about 26.8% and the average response time of controllers by about 0.33 s when the network changes dynamically.
 
Article
End-to-End (E2E) services in beyond-5G (B5G) networks are expected to be built upon resources and services distributed in multi-domain, multi-technology environments. In such scenarios, key challenges around multi-domain management and collaboration need to be tackled. The ETSI Zero-touch network and Service Management (ZSM) architectural framework provides the structure and methods for effectively delivering E2E network services. ZSM pursues cross-domain automation with minimum human intervention through two main enablers: Closed Control Loops (CCLs) and Artificial Intelligence (AI). In this work, we propose a multi-domain ZSM-based architecture aimed at B5G scenarios where several per-domain CCLs leverage Machine Learning (ML) methods to collaborate in E2E service management tasks. We instantiate the architecture in the use case of multi-domain automated healing of Dynamic Adaptive Streaming over HTTP (DASH) video services. We present two ML-assisted techniques, first to estimate a Service Level Agreement (SLA) violation through an Edge-based Quality of Experience (QoE) probe, and second to identify the root cause at the core transport network. Results from the experimental evaluation in an emulation environment using real mobile network traces point to the potential benefits of applying ML techniques for QoS-to-QoE estimation at Multi-Access Edge Computing facilities and for correlation to faulty transport network links. Altogether, the work contributes towards a vision of ML-based sandbox environments in the spirit of E2E service and network digital twins for the realization of automated, multi-domain CCLs in B5G.
 
Article
The smart manufacturing industry (Industry 4.0) uses Internet of Things (IoT) devices, referred to as Industrial IoT (IIoT), to automate the industrial environment. These IIoT devices generate a massive amount of data called big data. Using a fog computing architecture to process this extensive data reduces the service time and the service cost for IIoT applications. The primary challenge is to design better service placement strategies to deploy IIoT service requests on fog nodes so as to minimize service costs and ensure the Quality of Service (QoS) of IIoT applications. The placement of IIoT services on fog nodes can therefore be considered an NP-hard problem. In this work, two meta-heuristic-based hybrid algorithms, MGAPSO and EGAPSO, are developed by combining GA with PSO and Elitism-based GA (EGA) with PSO, respectively. Further, experiments are carried out on a two-level fog computing framework developed using Docker containers on 1.4 GHz, 64-bit quad-core processor devices. Experimental results demonstrate that the proposed hybrid EGAPSO algorithm minimizes service time, service cost, and energy consumption and ensures the QoS of IIoT applications compared with the other proposed and state-of-the-art service placement strategies considered for the performance evaluation.
 
Article
A bidirectional long short-term memory (Bi-LSTM) algorithm is proposed to resolve the problem of energy-efficient virtual network embedding (VNE). In the VNE process, a large number of attributes can provide information for efficient embedding. This paper divides them into three categories: “network characteristics”, “embedding sequence”, and “task type”, and comprehensively analyzes their influence on the embedding performance of virtual networks. This study uses a graph convolutional network (GCN) to extract the network characteristics of virtual and substrate networks. In this approach, we feed the network-topology graph containing nodes, links, and topological associations into the GCN, which rapidly extracts network features. We then use the network features as the model input for the Bi-LSTM neural network to integrate historical and future embedding sequences into the training model. In this process, in conjunction with meta-reinforcement learning to accumulate experience over various virtual-network tasks, we systematically adjust the model parameters and thus achieve automatic tuning of the neural networks. Simulation results show that, compared to similar existing algorithms, the proposed algorithm improves the acceptance ratio and the average revenue-to-cost ratio and reduces network energy consumption after integrating the network characteristics, embedding sequence, and task type.
 
Figures: the general procedure of feature extraction in botnet detection; the process of the proposed botnet detection approach; the proposed model's accuracy compared for the four signatures; ROC curves for (a) TCP, (b) UDP, and (c) all flows in the test data; detection accuracy of feature-based classification compared with the proposed approach.
Article
Botnets are considered to be one of the most serious cybersecurity threats in recent years. While botnets have been widely studied, they are constantly evolving, becoming more sophisticated and robust against detection systems. Current botnet detection approaches commonly use manual feature engineering or analyze packet contents, violating user privacy. Although some studies use raw packet bytes for botnet detection, this approach is rarely reported for the comprehensive ISCX botnet dataset. In this paper, we propose a deep learning-based network traffic analyzer for botnet detection, which automatically extracts suitable features from raw packet data. The raw data are extracted only from the headers of the first few packets in a flow. The proposed approach removes the cost of manual feature engineering, preserves user privacy, and offers early detection of malicious traffic. We further enrich the raw data with temporal information of packets and field correlations to construct four different flow signatures. The evaluations are performed on the ISCX botnet dataset, which contains new botnet types in its test data. We show the effectiveness of botnet detection based on raw data by comparing the performance of the proposed approach against several feature-based methods. The evaluation results also show that the proposed approach outperforms several state-of-the-art studies based on the same dataset and provides a high accuracy of 97.13% in classifying network traffic.
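As an illustration of working directly on raw header bytes, the sketch below builds a fixed-size input from the first few packet headers of a flow and feeds it to a small 1-D convolutional classifier in PyTorch. The packet count, header length, network width, and the fake flow are all assumptions for the example; this is not the architecture evaluated in the paper.

```python
"""Illustrative sketch (not the paper's exact architecture): turning raw header
bytes of the first few packets of a flow into a fixed-size input for a small
1-D CNN classifier. Sizes and the example flow are hypothetical."""
import torch
import torch.nn as nn

N_PACKETS, HDR_BYTES = 5, 40          # first packets per flow, bytes kept per header

def flow_to_tensor(header_bytes_list):
    """Pad/truncate each packet header to HDR_BYTES and scale bytes to [0, 1]."""
    rows = []
    for raw in header_bytes_list[:N_PACKETS]:
        padded = raw[:HDR_BYTES].ljust(HDR_BYTES, b"\x00")
        rows.append([b / 255.0 for b in padded])
    while len(rows) < N_PACKETS:               # flows with fewer packets
        rows.append([0.0] * HDR_BYTES)
    return torch.tensor(rows, dtype=torch.float32).flatten()   # length N_PACKETS*HDR_BYTES

class RawByteCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 2),                  # benign vs. botnet
        )
    def forward(self, x):                      # x: (batch, N_PACKETS*HDR_BYTES)
        return self.net(x.unsqueeze(1))        # add the channel dimension

# Usage on a fake flow (two truncated "headers"):
x = flow_to_tensor([b"\x45\x00\x00\x3c" * 10, b"\x45\x00\x00\x28" * 10])
logits = RawByteCNN()(x.unsqueeze(0))
print("class logits:", logits.detach().numpy())
```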
 
Article
The evaluation of the impact of using Machine Learning in the management of softwarized networks is considered in multiple research works. In this paper, we propose to evaluate the robustness of online learning for optimal network slice placement. A major assumption in this study is that slice request arrivals are non-stationary. We precisely simulate unpredictable network load variations and compare two Deep Reinforcement Learning (DRL) algorithms, a pure DRL-based algorithm and a heuristically controlled DRL as a hybrid DRL-heuristic algorithm, in order to assess the impact of these unpredictable changes in traffic load on the algorithms' performance. We conduct extensive simulations of a large-scale operator infrastructure. The evaluation results show that the proposed hybrid DRL-heuristic approach is more robust and reliable than pure DRL in real network scenarios.
 
Article
Named data networking is one of the proposed architectures for the future Internet. In this architecture, names play an essential role: packets have names that are used instead of IP addresses, and based on these names, packets are forwarded through the network routers. For this purpose, named data network routers have three data structures: the Content Store (CS), the Pending Interest Table (PIT), and the Forwarding Information Base (FIB). The PIT plays an important role; it stores the information of all Interest packets waiting for Data packets. The PIT should be able to search, delete, and update information quickly while taking up little memory space. In this paper, a new variant of the Cuckoo filter, called the Two-dimensional Neighbor-based Cuckoo Filter (2DNCF), is proposed to improve the performance of the PIT. The proposed 2DNCF uses the physical neighbor of the bucket selected by the Cuckoo filter, which increases the utilization of neighboring buckets and the performance of the proposed filter. In this data structure, which is essentially a two-dimensional Cuckoo filter, the second hash function is used less often than the first one; as a result, it is more efficient in insertion, deletion, and lookup than the standard Cuckoo filter. The simulation results show that this filter has a lower false-positive rate than the standard Cuckoo filter. Accordingly, it improves the insertion, deletion, and lookup performance of the PIT compared to the other solutions.
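For context, below is a compact sketch of the standard Cuckoo filter (partial-key cuckoo hashing with insert, lookup, and delete), the baseline structure that 2DNCF builds on; the neighbor-bucket and two-dimensional extensions themselves are not reproduced, and the bucket count, fingerprint size, and NDN-style names are illustrative.

```python
"""Compact sketch of a standard Cuckoo filter; parameters are illustrative and
the 2DNCF neighbor-bucket extension is not included."""
import random, hashlib

class CuckooFilter:
    def __init__(self, n_buckets=128, bucket_size=4, max_kicks=200):
        # n_buckets is a power of two so the XOR-based alternate index is an involution
        self.n, self.bs, self.max_kicks = n_buckets, bucket_size, max_kicks
        self.buckets = [[] for _ in range(n_buckets)]

    def _fingerprint(self, item):
        return hashlib.sha1(item.encode()).digest()[:1]          # 1-byte fingerprint

    def _indexes(self, item):
        fp = self._fingerprint(item)
        i1 = int.from_bytes(hashlib.md5(item.encode()).digest(), "big") % self.n
        i2 = (i1 ^ int.from_bytes(hashlib.md5(fp).digest(), "big")) % self.n
        return fp, i1, i2                                        # partial-key cuckoo hashing

    def insert(self, item):
        fp, i1, i2 = self._indexes(item)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bs:
                self.buckets[i].append(fp)
                return True
        i = random.choice((i1, i2))                              # evict and relocate
        for _ in range(self.max_kicks):
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = (i ^ int.from_bytes(hashlib.md5(fp).digest(), "big")) % self.n
            if len(self.buckets[i]) < self.bs:
                self.buckets[i].append(fp)
                return True
        return False                                             # filter considered full

    def contains(self, item):
        fp, i1, i2 = self._indexes(item)
        return fp in self.buckets[i1] or fp in self.buckets[i2]

    def delete(self, item):
        fp, i1, i2 = self._indexes(item)
        for i in (i1, i2):
            if fp in self.buckets[i]:
                self.buckets[i].remove(fp)
                return True
        return False

cf = CuckooFilter()
cf.insert("/video/segment/42")          # an NDN-style name tracked in the PIT
print(cf.contains("/video/segment/42"), cf.contains("/video/segment/43"))
cf.delete("/video/segment/42")
print(cf.contains("/video/segment/42"))
```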
 
Article
Virtual network functions (VNFs) are one of the pillars of a Cloud network: they separate network functions from their dedicated hardware devices, such as routers, firewalls, and load balancers, to host their services on virtual machines. VNFs are responsible for network services that run on virtual machines and can each be connected alone or organized together into a single enclosure to use all the resources available in that enclosure. This flexibility allows physical and virtual resources to be used in a way that ensures control over power consumption, balance in resource use, and minimization of costs and latency. To consolidate VNF groups into a minimum number of virtual machines (VMs), estimating the association relation with a measure of confidence in the context of possibility theory, we propose a new Fuzzy-FCA approach for VNF placement based on formal concept analysis (FCA) and fuzzy logic in a mixed environment of cloud data centers and a Multi-access Edge Computing (MEC) architecture. The inclusion of this architecture in the cloud environment ensures the distribution of compute resources to the end user in order to reduce end-to-end latency. To confirm the effectiveness of our solution, we compared it to one of the best algorithms studied in the literature, namely the MultiSwarm algorithm. The results of the experiments carried out show the feasibility and efficiency of our algorithm and confirm its capability to maximize and balance the use of resources while minimizing latency and the cost of energy consumption. In terms of average latency, our solution shows a slight increase of 16% compared to MultiSwarm, and an average gain of 49% in run time compared to the same algorithm.
 
Article
Forthcoming wireless generations, namely the fifth generation and beyond, are experiencing various roll-out, planning, and implementation issues due to spectrum insufficiency. This spectrum shortage arises from the growing number of wireless subscribers, significant traffic demands, inefficient spectrum distribution, and coexistence problems. Recognizing free spectrum for wireless communication services is therefore a critical requirement. The free spectrum can be predicted and modelled using the spectrum sensing functionality of cognitive radio in the potential sub-THz band (0.1–1 THz) for beyond-fifth-generation networks. Owing to the excellent prediction and classification capabilities of deep learning, this research applies deep learning to spectrum sensing. The spectrum sensing data is a time-series sequence of binary 1 (busy slots) and binary 0 (free slots). To achieve this, a novel Long Boosted Memory Algorithm (LBMA) is proposed. Individual Long Short-Term Memory (LSTM) networks are weak predictors: they struggle to model long-term dependencies, such as predicting future primary-user presence from past time stamps, and are prone to overfitting. Therefore, multiple weak LSTM predictors are integrated into a strong predictor that resists overfitting using the AdaBoost technique, yielding robust spectrum predictions. LBMA uses input vectors such as RSSI, the distance between the cognitive radio user and gateways, and energy vectors to train the model. LBMA has been compared and evaluated against existing deep learning methods on metrics such as training time, accuracy, sensitivity, specificity, detection probability, cross-validation, and time complexity under different SNR scenarios (0 to 20 dB). The simulation results indicate that the proposed LBMA outperforms the existing algorithms, with an accuracy of 99.3, a sensitivity of 93.1, a specificity of 92.9, a sensing time of 1.7599 s with the lowest time complexity, and a training time of 56 s.
 
Article
The development and expansion of the Internet and cyberspace have increased attacks on computer systems; therefore, Intrusion Detection Systems (IDSs) are needed more than ever. Machine learning algorithms have recently been used successfully in IDSs; however, due to the high dimensionality of IDS data, Feature Selection (FS) plays an essential role in these systems' performance. In this paper, a binary version of the Farmland Fertility Algorithm (FFA), called BFFA, is presented for FS in IDS classification. In the proposed method, a V-shaped transfer function is used to move the FFA processes into binary space, converting the continuous positions of the solutions in the FFA algorithm into binary form. A hybrid approach combining classifiers and the BFFA is presented as a fast and robust IDS. The proposed method is tested on two well-known IDS datasets, namely NSL-KDD and UNSW-NB15, and is compared on the Accuracy, Precision, Recall, and F1-score criteria with K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), AdaBoost (ADA_BOOST), and Naive Bayes (NB) classifiers. The simulation results show that the proposed method performs better than the classifiers on the Accuracy, Precision, and Recall criteria; moreover, the proposed method has a better run time in the FS operation.
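The V-shaped transfer step mentioned above is a standard trick for binarizing continuous metaheuristics: the magnitude of the continuous update sets the probability of flipping each feature bit. The sketch below shows that mechanism in isolation, with |tanh(x)| as the transfer function and random values standing in for the FFA update; it is not the full BFFA.

```python
"""Sketch of the V-shaped transfer mechanism used to binarize a continuous
metaheuristic for feature selection; the FFA update itself is not reproduced
and the continuous steps below are random stand-ins."""
import numpy as np

rng = np.random.default_rng(42)

def v_shaped(x):
    """A standard V-shaped transfer function: |tanh(x)| in [0, 1)."""
    return np.abs(np.tanh(x))

def binarize(continuous_step, current_bits):
    """Flip each feature bit with probability given by the transfer function."""
    flip = rng.random(continuous_step.shape) < v_shaped(continuous_step)
    return np.where(flip, 1 - current_bits, current_bits)

n_features = 10
bits = rng.integers(0, 2, n_features)          # current feature subset (1 = selected)
step = rng.normal(0, 1, n_features)            # continuous update from the optimizer
new_bits = binarize(step, bits)
print("old subset:", bits)
print("new subset:", new_bits)
```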
 
Article
Despite the growing popularity of immersive video applications during the last few years, the stringent low latency requirements of this kind of services remain a major challenge for the existing network infrastructure. Edge-assisted solutions compensate for network latency by relying on cache-enabled edge servers to bring frequently accessed video content closer to the client. However, these approaches often require historical request traces from previous watching sessions or adopt passive caching strategies subject to the cold-start problem and prone to playout freezes. This paper introduces Explora-VR, a novel edge-assisted content prefetching method for tile-based 360 degrees video streaming. This method leverages the client's rate adaptation heuristic to preemptively retrieve the content that the viewer will most likely watch in the upcoming segments, and loads it into a nearby edge server. At the same time, Explora-VR incrementally builds a dynamic collective buffer for serving the requests from active streaming sessions based on the estimated popularity of video tiles per segment. An evaluation of the proposed method was conducted on head movement traces collected from 48 unique users while watching three different 360 degrees videos. Results show that Explora-VR is able to serve over 98% of the client requests from the cache-enabled edge server, leading to an average increase of 2.5x and 1.4x in the client's perceived throughput, compared to a conventional client-server setup and a least recently used caching policy, respectively. This enables Explora-VR to serve higher quality video content while providing a freeze-free playback experience and effectively reducing network traffic to the content server.
 
Article
In macrocell-to-small-cell switching, the handover decision is deferred until the A3 event occurs, i.e., the reference signal received power (RSRP) from the serving macrocell falls below a predefined threshold. Since RSRP cannot measure the user-perceived throughput precisely, the data rate perceived from the macrocell may fall below the requested data rate prior to handover. We refer to this period as the blackout period. In the ultra-dense network (UDN) scenario, mobile terminals (MTs) usually prefer to remain connected to the macrocells, as the transmitting powers of macrocells are much higher than those of small cells. As a result, the MTs largely fail to utilize the offloading capacity of the small cells in hotspot areas, which makes the blackout period problem more severe. The existing handover mechanisms cannot deal with the blackout period adequately. In this work, we propose a handover decision metric, namely service goodness, which is computed based on precise throughput estimation. Such throughput estimation considers medium access control (MAC) scheduling details as well as the fluctuation of the interference level in the UDN scenario. Simulation results confirm that our proposed service goodness based handover algorithm outperforms the existing handover mechanisms. Furthermore, to minimize the duration of the blackout period, we also propose a soft handover technique (SHT) which combines the data rates received from multiple access networks using dual connectivity support. The superiority of SHT over semisoft handover has been established through Markov model based analysis and simulation.
 
Article
Telecommunications networks have penetrated a range of industries. They are now a major infrastructure on both the domestic and global levels, connecting countries together. Even when the age of Beyond-5G arrives, the nodes that make up core networks need to be multifunctional and stable, which means that multiple types of nodes, such as conventional nodes and new nodes, are likely to coexist. When a fault occurs in a network, it is necessary to determine the fault point and repair it quickly to recover the services provided. However, if multiple node types exist in a network, it is difficult to consider all of their characteristics because of the individual differences among nodes of different types. Thus, determining how to restore the network relies on the experience of the maintenance staff, making quick recovery difficult. In particular, alarms generated by nodes vary depending on the node type, so it is difficult to accurately determine the fault point. Even if the fault point and the cause of the fault are found, how the fault can be corrected also varies depending on the node type. Thus, it is difficult to select the right recovery action at the fault point because of each node’s “individuality.” Consequently, it takes a long time to restore the network. In this paper, we propose a method for quickly determining fault points in a network that uses multiple node types, and we also propose a method that can recommend the optimal recovery action. Specifically, we analyze the rule learning technologies used to determine fault points. A problem with conventional methods is that, if a fault occurs on a node type that is different from those for which rules have been learned, alarms may not be generated due to the “individuality” of each node. This can reduce the accuracy of determining the fault point using learned rules, which in turn extends recovery time. To solve this problem, we propose and evaluate rule learning methods that use similarity between rules in terms of the alarms generated by a fault. To determine methods for recommending the optimal recovery action, we analyze technologies for finding, from among several possible recovery actions, the one that is most likely to recover from the fault. A problem with conventional technology (recovery action recommendation) is that, when the number of node types used in the network increases, it recommends an action based simply on the track records of all nodes, and this recommended action may be wrong due to the “individuality” of each node. This can reduce the accuracy of recommended actions, which in turn extends recovery time. To solve this problem, we propose and evaluate methods for selecting optimal recovery actions from an aggregation viewpoint of the track records of multiple nodes. These methods reduce the fault point determination process by an average of 21.5% and the recovery action analysis process by an average of 17.2% compared with conventional methods.
 
Article
The management of IoT solutions is a complex task due to their inherent distribution and heterogeneity. IoT management approaches focus on devices and connectivity, thus lacking a comprehensive understanding of the different software, hardware, and communication components that comprise an IoT-based solution. This paper proposes a novel four-layer IoT Management Architecture (IoTManA) that encompasses various aspects of a distributed infrastructure for managing, controlling, and monitoring software, hardware, and communication components, as well as dataflows and data quality. Our architecture provides a cross-layer graph-based view of the end-to-end path between devices and the cloud. IoTManA has been implemented in a set of software components named IoT management system (IoTManS) and tested in two scenarios—Smart Agriculture and Smart Cities—showing that it can significantly contribute to harnessing the complexity of managing IoT solutions. The cross-layer graph-based modeling of IoTManA enables the implemented management system (IoTManS) to detect and identify root causes of typically distributed failures occurring in IoT solutions. We conducted a performance analysis of IoTManS focusing on two aspects—failure detection time and scalability—to demonstrate application scenarios and capabilities. The results show that IoTManS can detect and identify the root cause of failures in 806 ms to 90,036 ms depending on its operation mode, adapting to different IoT needs. Also, the IoTManS scalability is directly proportional to the scalability of the underlying IoT Platform, managing up to 5,000 components simultaneously.
 
Article
A Cloud-Network Slice (CNS) is defined as an end-to-end infrastructure composed of computing, networking, and storage resources, and it is expected to be a key enabler for novel verticals such as Industry 4.0, IoT, and Vehicular Networks. This paper presents the design, implementation, and integration of the Architecture for Orchestration and Management of Cloud-Network Slices (CNS-AOM), a modular architecture to orchestrate and manage slice resources and services in CNSs. The CNS-AOM is designed and implemented considering three important characteristics: (i) the business model called Slice-as-a-Service (SlaaS); (ii) the multiple administrative and technological domains; and (iii) slice elasticity, i.e., the capacity to dynamically grow and shrink the slice resources to improve service performance. To prove the feasibility of our proposal, two Proofs of Concept (PoC) are implemented in real environments to validate the CNS-AOM. First, an end-to-end content distribution network (CDN) service is deployed across three different cities in Brazil to emphasize the multiple domains. Second, we present an IoT service using a fully-featured commercial service platform called dojot, which is instantiated and orchestrated by the proposed architecture. The dojot slice is instantiated overseas in four cities across two countries. The evaluation for the CDN slice considers the appropriate metrics that should be monitored and the actual services that should be instantiated to meet the end-user’s requirements depending on their location. Moreover, in the dojot slice, elasticity operations (vertical and horizontal) are tested and evaluated along with the time taken to deploy the slice infrastructure and the service. The main contributions of this paper are: (i) the design, implementation, and integration of the CNS-AOM; (ii) the orchestration control-loop of the slice resources; and (iii) the execution of real proof-of-concept scenarios that demonstrate the feasibility of the CNS-AOM to instantiate and orchestrate services across geographically distant cities.
 
Article
The growth of mobile data traffic has led to the use of dense and heterogeneous networks with small cells in 4G and 5G. To manage such networks, dynamic and automated solutions for operation and maintenance tasks are needed to reduce human errors, save on operating expenses (OPEX), and optimize network resources. Self-Organizing Networks (SON) are a promising tool for achieving this goal, and one of the essential use cases is Physical Cell Identity (PCI) assignment. There are only 504 unique PCIs, which inevitably leads to PCI reuse in dense and heterogeneous networks. This can create PCI collisions and confusions, but also a range of modulo PCI issues. These PCI issues can lead to cases where User Equipments (UEs) cannot properly identify a cell or cells cannot properly identify UEs, especially during handovers, which leads to radio communication failure. Therefore, proper PCI assignment is crucial for network performance. In this paper, we first study the impact of different PCI issues on network performance by conducting experiments on real-life hardware. Based on the findings from the experiments, we create two SON algorithms: a Weighted Automatic Neighbor Relations (ANR) algorithm and a PCI assignment algorithm, ALPACA. The Weighted ANR creates neighbor relations based on measurements from the network and calculates weights for cells and neighbor relations. ALPACA uses these weights to assign PCI values to cells in a way that avoids PCI issues or at least minimizes their effects on the network. ALPACA works in phases to allow it to adapt to dynamic network topology changes and continuously optimize the network. We validate and evaluate our approach using a simulator package that we have developed. The results show that ALPACA can resolve all collisions and confusions for up to 1000 cells in a highly dense topology, as well as minimize the effects of inevitable modulo PCI issues.
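To make collision, confusion, and modulo constraints concrete, here is a hypothetical greedy assignment over a small neighbor-relation graph: a cell never reuses a neighbor's PCI (collision), never takes a PCI already held by another neighbor of one of its neighbors (confusion), and prefers a PCI whose mod-3 group differs from its neighbors'. The topology is invented, and this illustrates the constraints only, not the ALPACA algorithm or its weighting scheme.

```python
"""Hypothetical greedy PCI assignment over a neighbor-relation graph; an
illustration of the collision/confusion/mod-3 constraints, not ALPACA."""
N_PCI = 504

def assign_pcis(neighbors):
    """neighbors: dict cell -> set of neighboring cells."""
    pci = {}
    for cell in sorted(neighbors, key=lambda c: -len(neighbors[c])):  # densest first
        forbidden = set()
        for nb in neighbors[cell]:
            if nb in pci:
                forbidden.add(pci[nb])                     # would be a collision
            # taking a PCI already used by another neighbor of nb would confuse nb
            forbidden |= {pci[nn] for nn in neighbors[nb] if nn in pci and nn != cell}
        used_mod3 = {pci[nb] % 3 for nb in neighbors[cell] if nb in pci}
        candidates = [p for p in range(N_PCI) if p not in forbidden]
        # prefer a PCI whose mod-3 group differs from all assigned neighbors
        preferred = [p for p in candidates if p % 3 not in used_mod3]
        pci[cell] = (preferred or candidates)[0]
    return pci

topology = {"A": {"B", "C"}, "B": {"A", "C", "D"}, "C": {"A", "B", "D"}, "D": {"B", "C"}}
print(assign_pcis(topology))
```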
 
Article
This paper investigates the joint scaling and placement problem of network services composed of virtual network functions (VNFs) provided inside a cluster managing multiple points of presence (PoPs). Aiming to increase VNF service satisfaction rates and minimize deployment cost, we use both transport- and cloud-aware VNF scaling as well as multi-attribute decision making (MADM) algorithms for VNF placement inside the cluster. The original joint scaling and placement problem is known to be NP-hard, so we solve it by separating the scaling and placement problems and addressing them individually. The experiments use a dataset containing information from a deployed digital-twin network service. They show that considering both transport and cloud parameters during scaling and placement performs more efficiently than cloud-only or transport-only scaling followed by placement. One of the MADM algorithms, the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS), yields the lowest deployment cost and the highest VNF request satisfaction rates compared with transport-only or cloud-only scaling and the other investigated MADM algorithms. Our simulation results indicate that considering both transport and cloud parameters in various availability scenarios of cloud and transport resources has significant potential to increase request satisfaction rates when VNF scaling and placement are performed using the TOPSIS scheme.
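For readers unfamiliar with TOPSIS, the following is a minimal sketch of ranking candidate PoPs for VNF placement with it. The attribute set and weights are illustrative assumptions; the paper's exact transport/cloud attributes are not reproduced here.

```python
# Minimal TOPSIS sketch for ranking candidate PoPs (illustrative attributes).
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j] is True if larger is better."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)           # vector normalization
    v = norm * np.asarray(weights, dtype=float)    # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                 # closeness: higher is better

# Columns (assumed): available CPU, available bandwidth, deployment cost, link delay
pops = [[16, 10, 0.8, 12],
        [ 8, 40, 0.5, 30],
        [32, 20, 1.2,  8]]
scores = topsis(pops, weights=[0.3, 0.3, 0.2, 0.2],
                benefit=[True, True, False, False])
print("best-ranked PoP index:", scores.argmax(), "scores:", scores)
```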
 
Article
Next-generation networks such as 5th generation (5G) and 6th generation (6G) networks require high security, low latency, high reliability, and high capacity. In these networks, reconfigurable wireless network slicing is considered one of the key elements. Reconfigurable slicing allows operators to run various instances of the network on a single infrastructure to achieve better quality of service (QoS). The QoS can be achieved by reconfiguring and optimizing these networks using artificial intelligence and machine learning algorithms. To develop a smart decision-making mechanism for network management and to limit network slice failures, machine learning-enabled reconfigurable wireless network solutions are required. In this paper, we propose a hybrid deep learning model that consists of a convolutional neural network (CNN) and a long short-term memory (LSTM) network. The CNN performs resource allocation, network reconfiguration, and slice selection, while the LSTM handles statistical information (load balancing, error rate, etc.) regarding network slices. The applicability of the proposed model is validated using multiple unknown devices, slice failure, and overloading conditions. The proposed model achieves an overall accuracy of 95.17%, which reflects its applicability.
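A hedged sketch of what a hybrid CNN+LSTM classifier over slice-related time series could look like is given below. The input shape, layer sizes, and three-way slice output are assumptions for illustration, not the paper's model.

```python
# Hedged CNN+LSTM sketch for classifying slice-related time-series features.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(timesteps=64, features=8, num_slices=3):
    model = models.Sequential([
        layers.Input(shape=(timesteps, features)),
        layers.Conv1D(32, kernel_size=3, activation="relu"),   # local patterns
        layers.MaxPooling1D(pool_size=2),
        layers.LSTM(64),                                       # temporal statistics
        layers.Dense(32, activation="relu"),
        layers.Dense(num_slices, activation="softmax"),        # slice selection
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
model.summary()
```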
 
Article
Avoiding problems such as packet loss in the transport network is crucial for mobile network providers to offer high-quality services reliably and without interruption. In this paper, we propose and compare three transmission strategies, namely caching, Network Coding (NC), and repetition-enabled transmission in the User Plane (UP) of the mobile backhaul, for network operators to prevent such performance degradation. In the proposed NC-enabled transmission method, NC provides robustness to transport network failures such that no further retransmission is required from the User Equipment (UE), in contrast to conventional approaches where UE applications perform retransmissions. The proposed scheme requires only a minor modification to the packet structure of the UP protocol, which entails a small development effort and no new extensions to current UE standard features. We also discuss the placement of these strategies in the O-RAN protocol stack and in the core network, and propose a new architecture that can utilize caching, repetition, and NC features in the mobile network architecture. Our simulation results show that a 1% packet loss ratio in the backhaul link results in an additional total transmission time of 59.44% compared with normal GPRS Tunneling Protocol-User Plane (GTP-U) transmission. Applying NC at rates of 1% and 2% reduces this value to 52.99% and 56.26%, respectively, which is also better than the total transmission time of some previously studied dynamic replication schemes, while keeping bandwidth utilization low. On the caching side, a latency reduction of about 20% can be achieved with a cache size of 100 MB. At the end of the paper, we summarize some of the benefits and limitations of using these three strategies in the UP of mobile backhaul networks.
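The following toy Monte Carlo sketch contrasts plain transmission with a simple XOR-parity form of network coding in a lossy backhaul. The recovery model (one repaired loss per coding block) and all parameters are assumptions and do not reproduce the paper's GTP-U simulation or its reported figures.

```python
# Toy comparison: share of packets still lost after simple XOR-parity NC
# (one parity per block, i.e. 1% overhead with block=100) vs. no coding.
import random

def simulate(n_packets=100_000, loss=0.01, block=100, use_nc=False, seed=1):
    rng = random.Random(seed)
    unrecovered = 0
    for _ in range(0, n_packets, block):
        lost = sum(1 for _ in range(block) if rng.random() < loss)
        if use_nc and lost == 1:
            lost = 0          # one XOR parity per block repairs a single loss
        unrecovered += lost
    return unrecovered / n_packets

print("loss needing retransmission, plain :", simulate(use_nc=False))
print("loss needing retransmission, NC 1% :", simulate(use_nc=True))
```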
 
Article
The efficacy of anomaly detection is fundamentally limited by the descriptive power of the input events. Today's anomaly detection systems are optimized for coarse-grained events of specific types, such as system logs and API traces. An attack can evade detection by avoiding noticeable manifestations in the coarse-grained events. Intuitively, we could fix these loopholes by reducing the event granularity, but this raises two challenges. First, fine-grained events may not have the rich semantics needed for feature construction. Second, anomaly detection algorithms may not scale to the volume of fine-grained events. We propose the application profile extractor (APE), which utilizes compression-based sequential pattern mining to generate compact profiles from fine-grained program traces for anomaly detection algorithms. With minimal assumptions on the event semantics, profile generation is compatible with a wide variety of program traces. In addition, the compact profiles allow anomaly detection algorithms to scale to the high data rate of fine-grained program tracing. We also outline scenarios that justify the need for anomaly detection with fine-grained program tracing events.
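To illustrate the general idea of compression-driven profile extraction, here is a hedged sketch that keeps the trace n-grams with the highest compression gain (symbols saved if the pattern were replaced by a single token). This is only an illustration of the concept, not the APE algorithm itself; the toy trace is made up.

```python
# Hedged sketch: build a compact profile by ranking repeated n-grams of a
# fine-grained event trace by their compression gain.
from collections import Counter

def profile(trace, max_len=4, top_k=5):
    gains = Counter()
    for n in range(2, max_len + 1):
        counts = Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))
        for pattern, c in counts.items():
            if c > 1:
                gains[pattern] = c * (n - 1)   # symbols saved by substitution
    return gains.most_common(top_k)

events = list("ABCABCABXYABCXY")               # toy fine-grained trace
for pattern, gain in profile(events):
    print("".join(pattern), "gain:", gain)
```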
 
Article
The Transport Layer Security (TLS) protocol is widely used to protect end-to-end communications between network peers (applications or nodes). However, administrators usually have to configure the parameters needed to establish TLS connections (e.g., cryptographic algorithms or authentication credentials) manually, and this way of managing security connections becomes infeasible when the number of network peers is high. This paper proposes a TLS management framework that configures and manages TLS connections in a dynamic and autonomous manner. The solution is based on well-known standardized protocols and models that provide the configuration parameters necessary to establish a TLS connection between two network nodes. Nowadays, this is required in several application scenarios, such as virtual private networks, virtualized network functions, or service function chains. Our framework is based on standard elements of the Software-Defined Networking paradigm, widely adopted to provide flexibility to network management in scenarios such as those mentioned above. The proposed framework has been implemented in a proof of concept to validate its suitability for managing the dynamic configuration of TLS connections. The experimental results confirm that the implementation of this framework enables an operable and flexible procedure for managing TLS connections between network nodes in different scenarios.
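As a hedged illustration of turning a centrally provided TLS policy into a local configuration, the sketch below builds a Python `ssl.SSLContext` from a policy dictionary. The policy fields and file paths are hypothetical and do not correspond to the framework's actual data model or management protocol.

```python
# Hedged sketch: apply a controller-provided TLS policy to a local SSL context.
import ssl

policy = {
    "min_version": "TLSv1_3",
    "ca_file": "/etc/slice/ca.pem",          # hypothetical paths
    "cert_file": "/etc/slice/node.pem",
    "key_file": "/etc/slice/node.key",
}

def build_context(policy):
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                     cafile=policy["ca_file"])
    ctx.minimum_version = getattr(ssl.TLSVersion, policy["min_version"])
    ctx.load_cert_chain(policy["cert_file"], policy["key_file"])  # mutual TLS
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

# ctx = build_context(policy)   # then wrap sockets toward the remote peer
```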
 
Article
Peer-to-Peer (P2P) technology is a popular tool for sharing files and multimedia services on networks. While the technology has served the good purpose of facilitating the sharing of large volumes of data, it has also become a potential vector through which attackers can launch various malicious attacks on networks. In networks with limited bandwidth resources, uncontrolled P2P activity may also cause congestion. As P2P continues to evolve on the internet in more complex forms, dynamic mechanisms able to learn the evolving P2P behavior will be essential for accurate monitoring and detection of P2P traffic, to minimize its effects on networks. Supervised machine learning classifiers have recently been used as potential tools for monitoring and detecting P2P traffic. However, the capabilities of such classifiers decline over time due to the changing dynamics of P2P features, making it necessary for the classifiers to undergo continuous retraining in order to remain effective at detecting new P2P traffic features in real-time operation. This paper presents a hybrid machine-learning framework that combines a self-organizing map (SOM) model with a multilayer perceptron (MLP) network to achieve real-time detection of P2P traffic. The SOM model generates sets of clustered features contained in the traffic flows and organizes the features into P2P and non-P2P groups, which are used to train the MLP model for subsequent detection and control of P2P traffic. The proposed P2P detection framework was tested using real traffic data from the University of Ghana campus network. The test results revealed an average detection rate of 99.89% over the observed instances of P2P traffic in the experimental data. This high detection rate suggests that the framework can serve as a practical tool for dynamic monitoring, detection, and control of P2P traffic, helping to manage bandwidth resources and isolate undesirable P2P-driven traffic in networks.
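A hedged sketch of the general SOM-to-MLP idea is shown below: cluster flow features with a SOM, label the clusters from a small set of seed labels, and train an MLP on the cluster-derived labels. The synthetic data, seed labels, and all parameters are assumptions; this is not the paper's exact pipeline or dataset.

```python
# Hedged SOM -> MLP pipeline sketch on synthetic per-flow features.
import numpy as np
from minisom import MiniSom                  # pip install minisom
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
flows = rng.random((500, 6))                 # 6 per-flow features (synthetic)
seed_labels = (flows[:, 0] + flows[:, 1] > 1.0).astype(int)  # stand-in labels

som = MiniSom(5, 5, flows.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(flows, 2000)

# Majority-vote a P2P/non-P2P label for each SOM node from the seed labels.
node_votes = {}
for x, y in zip(flows, seed_labels):
    node_votes.setdefault(som.winner(x), []).append(y)
node_label = {n: int(np.mean(v) > 0.5) for n, v in node_votes.items()}

# Train the MLP on labels propagated through the SOM clustering.
targets = np.array([node_label[som.winner(x)] for x in flows])
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
mlp.fit(flows, targets)
print("training accuracy vs seed labels:", mlp.score(flows, seed_labels))
```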
 
Article
Software-defined networking (SDN) provides many benefits, including traffic programmability, agility, and network automation. However, budget constraints combined with technical challenges (e.g., scalability, fault tolerance, and security issues) and, sometimes, business challenges (user acceptance and the confidence of network operators) make providers hesitant about full SDN deployment. Therefore, incremental deployment of SDN functionality, by placing a limited set of SDN devices among traditional devices, represents a rational and efficient approach that can offer customers modern and more data-intensive services. A particular challenge in such environments is the flexible distribution of load across the servers that provide these services. The research in this paper focuses on developing a new load balancing scheme for a hybrid SDN environment built with a minimal set of SDN devices (a controller and one switch). We propose a novel load balancing scheme that continuously monitors server load indicators and applies multi-parameter metrics (CPU load, I/O read, I/O write, link upload, link download) to schedule connections so that the load on the servers is balanced as efficiently as possible. The obtained results show that this mechanism achieves better results than existing load balancing schemes in traditional and SDN networks. Moreover, the proposed load balancing scheme can be used with various services and applied in any client-server environment.
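The following minimal sketch shows multi-parameter server selection of the kind described above: score each server by a weighted combination of its load indicators and send the next connection to the least-loaded one. The weights and the linear scoring rule are illustrative assumptions, not the paper's scheme.

```python
# Hedged sketch: weighted multi-parameter server selection for load balancing.
WEIGHTS = {"cpu": 0.4, "io_read": 0.15, "io_write": 0.15,
           "link_up": 0.15, "link_down": 0.15}

def score(metrics):
    """Lower is better: weighted sum of normalized load indicators (0..1)."""
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

def pick_server(servers):
    return min(servers, key=lambda name: score(servers[name]))

servers = {
    "srv-1": {"cpu": 0.7, "io_read": 0.2, "io_write": 0.3, "link_up": 0.4, "link_down": 0.5},
    "srv-2": {"cpu": 0.3, "io_read": 0.6, "io_write": 0.2, "link_up": 0.3, "link_down": 0.2},
}
print("next connection goes to:", pick_server(servers))
```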
 
Article
The rapid growth of current computer networks and their applications has made network traffic classification more important. The latest approach in this field is the use of deep learning, but deep learning requires a large amount of data for training, and the lack of sufficient data for different types of network traffic has a negative effect on classification accuracy. One appropriate way to address this challenge is to use data fusion methods at the decision level: data fusion techniques make it possible to achieve better results by combining classifiers. In this paper, a network traffic classification approach based on deep learning and data fusion techniques is presented. The proposed method can identify encrypted traffic and distinguish between VPN and non-VPN network traffic. In the proposed approach, the dataset is first preprocessed, then three deep learning networks, namely a Deep Belief Network (DBN), a Convolutional Neural Network (CNN), and a Multi-Layer Perceptron (MLP), are employed to classify the network traffic. Finally, the results of all three classifiers are combined using Bayesian decision fusion. The experimental results on the ISCX VPN-nonVPN dataset show that the proposed method improves classification accuracy and performs well on different network traffic types. The average accuracy of the proposed method is 97%.
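To make decision-level fusion concrete, the sketch below combines per-class posteriors from three classifiers with the product rule, a common independence-based approximation of Bayesian fusion. The probabilities are made-up outputs, not results from the paper's DBN/CNN/MLP models.

```python
# Hedged sketch of decision-level (product-rule) fusion of three classifiers.
import numpy as np

def fuse(posteriors):
    """posteriors: (n_classifiers, n_classes) class-probability rows."""
    combined = np.prod(np.asarray(posteriors), axis=0)
    return combined / combined.sum()

dbn = [0.70, 0.30]       # P(VPN), P(non-VPN) from classifier 1 (made up)
cnn = [0.55, 0.45]
mlp = [0.80, 0.20]
fused = fuse([dbn, cnn, mlp])
print("fused posterior:", fused, "-> decision:", ["VPN", "non-VPN"][fused.argmax()])
```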
 
Article
Today’s Internet is a collection of multi-domain networks, where each domain is usually administered and managed by a single network operator. Unfortunately, network operators share minimal information with each other and do not collaborate much to improve their routing decisions and the overall performance of the resulting large-scale multi-domain network. Motivated by this problem, we propose a novel collaborative multi-domain routing framework that efficiently routes incoming flows through the different domains while ensuring their performance requirements in terms of delay and bandwidth and maximizing the overall network utilization. We formulate an integer linear program to solve this problem and develop a greedy algorithm to cope with large-scale instances. Simulation results show that the proposed collaboration mechanism significantly improves network utilization and maximizes the number of routed flows with guaranteed performance.
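A hedged sketch of a greedy flow-admission heuristic of this flavor is shown below: route each flow on the minimum-delay path among links with enough residual bandwidth, check the delay budget, and reserve capacity. This illustrates the greedy idea only, not the paper's ILP or its exact algorithm; the topology and flows are made up.

```python
# Hedged greedy routing sketch with bandwidth reservation and delay budgets.
import networkx as nx

def greedy_route(G, flows):
    routed = []
    for src, dst, bw, delay_budget in sorted(flows, key=lambda f: -f[2]):
        feasible = nx.Graph()
        feasible.add_nodes_from(G.nodes)
        feasible.add_edges_from((u, v, d) for u, v, d in G.edges(data=True)
                                if d["residual"] >= bw)
        try:
            path = nx.shortest_path(feasible, src, dst, weight="delay")
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            continue                                  # flow rejected
        delay = sum(G[u][v]["delay"] for u, v in zip(path, path[1:]))
        if delay > delay_budget:
            continue
        for u, v in zip(path, path[1:]):              # reserve bandwidth
            G[u][v]["residual"] -= bw
        routed.append((src, dst, path))
    return routed

G = nx.Graph()
for u, v, delay, cap in [("A", "B", 5, 10), ("B", "C", 5, 10), ("A", "C", 20, 10)]:
    G.add_edge(u, v, delay=delay, residual=cap)
flows = [("A", "C", 6, 15), ("A", "C", 6, 25)]        # (src, dst, bandwidth, delay budget)
print(greedy_route(G, flows))
```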
 
Article
Android has become a target of attackers because of its popularity, and detecting Android mobile malware has become increasingly important due to the significant threat it poses. Supervised machine learning, which has been used to detect Android malware, is far from perfect because it requires a significant amount of labeled data. Since labeled data is expensive and difficult to obtain while unlabeled data is abundant and cheap in this context, we resort to a semi-supervised learning technique, namely a pseudo-label stacked auto-encoder (PLSAE), which is trained using a set of labeled and unlabeled instances. We use a hybrid of dynamic analysis and static analysis to craft feature vectors. We evaluate our proposed model on CICMalDroid2020, which includes 17,341 recent samples from five different Android app categories. We then compare the results with state-of-the-art techniques in terms of accuracy and efficiency. Experimental results show that our proposed framework outperforms other semi-supervised approaches and common machine learning algorithms.
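For readers new to pseudo-labeling, here is a hedged sketch of the core idea with a plain scikit-learn classifier: train on the labeled set, adopt high-confidence predictions on unlabeled samples as pseudo-labels, and retrain. It illustrates only the pseudo-label step, not the stacked auto-encoder (PLSAE) of the paper; the synthetic data stands in for the CICMalDroid2020 features.

```python
# Hedged pseudo-labeling sketch on synthetic data (not the paper's PLSAE).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, y_lab = X[:200], y[:200]              # small labeled subset
X_unlab = X[200:]                            # abundant unlabeled samples

clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
proba = clf.predict_proba(X_unlab)
confident = proba.max(axis=1) > 0.95         # confidence threshold (assumed)
pseudo_y = proba.argmax(axis=1)[confident]

X_aug = np.vstack([X_lab, X_unlab[confident]])
y_aug = np.concatenate([y_lab, pseudo_y])
clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)   # retrain
print("pseudo-labeled samples added:", int(confident.sum()))
```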
 
Article
Distributed Denial of Service (DDoS) attacks represent a major concern in modern Software-Defined Networking (SDN), as SDN controllers are sensitive points of failure in the whole SDN architecture. Recently, research on DDoS attack detection in SDN has focused on how to leverage data plane programmability, enabled by the P4 language, to detect attacks directly in network switches, with only marginal involvement of SDN controllers. To effectively address cybersecurity management in SDN architectures, we investigate the potential of Artificial Intelligence and Machine Learning (ML) algorithms to perform automated DDoS Attack Detection (DAD), focusing specifically on Transmission Control Protocol SYN flood attacks. We compare two different DAD architectures, called Standalone and Correlated DAD, where traffic feature collection and attack detection are performed locally at network switches or in a single entity (e.g., the SDN controller), respectively. We combine the capabilities of ML and P4-enabled data planes to implement real-time DAD. Illustrative numerical results show that, for all tested ML algorithms, accuracy, precision, recall, and F1-score are above 98% in most cases, and classification time is on the order of a few hundred microseconds in the worst case. For a real-time DAD implementation, a significant latency reduction is obtained when features are extracted at the data plane using the P4 language.
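The sketch below illustrates only the ML side of such a pipeline: classifying per-window traffic features (which the paper extracts at the data plane with P4) with an off-the-shelf classifier. The synthetic features and the choice of a random forest are assumptions for illustration, not the paper's feature set or algorithms.

```python
# Hedged SYN-flood classification sketch over synthetic per-window features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
# Features per time window (assumed): [SYN packets, SYN/ACK ratio, unique source IPs]
benign = np.column_stack([rng.poisson(50, n), rng.uniform(0.9, 1.1, n),
                          rng.integers(5, 50, n)])
attack = np.column_stack([rng.poisson(500, n), rng.uniform(3.0, 10.0, n),
                          rng.integers(200, 2000, n)])
X = np.vstack([benign, attack])
y = np.array([0] * n + [1] * n)              # 0 = benign, 1 = SYN flood

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
window = [[480, 6.2, 900]]                   # a suspicious observation window
print("attack probability:", clf.predict_proba(window)[0][1])
```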
 
Top-cited authors
Mohammad Masdari
  • Islamic Azad University of Urmia
Marzie Jalali
Moazam Bidaki
  • Islamic Azad University, Neyshabur Branch
Alaa el-din mohamed Riad
  • Mansoura University
Filip De Turck
  • Ghent University