OBS Backbone supported network.

Source publication
Article
Current global data traffic is increasingly dominated by delay- and loss-intolerant IP traffic, which generally displays structural self-similarity. This has necessitated the introduction of optical burst switching (OBS) as a supporting optical backbone network switching technology. Due to the buffer-less nature of optical burst switched (OBS) netw...
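The structural self-similarity noted in the abstract is classically reproduced by superposing many ON/OFF sources with heavy-tailed (Pareto) period lengths. The sketch below illustrates that standard construction only; it is not code from the article, and every parameter is an invented example.

```python
import random

def pareto_on_off_traffic(n_sources=50, n_slots=10_000, alpha=1.4, seed=1):
    """Superpose heavy-tailed ON/OFF sources to mimic self-similar IP traffic.

    Each source alternates between Pareto-distributed ON (transmitting) and
    OFF (silent) periods; alpha < 2 gives infinite period variance, the
    classical recipe for long-range-dependent aggregate traffic.
    """
    rng = random.Random(seed)
    load = [0] * n_slots
    for _ in range(n_sources):
        t, on = 0, rng.random() < 0.5
        while t < n_slots:
            period = int(rng.paretovariate(alpha)) + 1  # heavy-tailed length
            if on:
                for s in range(t, min(t + period, n_slots)):
                    load[s] += 1        # source contributes one unit while ON
            t += period
            on = not on                 # toggle ON/OFF
    return load

traffic = pareto_on_off_traffic()
print("peak-to-mean burstiness:", max(traffic) / (sum(traffic) / len(traffic)))
```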

Contexts in source publication

Context 1
... fundamental architecture of a communications network infrastructure supported by an OBS backbone is shown in Figure 1. It primarily comprises edge and core nodes interconnected via DWDM optical fibre links. ...
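The edge/core split described here can be captured in a few lines of code. The sketch below is purely illustrative: the node names, roles, and wavelength counts are invented, not read off Figure 1.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                                       # "edge" assembles bursts; "core" switches them
    neighbours: dict = field(default_factory=dict)  # neighbour name -> wavelengths on the link

def link(a: Node, b: Node, wavelengths: int) -> None:
    """Connect two nodes with a bidirectional DWDM fibre link."""
    a.neighbours[b.name] = wavelengths
    b.neighbours[a.name] = wavelengths

# Invented four-node example: two edge nodes hanging off a two-node core.
e1, e2 = Node("E1", "edge"), Node("E2", "edge")
c1, c2 = Node("C1", "core"), Node("C2", "core")
link(e1, c1, 40)   # 40 wavelengths per fibre is an arbitrary example figure
link(c1, c2, 80)
link(c2, e2, 40)
```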
Context 2
... However, lowering the frequency at which the segmented bursts are generated reduces the risk of congestion at the BCP processors. Figure 10 plots the segment loss probability for both HP and LP segments. This is compared with conventional OBS approaches in which all traffic types have the same priority. ...
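A rough Monte Carlo sketch of the prioritised dropping behind the Figure 10 comparison follows. The per-slot contention model, the HP share, and all other parameters are assumptions made for illustration; this is not the article's simulator.

```python
import random

def segment_loss(arrival_rate, wavelengths=8, n_slots=50_000, hp_share=0.3, seed=7):
    """Toy per-slot model: bursts contend for a fixed set of wavelengths;
    HP segments are scheduled first, so LP segments absorb most of the loss.
    """
    rng = random.Random(seed)
    hp_lost = lp_lost = hp_tot = lp_tot = 0
    for _ in range(n_slots):
        # Crude binomial approximation of Poisson arrivals in one slot.
        arrivals = sum(rng.random() < arrival_rate / 20 for _ in range(20))
        kinds = sorted((rng.random() < hp_share for _ in range(arrivals)),
                       reverse=True)           # True (HP) sorted ahead of LP
        hp_tot += sum(kinds)
        lp_tot += len(kinds) - sum(kinds)
        for i, is_hp in enumerate(kinds):
            if i >= wavelengths:               # no free wavelength left
                hp_lost += is_hp
                lp_lost += not is_hp
    return hp_lost / max(hp_tot, 1), lp_lost / max(lp_tot, 1)

print(segment_loss(arrival_rate=10))   # HP loss comes out well below LP loss
```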
Context 3
... At higher traffic loads, both HP and LP blocking probabilities tend to increase more rapidly because of increasing contention as well as congestion on deflected links/routes. In Figure 11, the average end-to-end delay is plotted against traffic load. Three cases are compared, namely: (1) conventional OBS in which contention is resolved by using FDLs as well as deflection, (2) conventional OBS with no deflection, and (3) the ECM approach. ...
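A back-of-envelope sketch of the three-way Figure 11 comparison is given below. The per-hop blocking model, the delay constants, and the halved contention rate assumed for the ECM case are all invented for illustration.

```python
def avg_end_to_end_delay(load, hops=5, hop_delay_ms=2.0, fdl_delay_ms=0.5,
                         deflect_extra_hops=2):
    """Compare nominal end-to-end delay for the three contention strategies."""
    p_block = min(load * 0.1, 1.0)   # toy per-hop contention probability
    base = hops * hop_delay_ms       # contention-free propagation/switching delay
    # (1) conventional OBS: FDL buffering plus deflection on contention
    with_deflection = base + hops * p_block * (
        fdl_delay_ms + deflect_extra_hops * hop_delay_ms)
    # (2) conventional OBS, no deflection: contention costs only the FDL delay
    fdl_only = base + hops * p_block * fdl_delay_ms
    # (3) ECM-style control: assume contention (hence extra delay) is halved
    ecm = base + hops * (p_block * 0.5) * fdl_delay_ms
    return with_deflection, fdl_only, ecm

for rho in (0.2, 0.5, 0.8):
    print(rho, [round(d, 2) for d in avg_end_to_end_delay(rho)])
```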

Similar publications

Article
Aiming at the problem of network congestion and unbalanced load caused by the large data volumes carried by elephant flows in data center networks, an SDN-based elephant flow detection method is proposed. This method adopts the autodetect upload (ADU) mechanism. ADU is divided into two parts: ADU-Client and ADU-Cl...
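Because the ADU details are truncated above, the sketch below shows only the generic threshold test that any elephant-flow detector ultimately performs; the window contents and the 10 MiB byte threshold are assumptions, and the ADU-Client/ADU-Server split is not modelled.

```python
from collections import defaultdict

ELEPHANT_BYTES = 10 * 1024 * 1024   # 10 MiB per window: an assumed threshold

def detect_elephants(packets):
    """packets: iterable of (flow_id, size_bytes) observed in one window."""
    volume = defaultdict(int)
    for flow_id, size in packets:
        volume[flow_id] += size     # accumulate per-flow byte counts
    return {f for f, v in volume.items() if v >= ELEPHANT_BYTES}

window = [("10.0.0.1->10.0.0.9", 2_000_000)] * 8 + [("10.0.0.2->10.0.0.9", 1500)]
print(detect_elephants(window))     # only the 16 MB flow is flagged
```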

Citations

... Nowadays, global mobile data traffic is increasingly dominated by delay- and loss-intolerant traffic streams, which means that traffic congestion in the core network is an inevitable occurrence. This can quickly lead to overall network performance degradation at moderate-to-high traffic levels [11]. The consequence is heavy burst losses, as the whole network might degenerate into chaos. ...
Article
One of the major challenges facing the realization of cognitive radios (CRs) in future mobile and wireless communications is the issue of high energy consumption. Since future network infrastructure will host real-time services requiring immediate satisfaction, the issue of high energy consumption will hinder the full realization of CRs. This means that to offer the required quality of service (QoS) in an energy-efficient manner, resource management strategies need to allow for effective trade-offs between QoS provisioning and energy saving. To address this issue, this paper focuses on single base station (BS) management, where resource consumption efficiency is obtained by solving a dynamic resource allocation (RA) problem using bipartite matching. A deep learning (DL) predictive control scheme based on a stacked auto-encoder (SAE) is used to predict the traffic load for better energy saving. Considered here was a BS processor with both processor sharing (PS) and first-come-first-served (FCFS) disciplines under quite general assumptions about the arrival and service processes. The workload arrivals are defined by a Markovian arrival process, while the service is general. Possible customer impatience is taken into account in terms of the required delays. In this way, the BS processor is treated as a hybrid switching system that chooses the better packet scheduling scheme between mean slowdown (MS) FCFS and MS PS. The simulation results presented in this paper indicate that the proposed predictive control scheme achieves better energy saving as the traffic load increases, and that processing the workload using MS PS achieves substantially superior energy saving compared to MS FCFS.
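The MS PS versus MS FCFS choice in this abstract can be illustrated with the standard M/M/1 slowdown results; the formulas below are textbook queueing theory rather than the paper's derivation, and the load values are invented.

```python
import random

def mean_slowdown_ps(rho):
    """M/M/1-PS: E[T(x)] = x / (1 - rho), so slowdown is 1/(1 - rho) for every x."""
    return 1.0 / (1.0 - rho)

def mean_slowdown_fcfs(lam, mu, n=100_000, seed=3):
    """M/M/1-FCFS: E[T(x)] = W + x with W = rho / (mu - lam). The slowdown
    (W + x) / x is averaged over exponential job sizes by Monte Carlo; tiny
    jobs are truncated because E[1/x] diverges for the exponential."""
    rng = random.Random(seed)
    w = (lam / mu) / (mu - lam)
    sizes = [max(rng.expovariate(mu), 1e-3) for _ in range(n)]
    return sum((w + x) / x for x in sizes) / n

lam, mu = 0.7, 1.0   # assumed arrival and service rates (load rho = 0.7)
print("mean slowdown, PS  :", round(mean_slowdown_ps(lam / mu), 2))
print("mean slowdown, FCFS:", round(mean_slowdown_fcfs(lam, mu), 2))
```

Under this toy model the PS slowdown is the same for every job size, while FCFS punishes short jobs stuck behind long ones, which gives one intuition for preferring MS PS as the load grows.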
Article
As a result of the new telecommunication ecosystem landscape, wireless communication has become an interdisciplinary field whose future is shaped by several interacting dimensions. These interacting dimensions, which form the cyber–physical convergence, closely link the technological perspective to its social, economic, and cognitive-science counterparts. Beyond the current operational framework of the Internet of Things (IoT), network devices will be equipped with capabilities for learning, thinking, and understanding so that they can autonomously make decisions and take appropriate actions. Through this autonomous operation, wireless networking will be ushered into a paradigm that is primarily inspired by the efficient and effective use of (i) AI strategies, (ii) big data analytics, and (iii) cognition. This is the Cognitive Internet of People, Processes, Data and Things (CIoPPD&T), which can be defined in terms of the cyber–physical convergence. In this article, through a discussion of how the cyber–physical convergence and the interacting dynamics of the socio-technical ecosystem enable digital twins (DTs), the network DT (NDT) is discussed in the context of 6G networks. The design and realization of edge computing-based NDTs are then discussed, culminating in vehicle-to-edge (V2E) use cases.
Article
Heterogeneous IoT-enabled networks generally accommodate both jitter-tolerant and jitter-intolerant traffic. Optical burst switched (OBS) backbone networks handle the resultant volumes of such traffic by transmitting it in huge chunks called bursts. Because of the lack of, or limited, buffering capabilities within the core network, burst contentions may frequently occur and thus affect the overall supportable quality of service (QoS). Burst contention in the core network is generally characterized by frequent burst losses as well as differential delays, especially when traffic levels surge. Burst contention can be resolved in the core network by way of partial buffering using fiber delay lines (FDLs), wavelength conversion using wavelength converters (WCs), or deflection routing. In this paper, we assume that burst contention is resolved by deflecting contending bursts to other, less congested paths, even though this may lead to differential delays incurred by bursts as they traverse the network. This contributes to undesirable jitter that may ultimately compromise overall QoS. Noting that jitter is mostly caused by deflection routing, which is itself a result of poor wavelength and route assignment, the paper proposes a controlled deflection routing (CDR) and wavelength assignment scheme that allows the deflection of bursts to alternate paths only after preset controller buffer thresholds are surpassed. In this way, bursts (or burst fragments) intended for a common destination are always most likely to be routed on the same or least-cost path end-to-end. We describe the scheme and compare its performance to other existing approaches. Overall, both analytical and simulation results show that the proposed scheme lowers both congestion on deflection routes and jitter, thus improving throughput while avoiding congestion on deflection paths.
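A minimal sketch of the threshold-gated deflection decision described in this abstract is given below; the 0.8 threshold, the path labels, and the exact control law are assumptions, since the paper's actual controller may differ.

```python
def route_burst(primary_busy: bool, buffer_occupancy: float, threshold=0.8):
    """Decide where a burst goes under a CDR-style policy: stay on the
    least-cost path when free, buffer locally under light contention, and
    deflect only once the controller buffer passes a preset threshold."""
    if not primary_busy:
        return "least-cost path"    # no contention: keep the primary route
    if buffer_occupancy < threshold:
        return "local buffer"       # absorb contention before deflecting
    return "deflection path"        # threshold surpassed: deflect the burst

print(route_burst(primary_busy=True, buffer_occupancy=0.50))  # local buffer
print(route_burst(primary_busy=True, buffer_occupancy=0.95))  # deflection path
```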