Conference Paper

HW/SW Development of Cloud-RAN in 3D Networks: Computational and Energy Resources for Splitting Options

... While 3GPP has currently designated Option 2 for NTNs, the push towards a unified 3D network with diverse communication elements necessitates exploring lower-layer split options [19], [20] to reduce complexity by not realizing the full gNB on the non-terrestrial nodes. For NTN, such lower-layer splits lead to reduced complexity of the NTN nodes. ...
Preprint
Full-text available
The rapid growth of non-terrestrial communication necessitates its integration with existing terrestrial networks, as highlighted in 3GPP Releases 16 and 17. This paper analyses the concept of functional splits in 3D networks. To manage this complex structure effectively, the adoption of a Radio Access Network (RAN) architecture with Functional Split (FS) offers advantages in flexibility, scalability, and cost-efficiency. RAN achieves this by disaggregating functionalities into three separate units. Analogous to the terrestrial network approach, 3GPP is extending this concept to non-terrestrial platforms as well. This work presents a general analysis of the required Fronthaul (FH) data rate on the feeder link between a non-terrestrial platform and the ground station. Each split option is a trade-off between FH data rate and the respective complexity. Since flying nodes face tighter limitations on power consumption and on-board complexity than terrestrial ones, we investigate the split options between the lower and higher physical layer.
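To make the trade-off concrete, the sketch below estimates the constant IQ fronthaul rate that a low-PHY split would impose on the feeder link, using the common CPRI-style formula; the carrier bandwidth, sample rate, and antenna count are illustrative assumptions, not values taken from the paper.

    # Hedged sketch: CPRI-style IQ fronthaul rate for a low-PHY split.
    # All numeric parameters below are assumptions for illustration only.
    def iq_fronthaul_rate(sample_rate_hz, n_antennas, bits_per_iq=15,
                          control_overhead=16 / 15, line_coding=10 / 8):
        """Constant IQ stream: 2 (I+Q) x bit width x sample rate x antennas x overheads."""
        return (2 * bits_per_iq * sample_rate_hz * n_antennas
                * control_overhead * line_coding)

    if __name__ == "__main__":
        # Example: 20 MHz LTE-like carrier (30.72 MHz sampling), 2 antenna ports
        rate = iq_fronthaul_rate(30.72e6, n_antennas=2)
        print(f"Low-PHY split fronthaul: {rate / 1e9:.2f} Gbit/s, traffic-independent")

Higher-layer splits such as Option 2 instead scale with the user-plane traffic, which is why they are attractive for power- and complexity-constrained flying nodes.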
Conference Paper
Full-text available
Open radio access network (Open-RAN) is becoming a key component of cellular networks, and therefore optimizing its architecture is vital. The Open-RAN is a distributed architecture that lets the virtualized networking functions be split between Distributed Units (DUs) and Centralized Units (CUs); as a result, there is a wide range of design options. We propose an optimization problem to choose the split points. The objective is to balance the load across CUs as well as midhaul links while considering delay requirements. The resulting formulation is an NP-hard problem that is solved with a novel heuristic algorithm. Performance evaluation shows that the gap between optimal and heuristic solutions does not exceed 2%. An in-depth analysis of different centralization levels shows that using multiple CUs could reduce the total bandwidth usage by up to 20%. Moreover, multipath routing can improve load balancing between midhaul links while increasing bandwidth usage.
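The abstract does not spell out the heuristic, so the following is only a minimal greedy sketch of the general idea of balancing DU load across CUs; the function name, load values, and placement rule are assumptions for illustration.

    # Hypothetical greedy placement: assign each DU to the currently
    # least-loaded CU, processing the heaviest DUs first.
    def assign_dus_to_cus(du_loads, n_cus):
        cu_load = [0.0] * n_cus
        assignment = {}
        for du, load in sorted(du_loads.items(), key=lambda kv: -kv[1]):
            target = min(range(n_cus), key=lambda c: cu_load[c])
            assignment[du] = target
            cu_load[target] += load
        return assignment, cu_load

    # Example usage with made-up loads (arbitrary units)
    assignment, loads = assign_dus_to_cus({"du1": 0.7, "du2": 0.4, "du3": 0.6}, n_cus=2)
    print(assignment, loads)

A real formulation would additionally enforce the delay requirements and midhaul link capacities described in the paper.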
Article
Full-text available
Software-defined networking decouples the control and data planes in softwarized networks. This allows for centralized management of the network, but complete centralization of the controller functions raises potential issues related to failure, latency, and scalability. Distributed controller deployment is adopted to mitigate scalability and latency problems. However, existing controllers are monolithic, resulting in code inefficiency for distributed deployment. Some seminal ongoing efforts have proposed disaggregating the SDN controller architecture into an assembly of various subsystems, each of which can be responsible for a certain controller task. These subsystems are typically implemented as microservices and deployed as virtual network functions, in particular as Docker containers. This enables flexible deployment of controller functions. However, these proposals (e.g., μONOS) are still at an early stage of design and development, so a full decomposition of the SDN controller is not yet available. To fill that gap, this article derives some important design guidelines to decompose an SDN controller into a set of microservices. Next, it also proposes a microservices-based decomposed controller architecture, anticipating communication issues between the controller sub-functions. These design and performance considerations are also validated via the implementation of the proposed architecture as a solution, called Micro-Services based SDN controller (MSN), based on the Ryu SDN controller. Moreover, MSN includes different network communication protocols, such as gRPC, WebSocket, and REST-API. Finally, we show experimental results that highlight the robustness and latency of the system on a networking testbed. The collected results highlight the main pros and cons of each network communication protocol and provide an evaluation of our proposal in terms of system resilience, scalability, and latency.
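As a generic illustration of the decomposition idea (not the MSN code itself), the sketch below exposes a toy "topology" subsystem as a REST microservice with Flask; the endpoint names and payloads are assumptions, and gRPC or WebSocket transports would follow the same pattern.

    # Hypothetical controller subsystem exposed over REST (illustrative only).
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    topology = {"switches": []}  # toy state owned by this subsystem

    @app.route("/switches", methods=["GET"])
    def list_switches():
        return jsonify(topology["switches"])

    @app.route("/switches", methods=["POST"])
    def add_switch():
        topology["switches"].append(request.get_json())
        return jsonify(status="ok"), 201

    if __name__ == "__main__":
        app.run(port=8080)  # other subsystems would call these endpoints over HTTP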
Article
Full-text available
Although many countries have started the initial phase of rolling out 5G, it is still in its infancy with researchers from both academia and industry facing the challenges of developing it to its full potential. With the support of artificial intelligence, development of digital transformation through the notion of a digital twin has been taking off in many industries such as smart manufacturing, oil and gas, construction, bio-engineering, and automotive. However, digital twins remain relatively new for 5G/6G networks, despite the obvious potential in helping develop and deploy the complex 5G environment. This article looks into these topics and discusses how digital twin could be a powerful tool to fulfill the potential of 5G networks and beyond.
Article
Full-text available
Mobile communication standards have developed into a new era of B5G and 6G. In recent years, low earth orbit (LEO) satellites and space Internet have become hot topics. Integrated satellite and terrestrial systems have been widely discussed by industry and academia, and are even expected to be applied in the huge constellations under construction. This paper points out two stages in the trend towards system integration of terrestrial mobile communication and satellite communications: compatibility with 5G, and integration within 6G. Based on an analysis of the challenges of both stages, key technologies are then analyzed in detail, covering both the air interface currently discussed in 3GPP for B5G and the novel network architecture and related transmission technologies towards future 6G.
Article
Full-text available
This article introduces to the readers of the AES Magazine the recently constituted technical panel “Glue Technologies for Space Systems.” A short overview of the technologies considered in the panel is provided, along with the panel vision and perspectives shared by the founding members. Some information about panel meetings and participation rules concludes the article.
Article
Full-text available
Paving the way towards 5G has led researchers and industry in the direction of centralized processing known from Cloud Radio Access Networks (C-RAN). In C-RAN research, a variety of functional splits is presented, under different names and focusing on different directions. The functional split determines how many Base Station (BS) functions to leave locally, close to the user, with the benefit of relaxing fronthaul network bitrate and delay requirements, and how many functions to centralize, with the possibility of achieving greater processing benefits. This work presents for the first time a comprehensive overview systematizing the different work directions for both research and industry, while providing a detailed description of each functional split option and an assessment of its advantages and disadvantages. This work gives an overview of where most effort has been directed in terms of functional splits, and where there is room for further studies. The standardization currently taking place is also considered and mapped onto the research directions. It is investigated how the fronthaul network is affected by the choice of functional split, both in terms of bitrate and latency, and as the different functional splits provide different advantages and disadvantages, the option of flexible functional splits is also examined.
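Latency is the other side of the fronthaul question: the lower the split, the tighter the one-way delay budget, which in turn caps the reach of the fronthaul link. The sketch below turns commonly quoted ballpark budgets into maximum fiber lengths; the specific budget values are assumptions, not figures from this survey.

    # Illustrative reach check: max one-way fiber length for a given latency budget.
    SPEED_IN_FIBER_M_PER_S = 2e8  # roughly 2/3 of the speed of light

    def max_fronthaul_length_km(one_way_budget_s):
        return SPEED_IN_FIBER_M_PER_S * one_way_budget_s / 1e3

    # Ballpark budgets (assumed): ~250 us for a low-PHY split, ~10 ms for a PDCP/RLC split
    for split, budget_us in (("low-PHY split", 250), ("PDCP/RLC split", 10_000)):
        print(f"{split}: up to ~{max_fronthaul_length_km(budget_us * 1e-6):.0f} km of fiber")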
Technical Report
Full-text available
This report contains the results of my numerous benchmarks, running on a Raspberry Pi 3B+, providing comparisons with the older model 3B and of 32-bit versus 64-bit working. One major objective is to identify strengths and weaknesses, rather than an overall rating. Single-core benchmarks generally show 3B+ performance improvements proportional to the faster CPU MHz. Memory benchmarks show similar gains with data from caches but not from RAM. Multithreading programs provide performance gains almost proportional to the number of threads used, up to four, with another nine times slower. In between, degradation is caused where data has frequent updates and random access is used. A number of benchmarks measure floating point MFLOPS speed, where the source code is often suitable for compilation to use advanced SIMD parallelism. The best 4-core scores were 11563 MFLOPS single precision and 4492 MFLOPS double precision, excellent on a cost/performance basis, but nowhere near the efficiency of, say, an Intel Core i7 processor. In order to explain the wide variations in MFLOPS, a section is provided showing assembly code produced by the different compilers. Other benchmarks measure I/O performance of main or USB drives and networks. The enhanced 3B+ LAN and WiFi speeds are demonstrated, along with many variations and running complications. Other benchmarks cover graphics speeds, again with many test functions, using Java drawing and OpenGL GLUT. Multiple copies of single-core floating point and integer stress tests were run, along with a program that measured CPU MHz, temperature and core voltage, some including OpenGL. They identify the new 3B+ thermal characteristic of the clock reducing from 1400 to 1200 MHz, at a lower voltage, at 70°C, with thermal throttling kicking in on reaching 80°C and the clock dropping below 1000 MHz. Other peculiar situations were also identified. The first stress tests were carried out with the Pi 3B+ card in a plastic case and no CPU heatsink. The tests were repeated with the card installed in a FLIRC case, with its built-in efficient heatsink. Here, two 15-minute tests occasionally hit the 70°C barrier, but not for long enough to impact performance by much. For all stress tests, room temperature was near 22°C.
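A hedged sketch of the kind of stress-and-monitor loop the report describes: it keeps the FPU busy in a background thread while sampling temperature and ARM clock through the Raspberry Pi's vcgencmd tool, so throttling (the clock dropping from 1400 MHz as the 70°C and 80°C thresholds are reached) becomes visible; the sampling interval and duration are arbitrary choices for illustration.

    # Illustrative stress-and-monitor loop for a Raspberry Pi (assumed setup).
    import subprocess, threading, time

    def vcgencmd(*args):
        return subprocess.check_output(["vcgencmd", *args], text=True).strip()

    def stress(seconds=60):
        end = time.time() + seconds
        x = 1.0001
        while time.time() < end:
            x = x * 1.0000001 + 1e-9  # keep the floating-point unit busy

    worker = threading.Thread(target=stress, daemon=True)
    worker.start()
    while worker.is_alive():
        temp = vcgencmd("measure_temp")                          # e.g. "temp=70.1'C"
        arm_hz = int(vcgencmd("measure_clock", "arm").split("=")[1])
        print(f"{temp}  arm={arm_hz / 1e6:.0f} MHz")
        time.sleep(5)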
Article
Full-text available
The fifth generation (5G) wireless network technology is to be standardized by 2020, with the main goals of improving capacity, reliability, and energy efficiency, while reducing latency and massively increasing connection density. An integral part of 5G is the capability to transmit touch-perception-type real-time communication, empowered by applicable robotics and haptics equipment at the network edge. In this regard, we need drastic changes in network architecture, including the core and radio access network (RAN), to achieve end-to-end latency on the order of 1 ms. In this paper, we present a detailed survey of the emerging technologies to achieve low-latency communications, considering three different solution domains: RAN, core network, and caching. We also present a general overview of 5G cellular networks composed of software defined networking (SDN), network function virtualization (NFV), caching, and mobile edge computing (MEC), capable of meeting latency and other 5G requirements.
Article
Full-text available
Cloud Radio Access Network (C-RAN) is a novel mobile network architecture which can address a number of challenges operators face while trying to support growing end-user needs. The main idea behind C-RAN is to pool the Baseband Units (BBUs) from multiple base stations into a centralized BBU pool for statistical multiplexing gain, while shifting the burden to the high-speed wireline transmission of In-phase and Quadrature (IQ) data. C-RAN enables energy-efficient network operation and possible cost savings on baseband resources. Furthermore, it improves network capacity by performing load balancing and cooperative processing of signals originating from several base stations. This paper surveys the state-of-the-art literature on C-RAN. It can serve as a starting point for anyone willing to understand C-RAN architecture and advance the research on C-RAN.
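The statistical multiplexing gain from pooling can be illustrated numerically: the peak of the aggregated load is lower than the sum of per-cell peaks, so a shared BBU pool needs fewer peak resources than dedicated BBUs. The traffic traces below are purely synthetic (uniform random), used only as an assumption for the sketch.

    # Toy illustration of BBU pooling gain with synthetic per-cell load traces.
    import random

    cells, samples = 10, 1000
    traces = [[random.random() for _ in range(samples)] for _ in range(cells)]

    sum_of_peaks = sum(max(trace) for trace in traces)                             # dedicated BBUs
    peak_of_sum = max(sum(trace[i] for trace in traces) for i in range(samples))   # pooled BBUs
    print(f"Pooling needs ~{sum_of_peaks / peak_of_sum:.2f}x fewer peak resources")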
Conference Paper
Today, 5G networks are being rolled out worldwide, with significant benefits for our economy and society. However, 5G systems alone are not expected to be sufficient for the challenges that 2030 networks will experience, including always-on networks, 1 Tbps peak data rates, <10 cm positioning, etc. Thus, the evolutions and (r)evolutions of 5G systems are already being addressed by the scientific and industrial communities, targeting 5G-Advanced (5G-A) and 6G. In this framework, Non-Terrestrial Networks (NTN) have successfully been integrated in 3GPP Rel. 17, and it is expected that they will play an even more pivotal role for 5G-A (up to Rel. 20) and 6G systems (beyond Rel. 20). In this paper, we explore the path that will lead to 5G-A and 6G NTN communications, providing a clear perspective in terms of system architecture, services, technologies, and standardisation roadmap.
Article
As space agencies are planning manned missions to reach Mars, researchers need to pave the way for supporting astronauts during their sojourn. This will also be achieved by providing broadband and low-latency connectivity through wireless network infrastructures. In such a framework, we propose a Martian deployment of a 3-Dimensional (3D) network acting as a Cloud Radio Access Network (C-RAN). The scenario consists of unmanned aerial vehicles (UAVs) and small satellite platforms. Thanks to the thin Martian atmosphere, CubeSats can stably orbit at very low altitude. This makes it possible to meet the strict delay requirements for splitting baseband processing functions between drones and CubeSats. The detailed analytical study presented in this paper confirms the viability of the proposed 3D architecture, under some constraints and trade-offs concerning the involved network infrastructures, which are discussed in detail.
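A quick back-of-the-envelope check of the delay argument: the one-way propagation time from a very-low-altitude CubeSat to the surface directly below it. The altitude values are illustrative assumptions, not the paper's design points.

    # One-way nadir propagation delay for a few assumed orbital altitudes.
    SPEED_OF_LIGHT_M_PER_S = 3e8

    for altitude_km in (80, 150, 300):
        delay_ms = altitude_km * 1e3 / SPEED_OF_LIGHT_M_PER_S * 1e3
        print(f"{altitude_km} km altitude: ~{delay_ms:.2f} ms one-way")

Even at a few hundred kilometres the propagation component stays well below a millisecond, which is what makes low-layer split budgets conceivable between drones and CubeSats.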
Article
Space agencies from all over the world are planning human missions to the Red Planet in the next 20–30 years. The landing of astronauts and their sojourn on the Martian surface will impose the presence in situ of some basic supporting infrastructures. Recent research by NASA highlighted the urgent necessity of providing efficient and readily available wireless connectivity on the Martian surface, as well as long-haul connectivity to Earth. Thus, to fulfill the former, the objective of this paper is to study and assess a flexible 6G-based 3D network solution for mobile connectivity on Mars. 6G is going to provide guaranteed Quality-of-Service (QoS) for heterogeneous applications, while optimizing the use of network resources. The Radio Access Network (RAN) of such an infrastructure will be fully softwarized together with the Next Generation Core (NGC). 6G 3D networks will employ drones, small satellites and orbiters, as the reduced distances between them can support low-latency wireless links. Moreover, rovers, landers and humans are regarded as end users of the system. An in-depth analysis of the trade-offs to be accounted for when selecting a specific orbital altitude is detailed. Intensive end-to-end (E2E) emulation trials in the OpenAirInterface (OAI) environment are carried out to test the feasibility of the whole network. The performance is evaluated by means of metrics such as E2E delay, packet loss and throughput. Finally, computational complexity and memory usage are estimated for each node of the proposed architecture.
Article
Research for 6th generation (6G) communication currently focuses on nonterrestrial networks (NTNs) to promote ubiquitous and ultrahigh-capacity global connectivity. Specifically, multilayered hierarchical networks, i.e., the orchestration among different aerial/space platforms, including low-altitude platforms (LAPs), high-altitude platforms (HAPs), and satellites cooperating at different altitudes, currently represent one of the most attractive technological options to solve coverage and latency constraints associated with the NTN paradigm. However, several issues still need to be resolved for proper network design. In this work, we evaluate the performance of different multilayered nonterrestrial configurations. We also provide guidelines on the optimal working point(s) for which it is possible to achieve a good compromise between improved system flexibility and network performance with respect to a baseline stand-alone deployment.
Conference Paper
Software-defined networking is a synonym for the term programmable network and is key for 5G and beyond networking paradigms. A software-defined network (SDN) provides network management and controlling features irrespective of the hardware configuration and network infrastructure. Network slicing is the process of separating multiple virtual networks based on different functions or tasks, such that one application does not interfere with another network. Network slicing allows separating the control plane from the user plane. Various investigators have studied how the slices used for splitting paths can be measured to improve durability. In SDN, the dynamic migration of switches offers a method of offloading load from one controller to another. We reintroduce the concept of network (dataflow) splitting for load balancing in SDN. In this paper, we compare the performance of these techniques and find that the splitting paradigm with dynamic migration offers the most balanced network flow and the least overhead on the SDN controller.
Article
The advent of network softwarization is enabling multiple innovative solutions through software-defined networking (SDN) and network function virtualization (NFV). Specifically, network softwarization paves the way for autonomic and intelligent networking, which has gained popularity in the research community. Along with the arrival of 5G and beyond, which interconnects billions of devices, the complexity of network management is significantly increasing both investment and operational costs. Autonomic networking is the creation of self-organizing, self-managing, and self-protecting networks, to cope with the complexity of managing heterogeneous networks. To achieve full network automation, various aspects of networking need to be addressed. Hence, this article proposes a novel architecture for multi-agent-based network automation of the network management system (MANA-NMS). The architecture relies on network function atomization, which defines atomic decision-making units. Such units could represent virtual network functions. These atomic units are autonomous and adaptive. First, the article presents a theoretical discussion of the challenges arising from automating the decision-making process. Next, the proposed multi-agent system is presented along with its mathematical modeling. Finally, the MANA-NMS architecture is mathematically evaluated from the perspectives of functionality, reliability, latency, and resource consumption.
Article
With the 5th Generation (5G) mobile network being rolled out gradually in 2019, research on the next-generation mobile network has started, targeting 2030. To pave the way for the development of the 6th Generation (6G) mobile network, the vision and requirements should be identified first, to enable the identification of potential key technologies and a comprehensive system design. This article first identifies the vision of societal development towards 2030 and the new application scenarios for mobile communication, and then derives the key performance requirements from the service and application perspective. Taking into account the convergence of information technology, communication technology and big data technology, a logical mobile network architecture is proposed to address the lessons learned from 5G network design. To balance the cost, capability and flexibility of the network, the features of the 6G mobile network are proposed based on the latest progress and applications of the relevant fields, namely on-demand fulfillment, lite network, soft network, native AI and native security. Ultimately, the intent of this article is to serve as a basis for stimulating more promising research on 6G.
Article
The 5th generation (5G) mobile network has been put into service across a number of markets, aiming to provide subscribers with high bit rates, low latency, high capacity, and many new services and vertical applications. Therefore, research and development on 6G have been put on the agenda. Regarding the demands and characteristics of future 6G, artificial intelligence (A), big data (B) and cloud computing (C) will play indispensable roles in achieving the highest efficiency and the largest benefits. Interestingly, the initials of these three aspects remind us of the significance of vitamin ABC to the human body. In this article we specifically expound on the three elements of ABC and the relationships between them. We analyze the basic characteristics of wireless big data (WBD) and the corresponding technical actions in A and C, namely the high-dimensional features and spatial separation, the predictive ability, and the characteristics of knowledge. Based on the abilities of WBD, a new learning approach for wireless AI called the knowledge + data-driven deep learning (KD-DL) method, together with a layered computing architecture for the mobile network integrating cloud/edge/terminal computing, is proposed, and their achievable efficiency is discussed. This progress will be conducive to the development of future 6G.
Article
5G networks are expected to become the main infrastructure for security verticals such as disaster relief, humanitarian aid, and governmental and defense communications. In the case of defense services especially, there are complex areas where coverage and connectivity are not reliable or are completely absent. Thus, the deployment of drones as mobile base stations (BSs) also needs a system designed for reliable backhaul based on satellites. However, a baseband unit (BBU) [devoted to baseband signal processing at the radio access network (RAN)] on drones is not a flexible solution and may require a large power supply and processing capabilities, which a drone can hardly host.
Article
The fifth generation (5G) wireless access technology, known as New Radio (NR), will address a variety of usage scenarios from enhanced mobile broadband to ultra-reliable low-latency communications to massive machine type communications. Key technology features include ultra-lean transmission, support for low latency, advanced antenna technologies, and spectrum flexibility including operation in high frequency bands and inter-working between high and low frequency bands. This article provides an overview of the essentials of the state of the art in 5G wireless technology represented by the 3GPP NR technical specifications, with a focus on the physical layer. We describe the fundamental concepts of 5G NR, explain in detail the design of physical channels and reference signals, and share the various design rationales influencing standardization.
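One of the NR fundamentals this overview covers is the scalable numerology: the subcarrier spacing grows as 15·2^µ kHz while the slot shrinks to 1/2^µ ms (14 OFDM symbols per slot with normal cyclic prefix). The short sketch below simply prints these standard relations; it is a numeric illustration rather than anything specific to this article.

    # NR scalable numerology: SCS, slot duration and approximate symbol duration.
    for mu in range(5):
        scs_khz = 15 * 2 ** mu
        slot_ms = 1.0 / 2 ** mu
        print(f"mu={mu}: SCS={scs_khz} kHz, slot={slot_ms:.4f} ms, "
              f"symbol ~{slot_ms / 14 * 1000:.1f} us")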
Article
This article presents an architecture vision to address the challenges placed on 5G mobile networks. A two-layer architecture is proposed, consisting of a radio network and a network cloud, integrating various enablers such as small cells, massive MIMO, control/user plane split, NFV, and SDN. Three main concepts are integrated: ultra-dense small cell deployments on licensed and unlicensed spectrum, under control/user plane split architecture, to address capacity and data rate challenges; NFV and SDN to provide flexible network deployment and operation; and intelligent use of network data to facilitate optimal use of network resources for QoE provisioning and planning. An initial proof of concept evaluation is presented to demonstrate the potential of the proposal. Finally, other issues that must be addressed to realize a complete 5G architecture vision are discussed.
CubeSat Design Specification (1U - 12U), Rev 14, CP-CDS-R14
  • Cal Poly
Fronthaul size: Calculation of maximum distance between RRH and BBU
  • H J Son
  • S M Shin
NVIDIA Jetson Nano is a $99 Raspberry Pi rival for AI development
  • A Lele
CPU performance evaluation
  • Shaban Muhammad
Raspberry Pi 4 vs Raspberry Pi 3B+
  • L Hattersley
Power consumption benchmarks
  • Raspberry Pi Dramble
Benchmarking the brand new NVIDIA Jetson Nano: 4GB, USB 3, $99!
  • R Graves
Make the most of your Jetson's computing power for machine learning inference
  • K Yurkova
Raspberry Pi 4 in short supply, being scalped at 400% markup (updated)
  • L Pounder