American Journal of Artificial Intelligence
2024, Vol. 8, No. 2, pp. 55-62
https://doi.org/10.11648/j.ajai.20240802.14
*Corresponding author:
Received: 24 October 2024; Accepted: 9 November 2024; Published: 28 November 2024
Copyright: © The Author(s), 2024. Published by Science Publishing Group. This is an Open Access article, distributed
under the terms of the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/), which
permits unrestricted use, distribution and reproduction in any medium, provided the original work is properly cited.
Research Article
AI-Driven 5G Network Optimization: A Comprehensive
Review of Resource Allocation, Traffic Management, and
Dynamic Network Slicing
Dileesh Chandra Bikkasani1, * , Malleswar Reddy Yerabolu2
1Department of Technology Management, University of Bridgeport, Bridgeport, USA
2Independent Researcher, North Carolina, USA
Abstract
The rapid advancement of 5G networks, coupled with the increasing complexity of resource management, traffic handling, and
dynamic service demands, underscores the necessity for more intelligent network optimization techniques. This paper
comprehensively reviews AI-driven methods applied to 5G network optimization, focusing on resource allocation, traffic
management, and network slicing. Traditional models face limitations in adapting to the dynamic nature of modern
telecommunications, while AI techniques—particularly machine learning (ML) and deep reinforcement learning (DRL)—offer
scalable and adaptive solutions. These approaches facilitate real-time optimization by learning from network conditions,
predicting traffic patterns, and managing resources intelligently across virtual network slices. The integration of AI into 5G
networks enhances performance, reduces latency, and ensures efficient bandwidth utilization, which is essential for supporting
emerging applications such as the Internet of Things (IoT), autonomous systems, and augmented reality. Furthermore, this paper
highlights key AI techniques and their applications to 5G challenges, illustrating their potential to drive future innovations in
network management. By laying the groundwork for autonomous network operations in 6G and beyond, this research
emphasizes the transformative impact of AI on telecommunications infrastructure and its role in shaping the future of
connectivity.
Keywords
5G, Telecommunication, Wireless Communication, Artificial Intelligence, Network Performance
1. Introduction
With the evolution of wireless communication came sig-
nificant advancements in the telecommunications space, with
data demand increasing 1000-fold from 4G to 5G [1]. Each
new generation has addressed the shortcomings of its prede-
cessors, and the advent of the 5th generation of wireless
network (5G) technology, in particular, promises unprece-
dented data speeds, ultra-low latency, and multi-device con-
nectivity. The new 5G-NR (New Radio) standard is catego-
rized into three distinct service classes: Ultra-Reliable
Low-Latency Communications (URLLC), massive Ma-
chine-Type Communications (mMTC), and enhanced Mobile
Broadband (eMBB). URLLC aims to provide highly reliable
and low-latency connectivity; eMBB focuses on increasing
bandwidth for high-speed internet access; and mMTC sup-
ports many connected devices, enabling IoT on a massive
scale [2]. Optimizing 5G performance is crucial for emerging
applications such as autonomous vehicles, multimedia, aug-
mented and virtual realities (AR/VR), IoT, Ma-
chine-to-Machine (M2M) communication, and smart cities.
Built on technologies like millimeter-wave (mmWave) spec-
trum, massive multiple-input multiple-output (MIMO) sys-
tems, and network function virtualization (NFV) [3], 5G
promises to revolutionize many industries.
Figure 1. Components of NFV.
Figure 1 illustrates the components of Network Function
Virtualization (NFV), a key enabler for 5G. NFV decouples
network functions from proprietary hardware, allowing these
functions to run as software on standardized hardware. By
virtualizing network functions, such as firewalls, load balancers, and gateways, NFV supports dynamic and scalable
network management, making it easier to allocate resources
flexibly across different network slices and use cases. This
flexibility is critical in managing the growing demands of 5G
applications, where real-time adaptability and resource opti-
mization are paramount. The significance of NFV lies in its
ability to decouple hardware from software, allowing network
operators to deploy new services and scale existing ones more
efficiently. For example, in a 5G network, operators can al-
locate resources dynamically to different virtual network
functions (VNFs), optimizing for the specific needs of ap-
plications such as autonomous vehicles or telemedicine,
which demand high reliability and low latency. Figure 1
showcases the architectural elements of NFV, including the
virtualization layer, hardware resources, and the management
and orchestration functions that control resource allocation
and scaling. NFV plays a pivotal role in enabling network
slicing, a critical feature of 5G, which allows operators to
create virtual networks tailored to specific application re-
quirements.
However, the complexity and heterogeneity of 5G net-
works present several challenges, including quality of service
(QoS) provisioning, resource management, and network op-
timization. As 5G networks scale, traditional rule-based ap-
proaches to network management become inadequate. Effi-
cient resource allocation, traffic management, and dynamic
network slicing [4] are necessary to handle demanding use
cases without compromising speed or reliability. Additionally,
with the increase in mobile traffic flow, meeting customer
demands on time requires addressing the allocation of band-
width for heterogeneous cloud services [5].
Network resource allocation (RA) in 5G networks plays a
critical role in optimizing the efficient utilization of spectrum,
computing power, and energy to meet the demands of modern
wireless communication. Resource allocation is pivotal for
data-intensive applications, IoT devices, and emerging tech-
nologies like autonomous vehicles (AV) and augmented reality (AR). It ensures these technologies re-
ceive adequate network resources, enhancing overall per-
formance and QoS in dynamic and heterogeneous environ-
ments. Traditional resource allocation relies on channel state
information (CSI), which necessitates significant overhead
costs, thereby increasing the overall expense of the process [6].
Section 2 focuses on various AI techniques applied to re-
source management, highlighting their impact on network
slicing, energy efficiency, and overall quality of service (QoS).
By leveraging reinforcement learning (RL), optimization
methods, and machine learning (ML) models, these advanced
strategies address the dynamic and complex requirements of
5G networks, providing adaptive and intelligent solutions for
enhanced connectivity and sustainability.
Network slicing allows the creation of multiple virtual networks
on top of a shared physical infrastructure, each optimized for
specific service requirements. Network slices can be inde-
pendently configured to support diverse applications with
varying performance needs, such as low-latency communica-
tion, massive device connectivity, or high-throughput data
services. Using network slicing, 5G can provide tailored ex-
periences for different user types while maximizing the use of
network resources. This capability is crucial for emerging use
cases like smart factories, telemedicine, and autonomous
systems, where performance requirements can differ signifi-
cantly across applications. The remainder of the paper is structured as follows: Section 2 reviews data science and AI techniques for resource allocation; Section 3 covers AI-driven traffic management; Section 4 examines AI approaches to network slicing; Section 5 discusses challenges and future directions; and Section 6 concludes the paper.
2. Data Science and AI Techniques for
Resource Allocation
AI has shown promising results in resource allocation
through continuous learning and adaptation to network
changes. Unlike traditional mathematical model-based para-
digms, Reinforcement Learning (RL) employs a data-driven
model to efficiently allocate resources by learning from the
network environment, thereby improving throughput and
latency [7]. However, achieving fully distributed resource
allocation is not feasible due to virtualization isolation con-
straints in each network slice [8].
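To make the data-driven paradigm concrete, the sketch below shows a minimal tabular Q-learning loop that splits a fixed pool of resource blocks across three slices according to their queue backlogs. The toy environment, state bucketing, and reward (served traffic minus a backlog penalty) are illustrative assumptions for this sketch, not the formulations used in the cited works.

```python
import numpy as np

# Minimal tabular Q-learning sketch for slice-level resource allocation.
# Assumptions (illustrative only): 3 slices, 10 resource blocks, the state is
# a coarse bucket of each slice's queue backlog, and the reward is served
# traffic minus a backlog penalty. Real 5G schedulers use far richer state.
rng = np.random.default_rng(0)
N_SLICES, N_RB, N_BUCKETS = 3, 10, 4

# Actions: how many resource blocks to give slice 0 and slice 1
# (slice 2 gets the remainder), encoded as a single index.
actions = [(a, b) for a in range(N_RB + 1) for b in range(N_RB + 1 - a)]
Q = np.zeros((N_BUCKETS ** N_SLICES, len(actions)))

def encode(queues):
    """Map per-slice queue backlogs to a single discrete state index."""
    buckets = np.minimum(queues // 5, N_BUCKETS - 1)
    return int(buckets[0] * N_BUCKETS**2 + buckets[1] * N_BUCKETS + buckets[2])

def step(queues, rb_split):
    """Toy environment: serve traffic proportional to allocated blocks."""
    arrivals = rng.poisson(lam=[4, 2, 1])                 # new demand per slice
    served = np.minimum(queues, np.array(rb_split) * 2)   # 2 traffic units per block
    next_q = queues - served + arrivals
    reward = served.sum() - 0.5 * next_q.sum()            # throughput vs. backlog
    return next_q, reward

alpha, gamma, eps = 0.1, 0.9, 0.1
queues = np.array([5, 5, 5])
for t in range(20000):
    s = encode(queues)
    a = int(rng.integers(len(actions))) if rng.random() < eps else int(Q[s].argmax())
    rb0, rb1 = actions[a]
    queues, r = step(queues, (rb0, rb1, N_RB - rb0 - rb1))
    s_next = encode(queues)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

print("learned allocation for a balanced backlog:",
      actions[int(Q[encode(np.array([5, 5, 5]))].argmax())])
```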
In 4G LTE and LTE Advanced (LTE/A) networks,
IP-based packet switching primarily focuses on managing
varying user numbers in a given area. By employing
ML-based MIMO for channel prediction and resource allo-
cation, these technologies enhance CSI accuracy through data
compression and reduced processing delays while adapting to
channel rates despite imperfect CSI [9]. However, this tech-
nique proves inefficient due to the complexity and traffic load
of 5G networks. Traditional resource allocation using CSI
struggles with system overhead, which can reach up to 25% of
total system capacity, making it suboptimal for 5G Cloud
Radio Access Network (CRAN) applications. The conven-
tional method also falters with an increasing number of users
[10].
Traditional optimization techniques include using an ap-
proximation algorithm to connect end-users with Remote
Radio Heads (RRH). This algorithm estimates the number of
end-users linked to each RRH and establishes connections
between end-users, RRHs, and Baseband Units (BBU) [11].
Challenges such as millimeter-wave (mmWave) beamform-
ing and optimized time-delay pools using hybrid beamform-
ing for single and multiple-user scenarios in 5G CRAN net-
works have also been explored [12]. In another instance, a
random forest algorithm was proposed for resource allocation,
where a system scheduler validates outputs from a binary
classifier; although robust, further research and development
are necessary [13].
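The following sketch illustrates the classifier-assisted idea in a hedged form: a random forest is trained on synthetic link features (distance, interference, cell load) to predict whether a candidate user-to-RRH assignment will meet its rate target, so a scheduler can filter out infeasible candidates before allocating resources. The features and the labeling rule are assumptions for demonstration, not the setup of the cited study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative sketch: a random-forest binary classifier predicts whether a
# candidate user-to-RRH assignment will meet a rate target, so a scheduler
# can discard infeasible assignments. The synthetic features and the toy
# labeling rule below are assumptions for demonstration only.
rng = np.random.default_rng(1)
n = 5000
distance = rng.uniform(10, 500, n)        # metres to the serving RRH
interference = rng.uniform(0, 1, n)       # normalized interference level
cell_load = rng.uniform(0, 1, n)          # fraction of RBs already in use

# Toy ground truth: close, lightly loaded, low-interference links succeed.
snr_proxy = 200.0 / distance - interference - cell_load
label = (snr_proxy + rng.normal(0, 0.1, n) > 1.0).astype(int)

X = np.column_stack([distance, interference, cell_load])
X_tr, X_te, y_tr, y_te = train_test_split(X, label, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

# A scheduler would only admit assignments the classifier predicts as feasible.
candidate = np.array([[120.0, 0.2, 0.4]])
print("admit candidate assignment:", bool(clf.predict(candidate)[0]))
```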
Many Deep Reinforcement Learning (DRL) techniques
have been applied for network slicing in 5G networks, al-
lowing dynamic resource allocation that enhances throughput
and latency by learning from the network environment.
DRL-based approaches can handle the complexity and over-
head issues of traditional centralized resource allocation
methods [14]. Balevi and Gitlin (2018) proposed a clustering
algorithm that maximizes throughput in 5G heterogeneous
networks by utilizing machine learning techniques to improve
network efficiency and adjust resource allocation based on
real-time network conditions [15]. A Graph Convolution
Neural Network (GCN) resource allocation scheme was ex-
plored, addressing the problem of managing resource alloca-
tions effectively. Optimization techniques such as heuristic
methods and Genetic Algorithms (GAs) are being ex-
plored to solve resource allocation problems efficiently by
minimizing interference and maximizing spectral efficiency.
Genetic algorithms, for instance, utilize evolutionary princi-
ples like selection, crossover, and mutation to evolve solu-
tions toward optimal resource allocation configurations.
Heuristic methods like simulated annealing and particle
swarm optimization (PSO) are employed to further enhance
resource management. The integration of AI-driven algo-
rithms, such as RL and DRL, into 5G networks enables re-
al-time, adaptive resource allocation based on changing net-
work conditions and user demands, significantly improving
network performance and efficiency [16]. By utilizing
AI-driven optimization, 5G networks can achieve higher
efficiency and better manage the interplay between different
network elements, ensuring seamless connectivity and high
performance.
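As a minimal sketch of the evolutionary approach, the code below runs a small genetic algorithm that assigns channels to cells so that as few neighbouring cells as possible share a channel, a simple proxy for co-channel interference. The cell graph, channel count, and GA hyper-parameters are illustrative assumptions chosen for brevity.

```python
import random

# Illustrative genetic-algorithm sketch for channel assignment: maximize the
# number of neighbouring cell pairs that use *different* channels (a proxy
# for low co-channel interference). Topology and hyper-parameters are
# assumptions for this sketch, not values from the cited literature.
random.seed(0)
N_CELLS, N_CHANNELS = 12, 4
# Simple ring-plus-chords neighbourhood as a stand-in for a real cell graph.
EDGES = ([(i, (i + 1) % N_CELLS) for i in range(N_CELLS)]
         + [(i, (i + 3) % N_CELLS) for i in range(N_CELLS)])

def fitness(assignment):
    """Higher is better: neighbour pairs assigned different channels."""
    return sum(assignment[a] != assignment[b] for a, b in EDGES)

def crossover(p1, p2):
    cut = random.randrange(1, N_CELLS)
    return p1[:cut] + p2[cut:]

def mutate(ind, rate=0.1):
    return [random.randrange(N_CHANNELS) if random.random() < rate else g for g in ind]

population = [[random.randrange(N_CHANNELS) for _ in range(N_CELLS)] for _ in range(40)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                    # selection: keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(30)]
    population = parents + children              # elitism plus offspring

best = max(population, key=fitness)
print("best assignment:", best)
print("conflict-free neighbour pairs:", fitness(best), "of", len(EDGES))
```

Selection, crossover, and mutation appear explicitly in the loop above, mirroring the evolutionary principles described in the text.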
AI and machine learning techniques are revolutionizing
resource allocation in 5G networks. By shifting from tradi-
tional models to adaptive, data-driven approaches like RL and
DRL, these technologies can significantly enhance network
throughput, reduce latency, and efficiently manage system
overhead. As traditional methods struggle with rising com-
plexity and user demand, AI-driven optimization provides
dynamic solutions that adapt to real-time network conditions,
enabling more efficient and effective resource management in
the 5G era and beyond.
3. Data Science and AI in Traffic
Management
Integrating AI technologies in 5G network traffic man-
agement aims to achieve traffic volume prediction, enhance
real-time computational efficiency, and ensure network ro-
bustness and adaptability to fluctuating traffic patterns. Mo-
bile data traffic is anticipated to grow significantly, with 5G's
share increasing from 25 percent in 2023 to around 75 percent
by 2029 [17]. This growth and the complexity of 5G networks
necessitate advanced techniques for efficient traffic prediction,
crucial for optimizing resource allocation and ensuring net-
work reliability. Machine learning (ML) offers diverse ap-
plications in network traffic management, from predicting
traffic volumes and identifying security threats to optimizing
traffic engineering (TE). These capabilities enable proactive
network monitoring, enhanced security measures, and im-
proved traffic flow management, leading to efficient and
resilient network operations [18]. By combining time, loca-
tion, and frequency information, researchers have identified
five basic time-domain patterns in mobile traffic, corre-
sponding to different types of urban areas such as residential,
business, and transportation hubs [19].
Traditional models like ARIMA have been widely used for traffic forecasting because they can model temporal dependencies in
time series data. The ARIMA model combines autoregression (AR), integration (I), and moving average (MA) components
to predict future values based on past observations [20]. The seasonal variant, SARIMA, extends this by fitting a seasonal
component to traffic data: seasonality periods are determined through spectrum analysis, differencing parameters are
estimated, model orders are identified using information criteria, and the remaining parameters are estimated by
maximum likelihood [21]. Despite their effective-
ness, ARIMA and SARIMA often struggle with the
non-linear and complex traffic patterns characteristic of 5G
networks. Machine learning models, including Support Vec-
tor Machines (SVM) and Random Forests, have been im-
plemented to overcome these limitations. SVMs capture
non-linear relationships, particularly in their regression form
(SVR) [22]. The Random Forest algorithm trains each of its decision trees on randomly selected data
points and features and merges their outputs, improving prediction accuracy and robustness and
effectively handling the heterogeneity of 5G network traffic [23]. Support Vector Machines focus on maximizing
the distance from the separating plane to the nearest data
points, known as support vectors, using dot products and
kernel functions. This approach enables faster training than
methods like Bagging and Random Forest, which require
using the entire dataset.
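A minimal SARIMA sketch is shown below using statsmodels, assuming a synthetic hourly series with a 24-hour cycle; the (p, d, q)(P, D, Q, s) orders are placeholders, whereas in practice they would be chosen via spectrum analysis and information criteria as described above.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Minimal SARIMA sketch for cell-traffic forecasting. The synthetic series
# (a daily cycle plus noise) and the chosen orders are illustrative
# assumptions, not values from the cited studies.
rng = np.random.default_rng(0)
t = np.arange(14 * 24)                                    # two weeks of hourly samples
traffic = 50 + 30 * np.sin(2 * np.pi * (t % 24) / 24) + rng.normal(0, 5, t.size)

model = SARIMAX(traffic, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
fitted = model.fit(disp=False)                # maximum-likelihood estimation
forecast = fitted.forecast(steps=24)          # next-day hourly prediction
print(np.round(forecast, 1))
```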
Deep learning (DL) has revolutionized many facets of
network traffic management, including traffic prediction,
estimation, and smart traffic routing. Models like Long
Short-Term Memory (LSTM) networks are particularly ef-
fective because they can capture and learn long-term de-
pendencies in sequential data, making them highly suitable
for predicting network traffic. DL also presents promising
alternatives for interference management, spectrum man-
agement, multi-path usage, link adaptation, multi-channel
access, and traffic congestion [24]. For instance, an AI
scheduler using a neural network with two fully connected
hidden layers can reduce collisions by 50% in a wireless
sensor network of five nodes [25]. Advanced techniques such
as transformer-based architectures leverage self-attention
mechanisms to efficiently process vast amounts of data [26].
DRL techniques have been proposed to schedule high-volume
flexible traffic (HVFT) in mobile networks. This model uses
deep deterministic policy gradient (DDPG) reinforcement
learning to learn a control policy for scheduling IoT and other
delay-tolerant traffic, aiming to maximize the amount of
HVFT traffic served while minimizing degradation to con-
ventional delay-sensitive traffic [27].
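The sketch below shows a minimal LSTM traffic predictor in PyTorch, trained to map a 24-step window of a synthetic daily-cycle series to the next value; the data, window length, and layer sizes are illustrative assumptions rather than a configuration from the cited studies.

```python
import torch
import torch.nn as nn

# Minimal LSTM traffic-prediction sketch: learn to map a 24-step window of
# past (synthetic) traffic to the next value. Data and model sizes are
# illustrative assumptions only.
torch.manual_seed(0)
t = torch.arange(0, 1000, dtype=torch.float32)
series = 50 + 30 * torch.sin(2 * torch.pi * t / 24) + 5 * torch.randn_like(t)
series = (series - series.mean()) / series.std()         # normalize

WINDOW = 24
X = torch.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y = series[WINDOW:]
X = X.unsqueeze(-1)                                       # (samples, time, features)

class TrafficLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)               # out: (batch, time, hidden)
        return self.head(out[:, -1, :])     # predict from the last time step

model = TrafficLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    pred = model(X).squeeze(-1)
    loss = loss_fn(pred, y)
    loss.backward()
    opt.step()

print(f"final training MSE (normalized units): {loss.item():.4f}")
```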
Several studies highlight innovative applications of DRL
and AI frameworks in this context. One study introduced a
DRL approach for decentralized cooperative localization
scheduling in vehicular networks [28]. An AI framework
using CNN and RNN enhanced throughput by approximately
36%, though it incurred high training time and memory usage
costs [29]. Another DRL model based on LSTM enabled
small base stations to dynamically access unlicensed spectrum
and optimize wireless channel selection [30]. A DRL ap-
proach for SDN routing optimization achieved configurations
comparable to traditional methods with minimal delays [31].
Work on routing and interference management, often reliant
on costly algorithms like WMMSE, has advanced by ap-
proximating these algorithms with finite-size neural networks,
demonstrating significant potential for improving Massive
MIMO systems.
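A hedged sketch of this "learning to optimize" idea follows: a small multilayer perceptron maps the channel-gain matrix of a few interfering links directly to transmit powers and is trained to maximize the resulting sum rate. For brevity it optimizes the rate objective directly instead of imitating WMMSE outputs; the channel model and network sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch of learning-based power control: an MLP maps the channel-gain matrix
# of K interfering links to transmit powers and is trained to maximize sum
# rate. Channel model, K, and network sizes are illustrative assumptions.
torch.manual_seed(0)
K, P_MAX, NOISE = 4, 1.0, 0.1

def sum_rate(H, p):
    """H: (batch, K, K) channel gains, H[b, i, j] = gain from Tx j to Rx i."""
    signal = torch.diagonal(H, dim1=1, dim2=2) * p                # desired links
    interference = (H * p.unsqueeze(1)).sum(dim=2) - signal       # cross links
    sinr = signal / (interference + NOISE)
    return torch.log2(1 + sinr).sum(dim=1).mean()

mlp = nn.Sequential(nn.Linear(K * K, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, K), nn.Sigmoid())               # powers in (0, P_MAX)
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

for step in range(2000):
    H = torch.rand(256, K, K) ** 2                                # toy channel draws
    p = P_MAX * mlp(H.flatten(start_dim=1))
    loss = -sum_rate(H, p)                                        # maximize rate
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"average sum rate of learned policy: {-loss.item():.2f} bit/s/Hz")
```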
In summary, integrating AI technologies into 5G network
traffic management offers significant advancements in mul-
tiple facets, such as traffic prediction, resource allocation, and
network management. Techniques such as ML and DL using
models like LSTM and advanced frameworks utilizing CNN,
RNN, and DRL address the complex and dynamic nature of
5G networks. AI-driven solutions improve network efficiency
and reliability by enhancing interference management, spec-
trum access, and routing capabilities and adapting to varying
traffic patterns and demands. These innovations highlight the
transformative potential of AI in achieving robust, adaptive,
and efficient 5G network operations, paving the way for fu-
ture research and development in this critical field.
4. Network Slicing in 5G: Data Science
and AI Approaches
Network slicing is one of 5G's most transformative features.
It enables the partitioning of a single physical network into
multiple virtual networks, each tailored to meet
specific service requirements. These network slices can be
dynamically created, modified, and terminated to optimize
resources for various applications, ranging from massive IoT
deployments to ultra-reliable low-latency communications
(URLLC). The challenge lies in managing the complexity of
creating and maintaining these slices in real time, a task where
Artificial Intelligence (AI) plays a crucial role.
AI technologies are increasingly being adopted in 5G to
automate dynamic network slicing. The traditional manual
approach to network management is insufficient for handling
the large-scale, highly heterogeneous environments enabled
by 5G. AI, particularly machine learning (ML), offers ad-
vanced capabilities in real-time decision-making, predictive
analytics, and adaptive control, which are critical for the ef-
ficient deployment and management of network slices.
AI models predict traffic patterns, analyze network condi-
tions, and dynamically adjust resource allocation to meet the
specific needs of each slice. This ensures that slices maintain
optimal performance, even under fluctuating traffic and var-
ying service demands. Reinforcement learning (RL) and deep
learning (DL) algorithms are frequently used to handle the
complex decision-making processes required for slice or-
chestration. These algorithms can autonomously learn from
network data, optimize resources, and balance loads between
slices without human intervention.
AI-driven resource allocation plays a critical role in the
success of network slicing. Each network slice may have
distinct bandwidth, latency, and reliability requirements,
making it necessary to allocate resources dynamically. AI can
help predict and pre-allocate resources based on historical
data and real-time network traffic patterns. For instance, ML
algorithms like neural networks can predict peak traffic times
for specific services, enabling proactive resource allocation to
avoid congestion.
Reinforcement learning, particularly in a multi-agent en-
vironment, is also becoming popular for resource allocation in
network slicing. Multi-agent reinforcement learning (MARL)
allows different network entities, such as base stations and
user equipment, to collaborate as independent agents to
maximize overall network performance. The result is more
efficient resource utilization, minimizing waste and ensuring
that each slice receives the appropriate resources to maintain
its service-level agreements (SLAs).
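The toy sketch below illustrates the multi-agent idea under strong simplifications: two independent Q-learning agents, one per slice, repeatedly request bandwidth shares and learn from rewards that penalize unmet demand and over-claiming. The proportional-sharing rule, demand levels, and reward weights are illustrative assumptions, not a MARL formulation from the cited literature.

```python
import numpy as np

# Toy multi-agent sketch: two independent Q-learning agents, one per slice,
# each request a bandwidth share; the shared channel is split in proportion
# to the requests. Rewards penalize SLA violations (unmet demand) and
# over-claiming. All numbers are illustrative assumptions.
rng = np.random.default_rng(0)
TOTAL_BW = 10.0
DEMAND = [6.0, 3.0]                      # average demand of slice 0 (eMBB) and slice 1 (IoT)
ACTIONS = [2.0, 4.0, 6.0, 8.0]           # bandwidth each agent may request

# One single-state Q-table per agent (a stateless, bandit-style simplification).
Q = [np.zeros(len(ACTIONS)) for _ in DEMAND]
alpha, eps = 0.05, 0.1

for t in range(30000):
    choices = [int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(q.argmax())
               for q in Q]
    requests = np.array([ACTIONS[a] for a in choices])
    shares = TOTAL_BW * requests / requests.sum()             # proportional split
    demand = np.maximum(rng.normal(DEMAND, 0.5), 0.0)         # fluctuating demand
    served = np.minimum(shares, demand)
    # Reward: traffic served, minus a penalty for unmet demand (SLA violation)
    # and a small cost for claiming bandwidth that goes unused.
    rewards = served - 2.0 * (demand - served) - 0.2 * (shares - served)
    for i, a in enumerate(choices):
        Q[i][a] += alpha * (rewards[i] - Q[i][a])

print("learned bandwidth requests per slice:", [ACTIONS[int(q.argmax())] for q in Q])
```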
Traffic management in network slicing is another area
where AI excels. The diversity of services in a 5G network,
such as enhanced mobile broadband (eMBB), URLLC, and
massive IoT, demands intelligent traffic prioritization. AI
algorithms analyze traffic patterns in real time, enabling the
system to prioritize slices that require lower latency or higher
reliability automatically. This dynamic traffic management
helps ensure that critical services, like autonomous vehicles or
remote surgeries, get priority over less critical applications
like video streaming (see Figure 2).
Figure 2. Network Slicing in 5G.
AI-powered traffic management can also mitigate conges-
tion and improve the overall quality of service (QoS) by re-
routing traffic through less congested paths or adjusting
bandwidth allocations. Predictive models, trained on histori-
cal traffic data, can forecast potential bottlenecks and allow
the network to take preemptive measures, ensuring smooth
operations even during peak usage periods.
One key advantage of integrating AI in network slicing is
its self-optimization capability. AI can continuously monitor
network performance metrics such as latency, throughput, and
error rates across different slices. When deviations from ex-
pected performance are detected, AI systems can autono-
mously adjust configurations, redistribute resources, or even
alter the slice architecture to restore optimal performance.
For instance, in cases where a slice serving IoT applications
experiences a sudden increase in device connections, AI can
scale the slice’s capacity by reallocating resources from less
critical slices. Similarly, slices that require ultra-low latency
can be dynamically reconfigured to prioritize routing through
lower-latency paths.
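A minimal sketch of such a self-optimization loop is given below, assuming a toy latency model and a simple "shift capacity from the slice with the most SLA headroom" policy; the thresholds, capacities, and latency model are illustrative assumptions rather than an operator policy.

```python
# Illustrative self-optimization loop: monitor per-slice latency against its
# SLA target and shift capacity from the slice with the most headroom to any
# slice in violation. The latency model, capacities, and thresholds are
# illustrative assumptions, not an operator policy.
SLA_MS = {"urllc": 5.0, "embb": 50.0, "iot": 100.0}      # latency targets
BASE_MS = {"urllc": 1.0, "embb": 5.0, "iot": 10.0}       # lightly loaded latency
capacity = {"urllc": 20.0, "embb": 60.0, "iot": 20.0}    # capacity units per slice

def latency(slice_name, load):
    """Toy queueing-style model: latency blows up as utilization nears 1."""
    utilization = min(load / capacity[slice_name], 0.99)
    return BASE_MS[slice_name] / (1.0 - utilization)

def rebalance(loads, step=10.0):
    """One control iteration: move capacity toward slices violating their SLA."""
    measured = {s: latency(s, loads[s]) for s in capacity}
    violators = [s for s in capacity if measured[s] > SLA_MS[s]]
    for s in violators:
        # Donor = slice with the most SLA headroom that can still spare capacity.
        donor = min(capacity, key=lambda d: measured[d] / SLA_MS[d])
        if donor != s and capacity[donor] - step > loads[donor]:
            capacity[donor] -= step
            capacity[s] += step
    return violators

# Example: a surge of IoT connections pushes the IoT slice past its SLA.
loads = {"urllc": 5.0, "embb": 20.0, "iot": 19.0}
while rebalance(loads):
    pass
print("capacity after self-optimization:", capacity)
```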
AI-driven approaches are fundamental in overcoming the
complexity of network slicing in 5G networks. By leveraging
AI technologies like reinforcement learning, neural networks,
and multi-agent systems, 5G networks can achieve greater
efficiency, adaptability, and scalability. AI ensures that net-
work slices are dynamically created, maintained, and opti-
mized, providing tailored services to meet the varying de-
mands of modern digital ecosystems.
5. Challenges and Future Directions
Integrating AI in 5G networks for resource allocation,
traffic management, and network slicing presents significant
potential and numerous challenges. One major challenge is
the complexity of managing increasingly dense and hetero-
geneous networks. As 5G supports various applications with
differing requirements, like eMBB, URLLC, and massive IoT,
the need for real-time optimization of resources becomes
critical. Traditional rule-based systems fail to manage dy-
namic traffic and user demands efficiently, necessitating
AI-driven adaptive solutions. However, deploying AI models
for real-time decision-making at scale requires significant
computational power and efficient learning algorithms to
avoid system delays and bottlenecks. A key issue is the
overhead and latency of AI-based resource allocation, particularly
when using deep reinforcement learning (DRL) models. DRL
systems effectively learn from the network environment and
make dynamic resource adjustments but often suffer from
high training costs and memory consumption. This can lead to
inefficiencies in real-time operations, especially when net-
works are large and involve many interconnected devices,
such as in smart cities or autonomous vehicle networks.
Moreover, multi-agent reinforcement learning (MARL)
methods used in network slicing require extensive coordina-
tion between network entities, which can result in system
overhead and resource wastage if not correctly managed.
Another challenge is the reliance on accurate channel state
information (CSI) for resource allocation. This practice incurs
considerable system overhead and is particularly inefficient in
CRAN and mmWave-based 5G applications. Existing solu-
tions like heuristic algorithms, genetic algorithms, or clus-
tering techniques provide partial improvements but often fail
to scale effectively as user demand increases. Future direc-
tions involve improving the efficiency and scalability of
AI-based solutions in 5G. Research is needed to optimize
learning algorithms to reduce training costs and memory
usage, potentially through federated learning or edge compu-
ting, where processing is distributed closer to the network
edge. Additionally, hybrid AI models combining multiple
machine learning techniques, such as convolutional neural net-
works (CNNs) for traffic prediction and reinforcement
learning for resource allocation, could offer more adaptable
solutions to 5G’s heterogeneous environments.
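As a minimal sketch of the federated direction mentioned above, the code below runs a few FedAvg-style rounds in which simulated edge sites fit a local linear traffic model on private data and only the model weights, averaged by sample count, are shared; the data, model, and round structure are illustrative assumptions.

```python
import numpy as np

# Minimal federated-averaging (FedAvg) sketch: edge sites fit a local linear
# traffic model on private data; only the weights are averaged centrally,
# weighted by local sample counts. Data, model, and rounds are illustrative.
rng = np.random.default_rng(0)
true_w = np.array([0.7, -0.2, 1.5])                     # shared underlying trend

def local_update(n_samples, global_w, lr=0.1, epochs=50):
    """One edge site: gradient descent on private data, starting from the global model."""
    X = rng.normal(size=(n_samples, 3))
    y = X @ true_w + rng.normal(0, 0.1, n_samples)      # private traffic samples
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / n_samples
        w -= lr * grad
    return w, n_samples

global_w = np.zeros(3)
for round_ in range(5):                                 # communication rounds
    updates = [local_update(n, global_w) for n in (200, 500, 100)]
    weights, counts = zip(*updates)
    # FedAvg: sample-count-weighted average of the local models.
    global_w = np.average(np.stack(weights), axis=0, weights=counts)

print("global model after FedAvg:", np.round(global_w, 3), "target:", true_w)
```

The key property illustrated is that raw traffic samples never leave the edge sites, which is the motivation for federated learning in this setting.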
Network slicing in 5G also requires more sophisticated
AI-driven orchestration mechanisms. Real-time prediction
and adaptation of network slices based on AI algorithms will
become crucial, particularly in managing different services’
varying latency, reliability, and bandwidth requirements.
Integrating AI models with software-defined networking
(SDN) and network function virtualization (NFV) can help
optimize slice management dynamically.
6. Conclusion
Integrating AI-driven techniques into 5G networks pre-
sents a transformative approach to overcoming the inherent
challenges of resource allocation, traffic management, and
network slicing. As 5G networks scale in complexity, tradi-
tional methods struggle to provide the real-time adaptability
required for dynamic, high-performance environments. AI
models, particularly those based on machine learning (ML)
and deep reinforcement learning (DRL), offer adaptive, da-
ta-driven solutions that can continuously learn from network
conditions to optimize performance, reduce latency, and
manage system overhead.
Resource allocation in 5G is especially critical given the
rise of data-intensive applications like autonomous vehicles,
augmented reality, and massive IoT deployments. AI-based
methods, such as DRL and genetic algorithms, provide scal-
able approaches to efficiently manage spectrum, compute
power, and energy resources. These intelligent methods ad-
dress the shortcomings of conventional models, such as
channel state information (CSI)-based allocation, by offering
lower overhead and better adaptability to fluctuating condi-
tions. By leveraging AI, 5G networks can dynamically allo-
cate resources to meet the needs of different applications,
from low-latency services to high-throughput data demands.
Traffic management is another area where AI significantly
enhances the operation of 5G networks. Through advanced
traffic prediction and real-time analysis, AI models such as
LSTM and transformer-based architectures offer sophisti-
cated tools to predict traffic patterns and optimize network
load distribution. These capabilities are crucial in managing
the expected exponential increase in mobile data traffic, en-
suring efficient bandwidth utilization, and maintaining net-
work robustness even under high demand. Furthermore,
network slicing, a cornerstone of 5G’s architecture, benefits
immensely from AI’s ability to orchestrate and optimize vir-
tual network slices in real time. AI techniques such as mul-
ti-agent reinforcement learning (MARL) enable more granu-
lar control over resource allocation across slices, ensuring
each slice meets its specific service-level agreements (SLAs)
while optimizing overall network efficiency.
AI’s integration into 5G is not just a complementary tech-
nology but a necessity to fully realize the potential of
next-generation networks. The shift from static, rule-based
systems to intelligent, adaptive algorithms marks a paradigm
shift that will define future telecommunications, enabling
more resilient, efficient, and scalable network operations that
support a wide array of emerging technologies. This conver-
gence of AI and 5G lays the foundation for autonomous
networks and opens new research directions to further en-
hance performance, efficiency, and scalability in the era of 6G
and beyond.
Abbreviations
ML: Machine Learning
DL: Deep Learning
DRL: Deep Reinforcement Learning
NR: New Radio
URLLC: Ultra-Reliable Low-Latency Communications
MARL: Multi-Agent Reinforcement Learning
HVFT: High-Volume Flexible Traffic
DDPG: Deep Deterministic Policy Gradient
mMTC: Massive Machine-Type Communications
MIMO: Massive Multiple-Input Multiple-Output Systems
CSI: Channel State Information
Author Contributions
Dileesh Chandra Bikkasani is the lead author, and Mal-
leswar Reddy Yerabolu is the co-author. The authors read and
approved the final manuscript.
Conflicts of Interest
The authors declare no conflicts of interest.
References
[1] An, J., et al., Achieving sustainable ultra-dense heterogeneous
networks for 5G. IEEE Communications Magazine, 2017.
55(12): p. 84-90.
[2] ITU. Setting the Scene for 5G: Opportunities & Challenges.
2020 [cited 2024 07/13]; Available from:
https://www.itu.int/hub/2020/03/setting-the-scene-for-5g-opportunities-challenges/
[3] Sakaguchi, K., et al., Where, when, and how mmWave is used in
5G and beyond. IEICE Transactions on Electronics, 2017.
100(10): p. 790-808.
[4] Foukas, X., et al., Network slicing in 5G: Survey and
challenges. IEEE communications magazine, 2017. 55(5): p.
94-100.
[5] Abadi, A., T. Rajabioun, and P. A. Ioannou, Traffic flow
prediction for road transportation networks with limited traffic
data. IEEE transactions on intelligent transportation systems,
2014. 16(2): p. 653-662.
[6] Imtiaz, S., et al. Random forests resource allocation for 5G
systems: Performance and robustness study. in 2018 IEEE
Wireless Communications and Networking Conference
Workshops (WCNCW). 2018. IEEE.
[7] Wang, T., S. Wang, and Z.-H. Zhou, Machine learning for 5G
and beyond: From model-based to data-driven mobile wireless
networks. China Communications, 2019. 16(1): p. 165-175.
[8] Baghani, M., S. Parsaeefard, and T. Le-Ngoc, Multi-objective
resource allocation in density-aware design of C-RAN in 5G.
IEEE Access, 2018. 6: p. 45177-45190.
[9] Shehzad, M. K., et al., ML-based massive MIMO channel
prediction: Does it work on real-world data? IEEE Wireless
Communications Letters, 2022. 11(4): p. 811-815.
[10] Chughtai, N. A., et al., Energy efficient resource allocation for
energy harvesting aided H-CRAN. IEEE Access, 2018. 6: p.
43990-44001.
[11] Zarin, N. and A. Agarwal, Hybrid radio resource management
for time-varying 5G heterogeneous wireless access network.
IEEE Transactions on Cognitive Communications and
Networking, 2021. 7(2): p. 594-608.
[12] Huang, H., et al., Optical true time delay pool based hybrid
beamformer enabling centralized beamforming control in
millimeter-wave C-RAN systems. Science China Information
Sciences, 2021. 64(9): p. 192304.
[13] Lin, X. and S. Wang. Efficient remote radio head switching
scheme in cloud radio access network: A load balancing
perspective. in IEEE INFOCOM 2017-IEEE Conference on
Computer Communications. 2017. IEEE.
[14] Gowri, S. and S. Vimalanand, QoS-Aware Resource Allocation
Scheme for Improved Transmission in 5G Networks with IOT.
SN Computer Science, 2024. 5(2): p. 234.
[15] Bouras, C. J., E. Michos, and I. Prokopiou. Applying Machine
Learning and Dynamic Resource Allocation Techniques in
Fifth Generation Networks. 2022. Cham: Springer
International Publishing.
[16] Li, R., et al., Intelligent 5G: When cellular networks meet
artificial intelligence. IEEE Wireless communications, 2017.
24(5): p. 175-183.
[17] Ericsson. 5G to account for around 75 percent of mobile data
traffic in 2029. [cited 2024 07/13]; Available from:
https://www.ericsson.com/en/reports-and-papers/mobility-report/dataforecasts/mobile-traffic-forecast
[18] Amaral, P., et al. Machine learning in software defined
networks: Data collection and traffic classification. in 2016
IEEE 24th International conference on network protocols
(ICNP). 2016. IEEE.
[19] Wang, H., et al. Understanding mobile traffic patterns of large
scale cellular towers in urban environment. in Proceedings of
the 2015 Internet Measurement Conference. 2015.
[20] Box, G. E., et al., Time series analysis: forecasting and control.
2015: John Wiley & Sons.
[21] Shu, Y., et al., Wireless traffic modeling and prediction using
seasonal ARIMA models. IEICE transactions on
communications, 2005. 88(10): p. 3992-3999.
[22] Kumari, A., J. Chandra, and A. S. Sairam. Predictive flow
modeling in software defined network. in TENCON 2019-2019
IEEE Region 10 Conference (TENCON). 2019. IEEE.
[23] Moore, J. S., A fast majority vote algorithm. Automated
Reasoning: Essays in Honor of Woody Bledsoe, 1981: p.
105-108.
[24] Arjoune, Y. and S. Faruque. Artificial intelligence for 5g
wireless systems: Opportunities, challenges, and future
research direction. in 2020 10th annual computing and
communication workshop and conference (CCWC). 2020.
IEEE.
[25] Mennes, R., et al. A neural-network-based MF-TDMA MAC
scheduler for collaborative wireless networks. in 2018 IEEE
Wireless Communications and Networking Conference
(WCNC). 2018. IEEE.
[26] Vaswani, A., et al., Attention is all you need. Advances in
neural information processing systems, 2017. 30.
[27] Chinchali, S., et al. Cellular network traffic scheduling with
deep reinforcement learning. in Proceedings of the AAAI
Conference on Artificial Intelligence. 2018.
[28] Peng, B., et al., Decentralized scheduling for cooperative
localization with deep reinforcement learning. IEEE
Transactions on Vehicular Technology, 2019. 68(5): p.
4295-4305.
[29] Cao, G., et al., AIF: An artificial intelligence framework for
smart wireless network management. IEEE Communications
Letters, 2017. 22(2): p. 400-403.
[30] Challita, U., L. Dong, and W. Saad, Proactive resource
management for LTE in unlicensed spectrum: A deep learning
perspective. IEEE transactions on wireless communications,
2018. 17(7): p. 4674-4689.
[31] Stampa, G., et al., A deep-reinforcement learning approach for
software-defined networking routing optimization. arXiv
preprint arXiv:1709.07080, 2017.