Artificial Intelligence in 5G Technology: A Survey
Manuel Eugenio Morocho Cayamcela*, Wansu Lim
Department of Electronic Engineering, Kumoh National Institute of Technology
Gumi, Gyeongsangbuk-do, 39177, South Korea
Email: *eugeniomorocho@kumoh.ac.kr, w.lim@kumoh.ac.kr
Abstract—A fully operative and efficient 5G network cannot be complete without the inclusion of artificial intelligence (AI) routines. Existing 4G networks with all-IP (Internet Protocol) broadband connectivity are based on a reactive conception, which leads to poor spectrum efficiency. AI and its sub-categories, such as machine learning and deep learning, have been evolving as a discipline to the point that these mechanisms now allow fifth-generation (5G) wireless networks to be predictive and proactive, which is essential in making the 5G vision conceivable. This paper is motivated by the vision of intelligent base stations making decisions by themselves, and of mobile devices creating dynamically adaptable clusters based on learned data rather than pre-established and fixed rules, which will lead to improvements in the efficiency, latency, and reliability of current and real-time network applications in general. An exploration of the potential of AI-based solution approaches in the context of 5G mobile and wireless communications technology is presented, evaluating the different challenges and open issues for future research.
Index Terms—5G Networks, Artificial Intelligence, IT Convergence, Machine Learning, Deep Learning, Next Generation Network.
I. INTRODUCTION
Artificial intelligence is well suited to problems for which existing solutions require a lot of hand-tuning or long lists of rules, to complex problems for which there is no good solution at all using traditional approaches, to adaptation in fluctuating environments, to gaining insights from complex problems that involve large amounts of data, and in general to noticing the patterns that a human can miss [1]. Hard-coded software can go from a long list of complex rules that is hard to maintain to a system that automatically learns from previous data, detects anomalies, predicts future scenarios, and so on. These problems can be tackled by adopting the learning capability offered by AI, together with the vast amounts of transmitted data or wireless configuration datasets.
We have witnessed AI and mobile and wireless systems becoming an essential social infrastructure, mobilizing our daily lives and facilitating the digital economy in multiple forms [2]. However, 5G wireless communications and AI have somehow been perceived as dissimilar research fields, despite the potential they might have when fused together.
Certain applications available at this intersection of fields have been addressed within specific topics of AI and next-generation wireless communication systems. Li et al. [3] highlighted the potential of AI as an enabler for cellular networks to cope with the 5G standardization requirements. Bogale et al. [4] discussed machine learning (ML) techniques in the context of the fog (edge) computing architecture, which aims to distribute computing power, storage, control, and networking functions closer to the users. Jiang et al. [5] focused on the challenges of AI in assisting radio communications in intelligent adaptive learning and decision-making.
The next generation of mobile and wireless communication technologies also requires the use of optimization to minimize or maximize certain objective functions. Many of the problems in mobile and wireless communications are neither linear nor polynomial; consequently, they must be approximated. Artificial neural networks (ANNs) are an AI technique that has been suggested to model the objective function of non-linear problems that require optimization [6].
In this article, we introduce the potential of AI, from basic learning algorithms to ML, deep learning, and related techniques, in next-generation wireless networks, helping to fulfill the diversified requirements of the 5G standards to operate in a fully automated fashion, meet the increased capacity demand, and serve users with a superior quality of experience (QoE). The article is organized according to the level of supervision the AI technique requires during the training stage. The major categories addressed in the following sections are supervised learning, unsupervised learning, and reinforcement learning. To understand the difference between these three learning subcategories, a quintessential concept of learning can be invoked: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E" [7].
Supervised learning comprises looking at several examples of a random vector x and its associated label value or vector y, then learning to predict y from x by estimating p(y | x), or particular properties of that distribution. Unsupervised learning involves observing different instances of a random vector x and aiming to learn the probability distribution p(x), or its properties. Reinforcement learning interacts with the environment, forming feedback loops between the learning system and its experiences in terms of rewards and penalties [8].
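To make the distinction between estimating p(y | x) and modeling p(x) concrete, the short Python sketch below (an illustration on synthetic data using scikit-learn, not drawn from any of the surveyed works) fits a supervised classifier and an unsupervised density model on the same feature matrix.

```python
# Minimal sketch (assumption: scikit-learn is available; data is synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic "radio" features: [SINR in dB, offered load], with a binary label
# indicating whether a QoS target was met (purely illustrative).
X = np.vstack([rng.normal([15.0, 0.3], [3.0, 0.1], size=(200, 2)),
               rng.normal([5.0, 0.8], [3.0, 0.1], size=(200, 2))])
y = np.hstack([np.ones(200), np.zeros(200)])

# Supervised learning: estimate p(y | x) from labeled examples.
clf = LogisticRegression().fit(X, y)
print("p(QoS met | SINR=12 dB, load=0.5):",
      clf.predict_proba([[12.0, 0.5]])[0, 1])

# Unsupervised learning: model p(x) itself, with no labels involved.
density = GaussianMixture(n_components=2, random_state=0).fit(X)
print("log p(x) at the same point:",
      density.score_samples([[12.0, 0.5]])[0])
```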
II. SUPERVISED LEARNING IN 5G MOBILE AND WIRELESS COMMUNICATIONS TECHNOLOGY
In supervised learning, each training example has to be fed along with its respective label. The notion is to train a learning model on a sample of problem instances with known optima, and then use the model to recognize optimal solutions to new instances. A typical task in supervised learning is to predict a target numeric value given a set of features, called predictors; this description of the task is called regression.
Transfer learning is a popular technique often used for classification. Essentially, one would train a convolutional neural network (CNN) on a very large dataset, for example ImageNet [9], and then fine-tune the CNN on a different dataset. The advantage is that training on the large dataset has already been done by others, who offer the learned weights for public research use. The dataset can change during the implementation, but the strength of AI is that it does not depend on fixed rules; adapting the model to changes over time is therefore done by retraining the model with the augmented or modified dataset.
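The following sketch illustrates this transfer-learning workflow under the assumption that PyTorch and torchvision are available; the number of target classes and the data loader are hypothetical placeholders, not artifacts of the surveyed papers.

```python
# Hedged sketch: fine-tuning a pretrained CNN (assumes torch/torchvision installed;
# the data loader passed to fine_tune() is a hypothetical, task-specific dataset).
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4  # e.g., a small set of task-specific categories (assumption)

# Load a CNN whose weights were already learned on ImageNet by others.
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pretrained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer to match the new problem.
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new layer is trained; retraining later with an augmented or
# modified dataset follows the same loop.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def fine_tune(loader, epochs=3):
    model.train()
    for _ in range(epochs):
        for images, labels in loader:   # loader built from the new dataset
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```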
Another typical supervised learning task is classification. The key difference from regression is that, with ML algorithms like logistic regression, the model can output the probability that a certain instance belongs to a given class. This type of system is trained with multiple examples of each class, along with their labels, and the model must learn how to classify new instances.
LTE small cells are increasingly being deployed in 5G networks to cope with the high traffic demands. These small-scale cells are characterized by unpredictable and dynamic interference patterns, expanding the demand for self-optimized solutions that can lead to lower drop rates, higher data rates, and lower cost for the operators. Self-organizing networks (SON) are expected to learn and dynamically adapt to different environments. Several AI-based solutions have been discussed for the selection of the optimal network configuration in SONs. In [10], machine learning and statistical regression techniques are evaluated (bagging tree, boosted tree, SVM, linear regressors, etc.), gathering radio performance metrics such as path loss and throughput for particular frequencies and bandwidth settings from the cells, and adjusting the parameters using learning-based approaches to predict the performance that a user will experience, given previous performance measurement instances/samples. The authors showed that the learning-based dynamic frequency and bandwidth allocation (DFBA) prediction methods yield outstanding performance gains, with the bagging tree prediction method as the most promising approach to increase the capacity of next-generation cellular networks.

Extensive interest in path-loss prediction has arisen since researchers noticed the power of AI to build more efficient and accurate path-loss models based on publicly available datasets [11]. The use of AI has been proven to provide adaptability to network designers who rely on signal propagation models. Timoteo et al. [12] proposed a path loss prediction model for urban environments using support vector regression to ensure an acceptable level of quality of service (QoS) for wireless network users. They employed different kernels and parameters over the Okumura-Hata model and the Ericsson 9999 model, and obtained results similar to a complex neural network, but with lower computational complexity.
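A minimal sketch of a support vector regression path-loss predictor in this spirit is shown below; it assumes scikit-learn and a synthetic log-distance dataset rather than the measurements, kernels, or parameters actually used in [12].

```python
# Hedged sketch: SVR-based path loss prediction on synthetic log-distance data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Synthetic measurements: distance (m) -> path loss (dB) with log-normal shadowing.
d = rng.uniform(50, 2000, size=500)
pl = 128.1 + 37.6 * np.log10(d / 1000.0) + rng.normal(0, 6, size=d.shape)  # urban-like model

X = np.log10(d).reshape(-1, 1)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X, pl)

# Predict path loss at new distances (300 m and 1500 m).
d_new = np.array([[np.log10(300.0)], [np.log10(1500.0)]])
print(model.predict(d_new))
```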
Wireless communications rely heavily on channel state information (CSI) to make informed decisions in network operations and during signal processing. Liu et al. [13] investigated unobservable CSI for wireless communications and proposed a neural-network-based approximation for channel learning, to infer this unobservable information from an observable channel. Their framework was built upon the dependence between channel responses and location information. To build the supervised learning framework, they trained the network with channel samples, where the unobservable metrics can be calculated from traditional pilot-aided channel estimation. The applications of their work can be extended to cell selection in multi-tier networks, device discovery for device-to-device (D2D) communications, or end-to-end user association for load balancing, among others.
Sarigiannidis et al. [14] used a machine-learning framework based on supervised learning on a Software-Defined-Networking-enabled (SDN-enabled) hybrid optical-wireless network. The machine-learning framework receives traffic-aware knowledge from the SDN controllers and adjusts the uplink-downlink configuration in the LTE radio communication. The authors argue that their mechanism is capable of determining the best configuration based on the traffic dynamics of the hybrid network, offering significant network improvements in terms of jitter and latency.
A common AI architecture used to model or approximate objective functions for existing models, or to create accurate models that were impossible to represent in the past without the intervention of learning machines, is the artificial neural network (ANN). ANNs have been proposed to solve propagation loss estimation in dynamic environments, where the input parameters can be selected from information about the transmitter, receiver, buildings, frequency, and so on, and the learning network trains on that data to estimate the function that best approximates the propagation loss for next-generation wireless networks [15]–[18]. In the same context, Ayadi et al. [19] proposed a multi-layer perceptron (MLP) architecture to predict coverage for either short or long distances, at multiple frequencies, and in all environment types. The presented MLP is a feed-forward network trained with backpropagation to update the weights of the ANN. They used the inputs of the ITU-R P.1812-4 model [20] to feed their network, composed of an input layer, one hidden layer, and one output layer. They showed that the ANN model is more accurate than the ITU model at predicting coverage in outdoor environments, using the standard deviation and correlation factor as comparison measures.
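The sketch below makes the MLP idea concrete with a single-hidden-layer feed-forward regressor trained by backpropagation; it is an assumption-laden illustration on synthetic distance and frequency features, not the network or the ITU-R P.1812-4 inputs of [19].

```python
# Hedged sketch: single-hidden-layer MLP for coverage/path-loss regression.
# Assumes scikit-learn; features and target are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n = 2000
distance_km = rng.uniform(0.1, 10.0, n)
freq_mhz = rng.choice([450, 850, 1800, 2100, 2600], n)

# Synthetic free-space-like loss plus noise, as a placeholder target.
loss_db = (32.45 + 20 * np.log10(distance_km) + 20 * np.log10(freq_mhz)
           + rng.normal(0, 4, n))

X = np.column_stack([distance_km, freq_mhz])
mlp = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32,),   # one hidden layer (size is an assumption)
                 activation="relu",
                 solver="adam",              # gradient-based (backprop) training
                 max_iter=2000,
                 random_state=0),
)
mlp.fit(X, loss_db)
print(mlp.predict([[2.5, 1800]]))  # predicted loss for 2.5 km at 1800 MHz
```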
Other AI techniques with potential for wireless communications include K-Nearest Neighbors, Logistic Regression, Decision Trees, and Random Forests. Table I shows a summary of the potential applications of supervised learning in 5G wireless communication technologies.
III. UNSUPERVISED LEARNING IN 5G MOBILE AND WIRELESS COMMUNICATIONS TECHNOLOGY
In unsupervised learning, the training data is unlabeled, and the system attempts to learn without any guidance. This technique is particularly useful when we want to detect groups with similar characteristics. At no point do we tell the algorithm to try to detect groups of related attributes; the algorithm finds these connections without intervention. However, in some cases, we can select the number of clusters we want the algorithm to create.
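As a brief illustration (a sketch assuming scikit-learn and synthetic user coordinates, not tied to any specific surveyed scheme), K-means can group user-equipment positions into a chosen number of clusters without any labels:

```python
# Hedged sketch: clustering synthetic UE positions with K-means (no labels used).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Synthetic UE positions (metres) drawn around three traffic hotspots.
hotspots = np.array([[0, 0], [800, 300], [400, 900]])
ue_xy = np.vstack([rng.normal(c, 120, size=(100, 2)) for c in hotspots])

# We only choose how many groups to form; the grouping itself is learned.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(ue_xy)
print("cluster sizes:", np.bincount(kmeans.labels_))
print("cluster centres:\n", kmeans.cluster_centers_)
```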
Balevi et al. [21] incorporated fog networking into heterogeneous cellular networks and used an unsupervised soft-clustering algorithm to locate the fog nodes that are upgraded from low power nodes (LPNs) to high power nodes (HPNs). The authors showed that by applying machine learning clustering to a priori known data, such as the number of fog nodes and the locations of all LPNs within a cell, they were able to determine a clustering configuration that reduced latency in the network. The latency calculation was performed with open-loop communications, with no ACK for transmitted packets, and compared to the Voronoi tessellation model, a classical model based on Euclidean distance.
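The exact algorithm of [21] is not reproduced here; as a generic stand-in, the sketch below performs soft clustering of synthetic LPN locations with a Gaussian mixture model, where each LPN receives a membership probability for every cluster and the most representative LPN per cluster is picked as an upgrade candidate (an illustrative selection rule, not the rule of [21]).

```python
# Hedged sketch: soft clustering of LPN locations with a Gaussian mixture model.
# Generic stand-in for the unsupervised soft-clustering in [21]; data are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
lpn_xy = rng.uniform(0, 1000, size=(60, 2))   # a priori known LPN positions (m)
n_fog = 4                                     # number of fog nodes to create (assumption)

gmm = GaussianMixture(n_components=n_fog, random_state=0).fit(lpn_xy)
membership = gmm.predict_proba(lpn_xy)        # soft assignments: shape (60, 4)

# For each cluster, pick the LPN with the highest membership as the
# candidate to upgrade from LPN to HPN (illustrative selection rule).
candidates = membership.argmax(axis=0)
print("LPN indices proposed for upgrade:", candidates)
```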
A typical unsupervised learning technique is K-means clustering; numerous authors have investigated the applications of this particular clustering technique in next-generation wireless network systems. Sobabe et al. [22] proposed a cooperative spectrum-sensing algorithm using a combination of an optimized version of K-means clustering, a Gaussian mixture model, and the expectation-maximization (EM) algorithm. They showed that their learning algorithm outperformed the energy vector-based algorithm. Song et al. [23] discussed how the K-means clustering algorithm and its classification capabilities can aid efficient relay selection in urban vehicular networks. The authors investigated methods for multi-hop wireless broadcast and how K-means is a key factor in the decision-making and learning steps of the base stations, which learn from the distribution of the devices and automatically choose the most suitable devices to use as relays.
When a wireless network experiences unusual traffic demand at a particular time and location, it is often called an anomaly. To help identify these anomalies, Parwez et al. [24] used mobile network data for anomaly detection purposes, with the help of hierarchical clustering to identify this kind of inconsistency. The authors claim that the detection of these data deviations helps to establish regions of interest in the network that require special actions, such as resource allocation or fault avoidance solutions.
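As an illustrative sketch (synthetic traffic features and assumed settings, not the dataset or parameters of [24]), hierarchical clustering can separate a small group of unusually high-traffic observations from the normal ones:

```python
# Hedged sketch: hierarchical (agglomerative) clustering for anomaly detection
# on synthetic per-cell traffic features; not the data or parameters of [24].
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(5)
normal = rng.normal([100.0, 50.0], [10.0, 5.0], size=(300, 2))    # [calls, MB] per interval
anomalous = rng.normal([260.0, 160.0], [15.0, 10.0], size=(8, 2))
traffic = np.vstack([normal, anomalous])

clusterer = AgglomerativeClustering(n_clusters=2, linkage="ward")
labels = clusterer.fit_predict(traffic)

# The smaller cluster is flagged as the anomalous region of interest.
minority = np.argmin(np.bincount(labels))
print("anomalous samples detected:", np.where(labels == minority)[0])
```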
Ultra-dense small cells (UDSC) are expected to increase network capacity as well as spectrum and energy efficiency. To consider the effects of cell switching, dynamic interference, time-varying user density, dynamic traffic patterns, and changing frequencies, Wang et al. [25] proposed a data-driven resource management framework for UDSC using affinity propagation, an unsupervised learning clustering approach, to perform data analysis and extract the knowledge and behavior of the system under complex environments. They then introduced a power control and channel management system based on the results of the unsupervised learning algorithm. They conclude, by means of simulation, that their data-driven resource management framework significantly improves energy efficiency and throughput in UDSC.
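The sketch below illustrates affinity propagation on synthetic small-cell interference and load features; unlike K-means, it does not require the number of clusters in advance, and the returned exemplars could seed a power-control step of the kind described in [25]. It is not a reproduction of that framework.

```python
# Hedged sketch: affinity propagation on synthetic small-cell features
# (e.g., [average interference dBm, traffic load]); not the framework of [25].
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(6)
cells = np.vstack([
    rng.normal([-95.0, 0.2], [2.0, 0.05], size=(40, 2)),   # lightly loaded cells
    rng.normal([-80.0, 0.7], [2.0, 0.05], size=(40, 2)),   # interference-heavy cells
])

ap = AffinityPropagation(random_state=0).fit(cells)
print("number of clusters found:", len(ap.cluster_centers_indices_))
print("exemplar cells (row indices):", ap.cluster_centers_indices_)
```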
Alternative clustering models such as K-Means, Mini-Batch K-Means, Mean-Shift clustering, DBSCAN, Agglomerative Clustering, etc., can be used to associate users to a certain base station in order to optimize the user equipment (UE) and base station (BS) transmitting/receiving power. Table II shows a summary of the potential applications of unsupervised learning in 5G wireless communication technologies.
IV. REINFORCEMENT LEARNING IN 5G MOBILE AND WIRELESS COMMUNICATIONS TECHNOLOGY
The philosophy of the reinforcement learning scheme is based on a learning system, often called an agent, that reacts to the environment. The agent performs actions and gets rewards or penalties (negative rewards) in return for its actions. This means that the agent has to learn by itself, creating a policy that defines the action the agent should choose in a certain situation. The aim of the reinforcement learning task is to maximize the aforementioned reward over time.
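A minimal tabular Q-learning loop over a toy two-state channel-selection environment (entirely synthetic, not drawn from the surveyed works) makes the agent, reward, and policy vocabulary concrete:

```python
# Hedged sketch: tabular Q-learning on a toy channel-selection task.
# States: 0 = low interference, 1 = high interference; actions: 0 = stay, 1 = switch channel.
import numpy as np

rng = np.random.default_rng(7)
n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def step(state, action):
    """Toy environment: switching when interference is high tends to pay off."""
    if state == 1 and action == 1:
        return 0, 1.0          # next state, reward
    if state == 0 and action == 0:
        return (0 if rng.random() < 0.8 else 1), 1.0
    return (1 if rng.random() < 0.5 else 0), -1.0

state = 0
for _ in range(5000):
    # Epsilon-greedy policy: mostly exploit learned values, sometimes explore.
    action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Q-learning update toward the reward plus discounted best future value.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("learned policy (best action per state):", Q.argmax(axis=1))
```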
Resource allocation in Long Term Evolution (LTE) networks has been a dilemma since the technology was introduced. To overcome wireless spectrum scarcity in 5G, a novel deep learning approach that accounts for the coexistence of LTE and LTE-Unlicensed (LTE-U), modeling the resource allocation problem in LTE-U small base stations (SBSs), has been introduced in [26]. To accomplish their contribution, the authors introduced a reinforcement learning algorithm based on long short-term memory (RL-LSTM) cells to proactively allocate the resources of LTE-U over the unlicensed spectrum. The formulated problem resembles a non-cooperative game between the SBSs, where an RL-LSTM framework enables the SBSs to learn automatically which of the unlicensed channels to use, based on the probability of future changes in the WLAN activity and the LTE-U traffic loads of the unlicensed channels. This work takes into account the value of LTE-U as a proposal that allows cellular network operators to offload some of their data traffic, and the relevance of AI in the form of RL-LSTM as a promising solution to long-term dependency learning, sequence, and time-series problems. Nevertheless, researchers should be warned that this deep learning architecture is one of the most difficult to train, due to the vanishing and exploding gradient problem in recurrent neural networks (RNNs) [27], the speed of activation functions, as well as the initialization of parameters for LSTM systems [28].
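To give a flavor of how an LSTM can learn temporal occupancy patterns, the sketch below trains a plain sequence predictor in PyTorch on a synthetic channel-activity trace; it is a generic illustration, not the RL-LSTM game formulation of [26].

```python
# Hedged sketch: an LSTM forecasting next-slot unlicensed-channel activity from
# a synthetic periodic WLAN occupancy trace; not the RL-LSTM framework of [26].
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic occupancy trace: busy/idle pattern with some noise.
t = torch.arange(0, 2000, dtype=torch.float32)
trace = ((torch.sin(0.2 * t) > 0).float() + 0.05 * torch.randn_like(t)).clamp(0, 1)

seq_len = 20
X = torch.stack([trace[i:i + seq_len] for i in range(len(trace) - seq_len - 1)]).unsqueeze(-1)
y = torch.stack([trace[i + seq_len] for i in range(len(trace) - seq_len - 1)]).round()

class OccupancyLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)            # (batch, seq, hidden)
        return self.head(out[:, -1, :])  # logit for next-slot occupancy

model = OccupancyLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):                   # short full-batch loop, for illustration only
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(-1), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```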
Reinforcement learning has also played an important role in heterogeneous networks (HetNets), enabling femtocells (FCs) to autonomously and opportunistically sense the radio environment and tune their parameters accordingly to satisfy specific pre-set quality-of-service requirements. Alnwaimi et al. [29] showed that by using reinforcement learning for femtocell self-configuration, based on dynamic-learning games for a multi-objective fully-distributed strategy, the intra/inter-tier interference can be reduced significantly. The collision and reconfiguration measurements were used as a "learning cost" during training. This self-organizing potential empowers FCs to identify available spectrum for opportunistic use, based on the learned parameters.
TABLE I
SUMMARY OF SUPERVISED LEARNING-BASED SCHEMES FOR 5G MOBILE AND WIRELESS COMMUNICATIONS TECHNOLOGY

AI Technique: Supervised Learning

Learning Model | 5G-based Applications
Machine learning and statistical logistic regression techniques. | Dynamic frequency and bandwidth allocation in self-organized LTE dense small cell deployments (as in [10]).
Support Vector Machines (SVM). | Path loss prediction model for urban environments (as in [12]).
Neural-network-based approximation. | Channel learning to infer unobservable channel state information (CSI) from an observable channel (as in [13]).
Supervised machine learning frameworks. | Adjustment of the TDD uplink-downlink configuration in XG-PON-LTE systems to maximize the network performance based on the ongoing traffic conditions in the hybrid optical-wireless network (as in [14]).
Artificial Neural Networks (ANN) and Multi-Layer Perceptrons (MLPs). | Modelling and approximation of objective functions for link budget and propagation loss for next-generation wireless networks (as in [15]–[19]).
TABLE II
SUMMARY OF UNSUPERVISED LEARNING-BASED SCHEMES FOR 5G MOBILE AND WIRELESS COMMUNICATIONS TECHNOLOGY

AI Technique: Unsupervised Learning

Learning Model | 5G-based Applications
K-means clustering, Gaussian Mixture Model (GMM), and Expectation Maximization (EM). | Cooperative spectrum sensing (as in [22]). Relay node selection in vehicular networks (as in [23]).
Hierarchical Clustering. | Anomaly/fault/intrusion detection in mobile wireless networks (as in [24]).
Unsupervised soft-clustering machine learning framework. | Latency reduction by clustering fog nodes to automatically decide which low power node (LPN) is upgraded to a high power node (HPN) in heterogeneous cellular networks (as in [21]).
Affinity Propagation Clustering. | Data-driven resource management for ultra-dense small cells (as in [25]).
TABLE III
SUMMARY OF REINFORCEMENT LEARNING-BASED SCHEMES FOR 5G MOBILE AND WIRELESS COMMUNICATIONS TECHNOLOGY

AI Technique: Reinforcement Learning

Learning Model | 5G-based Applications
Reinforcement learning algorithm based on long short-term memory (RL-LSTM) cells. | Proactive resource allocation in LTE-U networks, formulated as a non-cooperative game which enables SBSs to learn which unlicensed channel to use, given the long-term WLAN activity in the channels and the LTE-U traffic loads (as in [26]).
Gradient follower (GF), the modified Roth-Erev (MRE), and the modified Bush and Mosteller (MBM). | Enable femtocells (FCs) to autonomously and opportunistically sense the radio environment and tune their parameters in HetNets, to reduce intra/inter-tier interference (as in [29]).
Reinforcement learning with network-assisted feedback. | Heterogeneous Radio Access Technology (RAT) selection (as in [30]).
5G wireless networks will also contain multiple radio access technologies (RATs). However, selecting the right RAT remains an open problem in terms of speed, exploration times, and convergence. Nguyen et al. [30] developed a feedback framework using limited network-assisted information from the base stations (BSs) to improve the efficiency of distributed algorithms for RAT selection. The framework uses reinforcement learning with network-assisted feedback to overcome the aforementioned problems. Table III shows a summary of the potential applications of reinforcement learning in 5G wireless communication technologies.
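As a deliberately simplified illustration of learning-based RAT selection (a stand-alone epsilon-greedy bandit on synthetic throughput feedback, far simpler than the network-assisted framework of [30]):

```python
# Hedged sketch: epsilon-greedy selection among RATs using throughput feedback.
# A simplified stand-in for the network-assisted scheme of [30]; data are synthetic.
import numpy as np

rng = np.random.default_rng(8)
rats = ["LTE", "Wi-Fi", "mmWave"]
true_mean_tput = np.array([30.0, 55.0, 80.0])   # Mbps, unknown to the user (assumption)

estimates = np.zeros(len(rats))
counts = np.zeros(len(rats))
epsilon = 0.1

for t in range(2000):
    # Explore occasionally; otherwise pick the RAT with the best current estimate.
    a = rng.integers(len(rats)) if rng.random() < epsilon else int(estimates.argmax())
    feedback = rng.normal(true_mean_tput[a], 10.0)   # observed throughput (reward)
    counts[a] += 1
    estimates[a] += (feedback - estimates[a]) / counts[a]   # incremental mean update

print("estimated throughput per RAT:", dict(zip(rats, estimates.round(1))))
print("preferred RAT:", rats[int(estimates.argmax())])
```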
V. CONCLUSION
After exploring some of the successful cases where AI is used as a tool to improve 5G technologies, we strongly believe that the convergence between these two fields of expertise will have an enormous impact on the development of future-generation networks. The era when wireless network researchers were afraid to use AI-based algorithms, due to a lack of understanding of the artificial learning process, has been left in the past. Nowadays, with the power and ubiquity of information, numerous researchers are adapting their knowledge and expanding their arsenal of tools with AI-based models, algorithms, and practices, especially in the 5G world, where even a few milliseconds of latency can make a difference. A reliable 5G system requires extremely low latency, which is why not everything can be stored in remote cloud servers far away. Latency increases with distance and with the congestion of network links. Base stations have limited storage size, so they have to learn to predict user needs by applying a variety of artificial intelligence tools. With these tools, every base station will be able to store a reduced but adequate set of files or contents. This is one example of why our future networks must be predictive, and of how artificial intelligence becomes crucial in optimizing this kind of problem in the network. An additional goal of linking AI with 5G networks would be to obtain significant improvements in the context of edge caching just by applying off-the-shelf machine learning algorithms. We have shown how AI can be a solution that fills this gap of requirements in 5G mobile and wireless communications, allowing base stations to predict what kind of content nearby users may request in the near future, allocating dynamic frequencies in self-organized LTE dense small cell deployments, predicting path loss/link budget with approximated NN models, inferring unobservable channel state information from an observable channel, adjusting the TDD uplink-downlink configuration in XG-PON-LTE systems based on ongoing network conditions, sensing the spectrum using unsupervised models, reducing latency by automatically configuring the clusters in HetNets, detecting anomalies/faults/intrusions in mobile wireless networks, managing the resources in ultra-dense small cells, selecting the relay nodes in vehicular networks, allocating the resources in LTE-U networks, enabling autonomous and opportunistic sensing of the radio environment in femtocells, and selecting the optimal radio access technology (RAT) in HetNets, among others.
ACKNOWLEDGMENT
This work was supported by the Global Excellent Technology
Innovation Program (10063078) funded by the Ministry of
Trade, Industry and Energy (MOTIE) of Korea; and by the
National Research Foundation of Korea (NRF) grant funded by
the Korea government (MSIP; Ministry of Science, ICT &
Future Planning) (No. 2017R1C1B5016837).
REFERENCES
[1] A. Geron, “Hands-on machine learning with Scikit-Learn and TensorFlow: concepts, tools, and techniques to build intelligent systems,” p. 543, 2017. [Online]. Available: http://shop.oreilly.com/product/0636920052289.do
[2] A. Osseiran, J. F. Monserrat, and P. Marsch, 5G Mobile and Wireless Communications Technology, 1st ed. United Kingdom: Cambridge University Press, 2017. [Online]. Available: www.cambridge.org/9781107130098
[3] R. Li, Z. Zhao, X. Zhou, G. Ding, Y. Chen, Z. Wang, and H. Zhang, “Intelligent 5G: When Cellular Networks Meet Artificial Intelligence,” IEEE Wireless Communications, vol. 24, no. 5, pp. 175–183, 2017. [Online]. Available: http://www.rongpeng.info/files/Paper_wcm2016.pdf
[4] T. E. Bogale, X. Wang, and L. B. Le, “Machine Intelligence Techniques for Next-Generation Context-Aware Wireless Networks,” ITU Special Issue: The impact of Artificial Intelligence (AI) on communication networks and services, vol. 1, 2018. [Online]. Available: https://arxiv.org/pdf/1801.04223.pdf; http://arxiv.org/abs/1801.04223
[5] C. Jiang, H. Zhang, Y. Ren, Z. Han, K. C. Chen, and L. Hanzo, “Machine Learning Paradigms for Next-Generation Wireless Networks,” IEEE Wireless Communications, 2017.
[6] G. Villarrubia, J. F. De Paz, P. Chamoso, and F. D. la Prieta, “Artificial neural networks used in optimization problems,” Neurocomputing, vol. 272, pp. 10–16, 2018.
[7] T. M. Mitchell, Machine Learning, 1st ed. McGraw-Hill Science/Engineering/Math, 1997. [Online]. Available: https://www.cs.ubbcluj.ro/~gabis/ml/ml-books/McGrawHill-MachineLearning-TomMitchell.pdf
[8] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, 1st ed., T. Dietterich, Ed. London, England: The MIT Press, 2016. [Online]. Available: www.deeplearningbook.org
[9] Stanford Vision Lab, Stanford University, and Princeton University, “ImageNet.” [Online]. Available: http://image-net.org/about-overview
[10] B. Bojović, E. Meshkova, N. Baldo, J. Riihijärvi, and M. Petrova, “Machine learning-based dynamic frequency and bandwidth allocation in self-organized LTE dense small cell deployments,” EURASIP Journal on Wireless Communications and Networking, vol. 2016, no. 1, 2016. [Online]. Available: https://jwcn-eurasipjournals.springeropen.com/track/pdf/10.1186/s13638-016-0679-0
[11] S. Y. Han and N. B. Abu-Ghazaleh, “Efficient and Consistent Path Loss Model for Mobile Network Simulation,” IEEE/ACM Transactions on Networking, vol. PP, no. 99, pp. 1–1, 2015.
[12] R. D. A. Timoteo, D. C. Cunha, and G. D. C. Cavalcanti, “A Proposal for Path Loss Prediction in Urban Environments using Support Vector Regression,” Advanced International Conference on Telecommunications, vol. 10, no. c, pp. 119–124, 2014.
[13] J. Liu, R. Deng, S. Zhou, and Z. Niu, “Seeing the unobservable: Channel learning for wireless communication networks,” 2015 IEEE Global Communications Conference, GLOBECOM 2015, 2015.
[14] P. Sarigiannidis, A. Sarigiannidis, I. Moscholios, and P. Zwierzykowski, “DIANA: A Machine Learning Mechanism for Adjusting the TDD Uplink-Downlink Configuration in XG-PON-LTE Systems,” Mobile Information Systems, vol. 2017, no. c, 2017.
[15] S. P. Sotiroudis, S. K. Goudos, K. A. Gotsis, K. Siakavara, and J. N. Sahalos, “Application of a Composite Differential Evolution Algorithm in Optimal Neural Network Design for Propagation Path-Loss Prediction in Mobile Communication Systems,” IEEE Antennas and Wireless Propagation Letters, vol. 12, pp. 364–367, 2013.
[16] J. M. Mom, C. O. Mgbe, and G. A. Igwue, “Application of Artificial Neural Network For Path Loss Prediction In Urban Macrocellular Environment,” American Journal of Engineering Research (AJER), vol. 03, no. 02, pp. 270–275, 2014.
[17] I. Popescu, D. Nikitopoulos, I. Nafornita, and P. Constantinou, “ANN prediction models for indoor environment,” IEEE International Conference on Wireless and Mobile Computing, Networking and Communications 2006, WiMob 2006, pp. 366–371, 2006.
[18] S. P. Sotiroudis, K. Siakavara, and J. N. Sahalos, “A Neural Network Approach to the Prediction of the Propagation Path-loss for Mobile Communications Systems in Urban Environments,” PIERS Online, vol. 3, no. 8, pp. 1175–1179, 2007.
[19] M. Ayadi, A. Ben Zineb, and S. Tabbane, “A UHF Path Loss Model Using Learning Machine for Heterogeneous Networks,” IEEE Transactions on Antennas and Propagation, vol. 65, no. 7, pp. 3675–3683, 2017.
[20] International Telecommunication Union, “A path-specific propagation prediction method for point-to-area terrestrial services in the VHF and UHF bands,” ITU P-Series Radiowave propagation, no. P.1812-4, pp. 1–35, 2015. [Online]. Available: https://www.itu.int/dms_pubrec/itu-r/rec/p/R-REC-P.1812-4-201507-I!!PDF-E.pdf
[21] E. Balevi and R. D. Gitlin, “Unsupervised machine learning in 5G networks for low latency communications,” 2017 IEEE 36th International Performance Computing and Communications Conference, IPCCC 2017, vol. 2018-January, pp. 1–2, 2018.
[22] G. Sobabe, Y. Song, X. Bai, and B. Guo, “A cooperative spectrum sensing algorithm based on unsupervised learning,” 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI 2017), vol. 1, pp. 198–201, 2017.
[23] W. Song, F. Zeng, J. Hu, Z. Wang, and X. Mao, “An Unsupervised-Learning-Based Method for Multi-Hop Wireless Broadcast Relay Selection in Urban Vehicular Networks,” IEEE Vehicular Technology Conference, vol. 2017-June, 2017.
[24] M. S. Parwez, D. B. Rawat, and M. Garuba, “Big data analytics for user-activity analysis and user-anomaly detection in mobile wireless network,” IEEE Transactions on Industrial Informatics, vol. 13, no. 4, pp. 2058–2065, 2017.
[25] L.-C. Wang and S. H. Cheng, “Data-Driven Resource Management for Ultra-Dense Small Cells: An Affinity Propagation Clustering Approach,” IEEE Transactions on Network Science and Engineering, vol. 4697, no. c, pp. 1–1, 2018. [Online]. Available: https://ieeexplore.ieee.org/document/8369148/
[26] U. Challita, L. Dong, and W. Saad, “Deep learning for proactive resource allocation in LTE-U networks,” in European Wireless 2017 - 23rd European Wireless Conference, 2017. [Online]. Available: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8011311
[27] R. Pascanu, T. Mikolov, and Y. Bengio, “On the difficulty of training recurrent neural networks,” Tech. Rep., 2013. [Online]. Available: http://proceedings.mlr.press/v28/pascanu13.pdf?spm=5176.100239.blogcont292826.13.57KVN0&file=pascanu13.pdf
[28] Q. V. Le, N. Jaitly, and G. E. Hinton, “A Simple Way to Initialize Recurrent Networks of Rectified Linear Units,” Tech. Rep., 2015. [Online]. Available: https://arxiv.org/pdf/1504.00941v2.pdf
[29] G. Alnwaimi, S. Vahid, and K. Moessner, “Dynamic heterogeneous learning games for opportunistic access in LTE-based macro/femtocell deployments,” IEEE Transactions on Wireless Communications, vol. 14, no. 4, pp. 2294–2308, 2015.
[30] D. D. Nguyen, H. X. Nguyen, and L. B. White, “Reinforcement Learning with Network-Assisted Feedback for Heterogeneous RAT Selection,” IEEE Transactions on Wireless Communications, vol. 16, no. 9, pp. 6062–6076, 2017.