Conference Paper

A Polynomial Neural Network Approach for the Outdated CQI Feedback Problem in 5G Networks

Article
Machine learning (ML) has been recognized as a feasible and reliable technique for modeling multi-parametric datasets. In real applications, the relationships between sets of inputs and their corresponding outputs vary in complexity, and models with correspondingly different levels of complexity have been developed. The group method of data handling (GMDH) employs a family of inductive algorithms for computer-based mathematical modeling, built on combinations of quadratic and higher-order neurons arranged in a variable number of layers. In this method, a vector of input features is mapped to the expected response through a multistage nonlinear structure. Usually, each neuron of the GMDH is a quadratic partial function. In this paper, the basic structure of the GMDH technique is adapted by changing the partial functions to enhance its ability to model complexity. To accomplish this, popular ML models with proven function-approximation performance, such as support vector regression and random forest, replace the basic polynomial functions in the GMDH. The regression feasibility and validity of the ML-based GMDH models are confirmed by computer simulation.
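The layer-building step that the abstract describes (fit a quadratic partial function for every pair of inputs, then keep the best candidates as inputs to the next layer) can be sketched as follows. This is a minimal NumPy illustration under the standard GMDH formulation, not the paper's implementation; the ML-based variant discussed in the paper would swap `fit_partial` for, e.g., a random-forest or support-vector regressor.

```python
import numpy as np

def quadratic_features(x1, x2):
    # Classic GMDH partial description:
    # y = a0 + a1*x1 + a2*x2 + a3*x1^2 + a4*x2^2 + a5*x1*x2
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

def fit_partial(x1, x2, y):
    # Least-squares fit of one quadratic "neuron"
    A = quadratic_features(x1, x2)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def gmdh_layer(X, y, keep=4):
    # Fit a partial model for every pair of inputs and keep the best `keep`
    # candidates; their outputs become the next layer's inputs
    n = X.shape[1]
    candidates = []
    for i in range(n):
        for j in range(i + 1, n):
            coef = fit_partial(X[:, i], X[:, j], y)
            pred = quadratic_features(X[:, i], X[:, j]) @ coef
            mse = np.mean((y - pred) ** 2)
            candidates.append((mse, pred))
    candidates.sort(key=lambda c: c[0])
    return np.column_stack([p for _, p in candidates[:keep]])

# Toy usage: a nonlinear target over four synthetic features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2 + X[:, 3]
Z = gmdh_layer(X, y)  # outputs of the first layer, fed to the next
```

Stacking further layers (calling `gmdh_layer(Z, y)` and so on, with validation-based stopping) yields the multistage nonlinear structure the abstract refers to.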
Article
The fifth generation (5G) wireless network technology is to be standardized by 2020; its main goals are to improve capacity, reliability, and energy efficiency while reducing latency and massively increasing connection density. An integral part of 5G is the capability to support touch-perception-type real-time communication, enabled by robotics and haptics equipment at the network edge. This requires drastic changes in network architecture, including the core and radio access network (RAN), to achieve end-to-end latency on the order of 1 ms. In this paper, we present a detailed survey of emerging technologies for low-latency communications across three solution domains: the RAN, the core network, and caching. We also present a general overview of 5G cellular networks built on software-defined networking (SDN), network function virtualization (NFV), caching, and mobile edge computing (MEC), capable of meeting latency and other 5G requirements.
Article
WiFi-based positioning systems have recently received considerable attention, mainly because GPS is unavailable in indoor spaces and consumes considerable energy. On the other hand, predominant smartphone OS localization subsystems currently rely on server-side localization processes, allowing the service provider to know the location of a user at all times. In this paper, we propose an innovative algorithm for protecting users from location tracking by the localization service, without hindering the provisioning of fine-grained location updates on a continuous basis. Our proposed Temporal Vector Map (TVM) algorithm allows a user to localize accurately by exploiting a k-Anonymity Bloom (kAB) filter and a bestNeighbors generator of camouflaged localization requests, both of which are shown to be resilient to a variety of privacy attacks. We have evaluated our framework using a real prototype developed in Android and Hadoop HBase, as well as realistic WiFi traces scaling up to several GBs. Our analytical evaluation and experimental study reveal that TVM is not vulnerable to attacks that traditionally compromise k-anonymity protection, and indicate that TVM can offer fine-grained localization with approximately four orders of magnitude less energy and fewer messages than competing approaches.
Article
The delivery of unmanned aerial vehicle (UAV) control information is a critical communication with stringent requirements in terms of reliability and latency. In this context, link adaptation plays an essential role in fulfilling the required performance in terms of decode error probability and delay. Link adaptation is usually based on channel quality indicator (CQI) feedback from the user equipment, which should represent the current state of the channel. However, measurement, scheduling, and processing delays introduce a CQI aging effect, that is, a mismatch between the current channel state and its CQI representation. Using outdated CQI values may lead to the selection of a wrong modulation and coding scheme, with a detrimental effect on performance. This is particularly relevant in ultra reliable and low latency communications (URLLC), where the control of reliability can be negatively impacted, and it is more evident when the channel varies quickly, as in the case of UAVs. This paper analyzes the effects of CQI aging on URLLC, considering transmissions in the finite blocklength regime that characterizes this type of communication. A deep learning approach is investigated to predict the next CQI from past reports, and performance is evaluated in terms of decode error probability and throughput. The results show the benefit of the proposed CQI prediction mechanism, also in comparison with previously proposed methods.
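The prediction task this abstract describes, inferring the next CQI from past reports, can be framed as sliding-window supervised learning. Below is a minimal least-squares autoregressive baseline as a stand-in for the paper's deep model; the window size, trace, and clipping to the 4-bit CQI range [0, 15] are illustrative assumptions, not details from the paper.

```python
import numpy as np

def make_windows(cqi, order=4):
    # Frame the CQI history as supervised pairs: past `order` reports -> next CQI
    X = np.array([cqi[i:i + order] for i in range(len(cqi) - order)])
    y = np.array(cqi[order:])
    return X, y

def fit_ar(X, y):
    # Least-squares autoregressive predictor (a simple stand-in for a deep net)
    A = np.column_stack([np.ones(len(X)), X])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict_next(w, recent):
    # Predict the next CQI and clip to the 4-bit CQI range [0, 15]
    raw = w[0] + np.dot(w[1:], recent)
    return int(np.clip(round(raw), 0, 15))

# Toy CQI trace with an upward trend
cqi = [7, 8, 8, 9, 10, 10, 11, 12, 12, 13]
X, y = make_windows(cqi, order=4)
w = fit_ar(X, y)
nxt = predict_next(w, cqi[-4:])
```

The scheduler would then select the modulation and coding scheme from `nxt` rather than from the stale reported CQI, which is the compensation mechanism the paper evaluates with a deep model in place of this linear fit.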
Conference Paper
In this paper, we present a 5G trace dataset collected from a major Irish mobile operator. The dataset is generated from two mobility patterns (static and car) and two application patterns (video streaming and file download). It is composed of client-side cellular key performance indicators (KPIs) comprising channel-related metrics, context-related metrics, cell-related metrics, and throughput information. These metrics are generated from a well-known non-rooted Android network monitoring application, G-NetTrack Pro. To the best of our knowledge, this is the first publicly available dataset that contains throughput, channel, and context information for 5G networks. To supplement our real-time 5G production network dataset, we also provide a large-scale multi-cell ns-3 simulation framework. The availability of the 5G/mmwave module for the ns-3 network simulator provides an opportunity to improve our understanding of the dynamic reasoning of adaptive clients in 5G multi-cell wireless scenarios. The purpose of our framework is to provide additional information (such as competing metrics for users connected to the same cell), thus exposing otherwise unavailable information about the eNodeB environment and scheduling principle to the end user. Our framework permits other researchers to investigate this interaction through the generation of their own synthetic datasets.
Conference Paper
In Long Term Evolution (LTE) systems, user equipment (UE) provides channel state information (CSI) to the base station in the form of Channel Quality Indicator (CQI) feedback. However, the conventional feedback model assumes that the Signal to Interference plus Noise Ratio (SINR) is constant over a certain period. This assumption is generally not true, since wireless channels change over time. In this paper, a channel feedback model with robust SINR prediction is presented. The proposed model also takes CQI difference statistics into consideration. The simulation results show that the proposed model improves the accuracy of channel feedback information by using extrapolation when the UE moves at low speed; thus, no second-order SINR statistics are required as a priori information.
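The extrapolation idea can be sketched as follows: project the SINR trend forward across the feedback delay, then map the projected SINR to a CQI. This is a minimal illustration, assuming a first-order (linear) trend and measurements one TTI apart; the switching thresholds are illustrative placeholders, not the 3GPP mapping tables or the paper's model.

```python
import numpy as np

def extrapolate_sinr(sinr_prev, sinr_curr, delay_tti):
    # First-order extrapolation of the SINR trend across the feedback delay
    # (the last two measurements are assumed one TTI apart)
    return sinr_curr + (sinr_curr - sinr_prev) * delay_tti

def sinr_to_cqi(sinr_db, thresholds):
    # Map SINR to the highest CQI whose switching threshold is met
    return int(np.searchsorted(thresholds, sinr_db, side="right"))

# 15 hypothetical switching points (dB) for CQI 1..15
thresholds = np.arange(-6, 24, 2.0)

sinr_hat = extrapolate_sinr(4.0, 5.0, delay_tti=4)  # project 4 TTIs ahead
cqi = sinr_to_cqi(sinr_hat, thresholds)
```

A second-order fit over a longer SINR window would capture curvature at the cost of more noise sensitivity, which is the kind of trade-off such feedback models have to balance.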
Conference Paper
In the downlink of Long Term Evolution (LTE) systems, feedback and processing delays cause a mismatch between the current channel state and the Channel Quality Information (CQI) at the base station. This CQI aging leads to inaccurate channel adaptation and can thus severely degrade cell capacity. To compensate for this performance loss, we study several CQI predictors under realistic delay and channel assumptions. Our results on cell throughput show that linear prediction with Stochastic Approximation provides at least the performance gains of the computationally more complex covariance-based linear predictors and Kalman filters. This surprising result points to Stochastic Approximation as a powerful and practical technique for increasing downlink performance with limited channel knowledge.
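The appeal of stochastic approximation here is that the predictor's weights are updated online from each prediction error, so no covariance estimates are needed. The sketch below uses a normalized-LMS update, one common stochastic-approximation form; the paper's exact predictor, step size, and filter order may differ.

```python
import numpy as np

def sa_predict(history, order=3, step=0.5):
    # Stochastic-approximation linear predictor (normalized LMS):
    # predict the next sample from the last `order` samples and adapt the
    # weights from the prediction error, with no covariance estimation
    w = np.zeros(order)
    preds, errs = [], []
    for t in range(order, len(history)):
        x = np.asarray(history[t - order:t], dtype=float)
        p = float(w @ x)          # one-step-ahead prediction
        e = history[t] - p        # prediction error
        w += step * e * x / (x @ x + 1e-9)  # normalized SA step
        preds.append(p)
        errs.append(e)
    return w, preds, errs

# Slowly varying CQI-like trace: the predictor should learn to track it
trace = [10 + 0.1 * t for t in range(60)]
w, preds, errs = sa_predict(trace)
```

Normalizing by `x @ x` keeps the update stable regardless of the signal's scale, which is part of why such predictors are practical with limited channel knowledge.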
Predicting Channel Quality Indicators for 5G Downlink Scheduling in a Deep Learning Approach
H. Yin et al., "Predicting Channel Quality Indicators for 5G Downlink Scheduling in a Deep Learning Approach," arXiv abs/2008.01000, 2020.
CQI Prediction via Hidden Markov Model for Link Adaptation in Ultra Reliable Low Latency Communications
M. R-Mayiami, J. Mohammadi, S. Mandelli, and A. Weber, "CQI Prediction via Hidden Markov Model for Link Adaptation in Ultra Reliable Low Latency Communications," 25th International ITG Workshop on Smart Antennas, 2021, pp. 1-5.