Conference Paper

A Deep Learning Approach for Location Independent Throughput Prediction

Authors: Josef Schmid, Mathias Schneider, Alfred Höß, Björn Schuller

Abstract

Mobile communication has become a part of everyday life and is considered to support reliability and safety in traffic use cases such as conditionally automated driving. Nevertheless, prediction of Quality of Service parameters, particularly throughput, is still a challenging task while on the move. Whereas most approaches in this research field rely on historical data measurements mapped to the corresponding coordinates in the area of interest, this paper proposes a throughput prediction method that follows a location independent approach. In order to compensate for the missing positioning information, which is mainly used for spatial clustering, our model uses low-level mobile network parameters, enhanced by additional feature engineering to retrieve abstracted location information, e.g., surrounding building size and street type. Thus, the major advantage of our method is its applicability to new regions without the prerequisite of conducting an extensive measurement campaign in advance. To this end, we embed analysis results on the underlying temporal relations in the design of different deep neural network types. Finally, model performances are evaluated and compared to traditional models, such as support vector or random forest regression, which were harnessed in previous investigations.
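To make the modelling idea concrete, the following minimal sketch outlines such a location independent predictor: a recurrent network fed with a short history of low-level network parameters plus engineered surroundings features (no GPS coordinates), regressing the throughput of the next time step. The feature names, window length, and layer sizes are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np
import tensorflow as tf

# Assumed, illustrative feature set: low-level network parameters plus
# engineered, location-abstracting features (no GPS coordinates).
FEATURES = ["rsrp", "rsrq", "sinr", "building_size", "street_type", "throughput"]
LAG = 10  # number of past time steps fed to the network (assumption)

def make_windows(series, lag=LAG):
    """Turn a (T, n_features) series into (samples, lag, n_features) windows
    with the next-step throughput (last column) as the regression target."""
    X, y = [], []
    for t in range(len(series) - lag):
        X.append(series[t:t + lag])
        y.append(series[t + lag, -1])
    return np.array(X), np.array(y)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(LAG, len(FEATURES))),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted throughput for the next step
])
model.compile(optimizer="adam", loss="mse")

# Random placeholder data standing in for drive-test measurements.
data = np.random.rand(1000, len(FEATURES)).astype("float32")
X, y = make_windows(data)
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```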


... However, the throughput of cellular networks depends on a complex interaction between lower layer parameters, such as signal strength, frequency of handovers, User Equipment (UE) speed, location, neighbouring environment, load of the connected base station, etc., as well as radio resource allocation algorithms, and upper layer parameters, such as the historical network throughput [4,14]. To capture the effect of all these network features on the throughput prediction algorithm, several works have suggested the use of Machine Learning (ML) and Deep Learning (DL) based prediction models [4,8,10,13,14,19,20,21]. ...
... Long Short Term Memory (LSTM), a variant of recurrent neural networks, has been proposed in [21] for a location independent throughput prediction approach. The authors in [19] have also explored throughput prediction using LSTM, along with other ML algorithms, viz. ...
... Furthermore, it has also been reported that the ability of GNetTrackPro to record all the metrics is different in different mobile phones and depends on the chipset manufacturer [13]. Existing works [4,8,10,13,14,19,20,21] have not accounted for modelling and subsequently cancelling the effect of this measurement noise. In this work, we hypothesize that if the error in measuring the throughput can be considered and cancelled, then throughput prediction can be done using considerably simpler models than the complicated ML and DL models discussed above. ...
Preprint
Full-text available
Throughput prediction is one of the primary preconditions for the uninterrupted operation of several network-aware mobile applications, namely video streaming. Recent works have advocated using Machine Learning (ML) and Deep Learning (DL) for cellular network throughput prediction. In contrast, this work proposes a simple, computationally lightweight solution which models the future throughput as a multiple linear regression of several present network parameters and the present throughput. It then feeds the variance of the prediction error and of the measurement error, which is inherent in any measurement setup but unaccounted for in existing works, to a Kalman filter-based prediction-correction approach to obtain optimal estimates of the future throughput. Extensive experiments across seven publicly available 5G throughput datasets for different prediction window lengths show that the proposed method outperforms the baseline ML and DL algorithms by delivering more accurate results within a shorter timeframe for inference and retraining. Furthermore, in comparison to its ML and DL counterparts, the proposed throughput prediction method also delivers higher QoE to both streaming and live video users when used in conjunction with popular Model Predictive Control (MPC) based adaptive bitrate streaming algorithms.
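As a rough illustration of the prediction-correction idea described in this abstract, the sketch below wraps a regression forecast in a scalar Kalman update, where Q and R stand for the variances of the prediction error and the measurement error. The scalar state model and all variable names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def kalman_corrected_forecast(measured, regression_forecast, Q, R):
    """Scalar Kalman prediction-correction around a regression forecast.
    Q: variance of the regression (prediction) error,
    R: variance of the throughput measurement error."""
    x, P = measured[0], R  # initial state and uncertainty
    estimates = []
    for t in range(1, len(measured)):
        # Prediction step: the regression output serves as the prior.
        x_pred = regression_forecast[t]
        P_pred = P + Q
        # Correction step: blend the prior with the noisy measurement.
        K = P_pred / (P_pred + R)
        x = x_pred + K * (measured[t] - x_pred)
        P = (1.0 - K) * P_pred
        estimates.append(x)
    return np.array(estimates)

# Toy usage: a noisy throughput trace and an (imperfect) regression forecast.
true_tp = 50 + 5 * np.sin(np.linspace(0, 6, 200))
measured = true_tp + np.random.normal(0, 2.0, 200)
forecast = true_tp + np.random.normal(0, 1.0, 200)
est = kalman_corrected_forecast(measured, forecast, Q=1.0, R=4.0)
```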
... They indicate that SVR with different parameters, such as throughput and RSSI, outperforms a single-parameter throughput/RSSI model. Schmid et al. [26] employed a two-layer LSTM. They compared the performance of the LSTM with RF and feed-forward neural networks. ...
... However, the data sets used in existing studies are collected from 3G and 4G networks [29], [30], [24]. There are also a few studies on throughput prediction in cellular networks, but the data sets used in these studies are likewise collected from 3G, 4G, and LTE networks [26], [28], [31]. Recently, Narayanan et al. [27] conducted a study to collect a 5G data set. ...
Conference Paper
5G technology has ushered in a new era of cellular networks characterized by unprecedented speeds and connectivity. However, the dynamic and complex nature of these networks presents significant challenges in network management and Quality of Service (QoS) assurance. In this context, accurate throughput prediction is essential for optimizing network resources, improving traffic management, and enhancing user experiences. This study presents novel deep learning approaches utilizing Long Short-Term Memory (LSTM), Bi-directional LSTM (BiLSTM), and Artificial Neural Networks (ANN) to predict throughput. The methodology achieves exceptional performance, surpassing existing methods. The motive behind leveraging deep learning algorithms is their exceptional ability to capture temporal dependencies and patterns within time-series data, which are intrinsic to network traffic. By employing these models, we can forecast network throughput with high precision, facilitating proactive resource allocation and congestion avoidance. Our approach maintains high QoS and supports cost efficiency and adaptive network maintenance. The adaptability and learning capabilities of the BiLSTM and LSTM models make them well-suited for the ever-evolving 5G landscape, where user demands and network conditions fluctuate rapidly. This study demonstrates the technical feasibility and benefits of using BiLSTM and LSTM for overall throughput prediction and highlights the broader implications for the future of 5G network management and optimization.
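Architecturally, the BiLSTM variant mentioned above differs from the plain LSTM only by a bidirectional wrapper around the recurrent layer; a hedged Keras-style sketch, with window shape and layer sizes assumed:

```python
import tensorflow as tf

LAG, N_FEATURES = 20, 8  # assumed window length and feature count

bilstm = tf.keras.Sequential([
    # The bidirectional wrapper processes each throughput window in both directions.
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64), input_shape=(LAG, N_FEATURES)),
    tf.keras.layers.Dense(1),  # next-step throughput
])
bilstm.compile(optimizer="adam", loss="mse")
```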
... Their study explored various machine learning model types, including RF, SVM, and Neural Networks (NN), similar to the approach in [25]. Similarly, Josef Schmid et al. [29] presented deep learning models for throughput prediction using RNN. Deep learning techniques for throughput prediction are also discussed by the authors in [30], [31]. ...
... LSTM is used in the literature for throughput prediction [48], [29], [49], [16]. The choice of LSTM is mainly based on the time series data used for this application. ...
Article
Full-text available
The O-RAN architectural framework enables the application of AI/ML techniques for traffic steering and load balancing. Indeed, an effective steering technique is crucial to avoiding ping-pong and radio link failure. Limited observability and network complexity make it challenging to understand individual user needs. Consequently, traffic steering methods struggle to make optimal decisions, resulting in performance degradation due to unnecessary handovers. Motivated by this, we present an xApp for the RAN intelligence controller (RIC) for user equipment (UE) steering to ensure an even load distribution among cells while maintaining an acceptable throughput level. We propose an ML-aided traffic steering technique. The proposed method comprises three phases: UE classification, downlink (DL) throughput prediction, and a traffic steering (TS) technique. A support vector machine (SVM) is used for UE classification, followed by cell throughput prediction using ensemble Long Short-Term Memory (E-LSTM). The TS algorithm uses the information from the ML models to initiate handovers (HO). The SVM model identifies UEs with low throughput, while the E-LSTM predicts cell DL throughput to provide information about potential target cells for these UEs. Experimental results demonstrate that the proposed method achieves an even load distribution of UEs in 60.25% of the cells with few handovers, while also significantly improving UE throughput.
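The sketch below outlines the described decision flow in strongly simplified form: an SVM flags low-throughput UEs, and per-cell throughput forecasts (here a plain dictionary standing in for the E-LSTM output) drive the handover decision. Thresholds, feature choices, and the forecast stub are assumptions, not the xApp's actual logic.

```python
import numpy as np
from sklearn.svm import SVC

# Assumed per-UE features, e.g. [RSRP, RSRQ, current DL throughput] (toy data).
X_train = np.random.rand(200, 3)
y_train = (X_train[:, 2] < 0.3).astype(int)  # 1 = low-throughput UE (toy label)
ue_classifier = SVC(kernel="rbf").fit(X_train, y_train)

def steer(ue_features, serving_cell, predicted_cell_tp, margin=1.2):
    """Hand over a UE flagged as low-throughput if some neighbour cell is
    predicted to offer clearly higher DL throughput than the serving cell."""
    if ue_classifier.predict([ue_features])[0] == 1:
        target = max(predicted_cell_tp, key=predicted_cell_tp.get)
        if target != serving_cell and \
                predicted_cell_tp[target] > margin * predicted_cell_tp[serving_cell]:
            return target  # handover target
    return serving_cell    # keep the current cell

# predicted_cell_tp would come from the cell throughput forecaster (e.g. E-LSTM).
print(steer([0.5, 0.4, 0.1], "cellA", {"cellA": 10.0, "cellB": 18.0}))
```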
... However, the DL models require a large amount of diverse training data. Various recent works [6,24,25,31] have explored deep learning models for cellular throughput prediction. In [6,25], the authors have compared the performance of ML and DL-based throughput prediction algorithms. ...
... A location-independent throughput prediction approach using LSTM Recurrent Neural Networks (RNN) is proposed in [31]. It shows that the selection of the model parameters, like the 'lag' of the LSTM, significantly affects the algorithm's performance. ...
Preprint
Recent advancements in ultra-high-speed cellular networks have driven the development of various next-generation applications, which demand precise monitoring of the underlying network conditions to adapt their configurations for the best user experience. Downlink throughput prediction using machine or deep learning techniques is one of the significant aspects in this direction. However, existing works are limited in scope, as the prediction models are primarily trained using data from a closed network under a fixed cellular operator. Precisely, these models are not transferable across networks of different operators because of the vast variability in the underlying network configurations across operators, device specifications, and devices' responsiveness towards an operator's network. With the help of real network data, we show in this paper that the existing models become invalid when trained with the data from one network and then applied to a different network. Finally, we propose FedPut, utilizing Federated Learning (FL) to develop the throughput prediction model. We develop a novel approach called Cross-Technology FL (CTFL) that allows distributed model training over the user equipment by capturing throughput variations based on devices' sensitivity towards the corresponding network configurations. Rigorous evaluations show that FedPut outperforms various standard baseline algorithms. Subsequently, we also analyze the performance of FedPut for a network-aware ABR video streaming application. Such application-specific analysis also shows that FedPut reduces the variation in the rebuffering time of streaming videos, thereby improving the QoE compared to other baselines.
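Federated training of such a throughput predictor ultimately relies on aggregating locally trained model weights; the sketch below shows plain FedAvg-style weighted averaging as a baseline illustration and is not the Cross-Technology FL scheme proposed in the preprint.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client model weights (lists of numpy arrays),
    weighted by each client's number of local samples."""
    total = float(sum(client_sizes))
    averaged = []
    for layer_idx in range(len(client_weights[0])):
        layer = sum(w[layer_idx] * (n / total)
                    for w, n in zip(client_weights, client_sizes))
        averaged.append(layer)
    return averaged

# Toy usage: three clients, each holding weights for a two-layer model.
clients = [[np.random.rand(4, 2), np.random.rand(2)] for _ in range(3)]
sizes = [120, 80, 200]  # local dataset sizes (assumption)
global_weights = fed_avg(clients, sizes)
```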
... In the context of NQP prediction, this could be used to perform a forecast at a location where no data was previously recorded. Since such a location independent prediction is not possible with LS methods and has not been studied in depth for LB based approaches [40], it should be verified whether these models can be used for location independent prediction. ...
... As shown by Schmid, Schneider, Höß, et al. in [40], an intensive investigation of the temporal correlation between throughput and other network parameters can be used as part of the feature selection process. The benefits of this process are also described by Koprinska, Rana, and Agelidis [199]. ...
Thesis
Full-text available
Network communication has become a part of everyday life, and the interconnection among devices and people will increase even more in the future. A new area where this development is on the rise is the field of connected vehicles. It is especially useful for automated vehicles in order to connect the vehicles with other road users or cloud services. In particular for the latter, it is beneficial to establish a mobile network connection, as it is already widely used and no additional infrastructure is needed. With the use of network communication, certain requirements come along. One of them is the reliability of the connection: certain Quality of Service (QoS) parameters need to be met. In case of degraded QoS, according to the SAE level specification, a downgrade of the automated system can be required, which may lead to a takeover maneuver in which control is returned to the driver. Since such a handover takes time, prediction is necessary to forecast the network quality for the next few seconds. Prediction of QoS parameters, especially in terms of Throughput (TP) and Latency (LA), is still a challenging task, as the wireless transmission properties of a moving mobile network connection are subject to fluctuation. In this thesis, a new approach for predicting Network Quality Parameters (NQPs) on Transmission Control Protocol (TCP) level is presented. It combines knowledge of the environment with the low-level parameters of the mobile network. The aim of this work is to perform a comprehensive study of various models, including both Location Smoothing (LS) grid maps and Learning Based (LB) regression models. Moreover, the location independence of each model as well as its suitability for automated driving is evaluated.
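To contrast the two model families compared in the thesis, the sketch below places a trivial Location Smoothing grid map (mean throughput per spatial cell) next to a Learning Based regressor trained on network parameters only; grid size, features, and the toy data are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict
from sklearn.ensemble import RandomForestRegressor

GRID = 0.001  # assumed grid cell size in degrees (roughly 100 m)

def build_grid_map(lats, lons, throughputs):
    """LS approach: average historical throughput per spatial grid cell."""
    cells = defaultdict(list)
    for lat, lon, tp in zip(lats, lons, throughputs):
        cells[(round(lat / GRID), round(lon / GRID))].append(tp)
    return {cell: float(np.mean(v)) for cell, v in cells.items()}

def grid_predict(grid_map, lat, lon, default=0.0):
    """Look up the smoothed throughput of the cell the position falls into."""
    return grid_map.get((round(lat / GRID), round(lon / GRID)), default)

# LB approach: regression on network parameters only, no coordinates required,
# hence applicable in regions without a prior measurement campaign.
X = np.random.rand(500, 4)     # e.g. RSRP, RSRQ, SINR, speed (assumed features)
y = np.random.rand(500) * 100  # throughput in Mbit/s (toy data)
lb_model = RandomForestRegressor(n_estimators=50).fit(X, y)
print(lb_model.predict(X[:1]))
```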
... Therefore, vehicular speed constitutes a critical parameter for the accuracy of the QoS prediction. Several measurement campaigns have been conducted with vehicles traveling up to 100 km/h (e.g., [6], [14]). While some authors have investigated the impact of mobility on QoS metrics such as throughput (see, e.g., [8]), the influence of vehicular velocity and the sampling frequency on the feature distributions is still largely unknown. ...
... [11], [18]) use different traffic patterns and protocols (Transmission Control Protocol (TCP), User Datagram Protocol (UDP)) to account for different use cases, the effects of Vehicle-to-Network (V2N) and Vehicle-to-Network-to-Vehicle (V2N2V) communication have not yet been studied together in one measurement campaign. Measurements that capture more dynamic and complex environments can provide a more in-depth understanding of the applicability of ML and its capability to generalize across different radio environments, compared to smaller-scale testbeds, as in [6], [14]. As there is a relative lack of publicly available data, studies considering QoS prediction with multiple UEs, e.g., [19], [20], typically resort to simulations. ...
... At a high level, the existing research conducted between 2019 and 2022, as presented in Table 1, demonstrates a serious need for a location-independent approach in HAR and/or AAL, as great strides have been made towards this end. In particular, [Schmid et al., 2019] proposed a throughput prediction method that could be applied to new regions without the prerequisite of conducting an extensive measurement campaign in advance. This is a progressive move considering the heterogeneity of ICT infrastructure, configurations, and system components, amongst others. ...
... SVR, Linear Regression (LR), and Random Forest Regressor (RFR) methods are also presented in [17]. Deep neural network approaches such as LSTM are discussed in [18], [19]. ...
... In [31] and [32] the authors present a measurement campaign and evaluate several ML models regarding their performance for downlink (DL) throughput prediction. One limitation is that the authors measure in a public network and thus cannot consider features such as the cell load. ...
Article
Full-text available
As cellular networks evolve towards the 6th generation, machine learning is seen as a key enabling technology to improve the capabilities of the network. Machine learning provides a methodology for predictive systems, which, in turn, can make networks become proactive. This proactive behavior of the network can be leveraged to sustain, for example, a specific quality of service requirement. With predictive quality of service, a wide variety of new use cases, both safety- and entertainment-related, are emerging, especially in the automotive sector. Therefore, in this work, we consider maximum throughput prediction enhancing, for example, streaming or high-definition mapping applications. We discuss the entire machine learning workflow highlighting less regarded aspects such as the detailed sampling procedures, the in-depth analysis of the dataset characteristics, the effects of splits in the provided results, and the data availability. Reliable machine learning models need to face a lot of challenges during their lifecycle. We highlight how confidence can be built on machine learning technologies by better understanding the underlying characteristics of the collected data. We discuss feature engineering and the effects of different splits for the training processes, showcasing that random splits might overestimate performance by more than twofold. Moreover, we investigate diverse sets of input features, where network information proved to be most effective, cutting the error by half. Part of our contribution is the validation of multiple machine learning models within diverse scenarios. We also use explainable AI to show that machine learning can learn underlying principles of wireless networks without being explicitly programmed. Our data is collected from a deployed network that was under full control of the measurement team and covered different vehicular scenarios and radio environments.
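The effect of the splitting strategy discussed above can be illustrated in a few lines: the same regressor is evaluated once with a shuffled random split and once with a chronological split on a synthetic, autocorrelated trace. The toy data and model are assumptions purely for demonstrating the two evaluation protocols, not the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic, strongly autocorrelated "throughput" trace with noisy features.
rng = np.random.default_rng(0)
t = np.arange(2000)
tp = 50 + 20 * np.sin(t / 50) + rng.normal(0, 2, len(t))
X = np.column_stack([tp + rng.normal(0, 1, len(t)) for _ in range(4)])

def mae_for_split(shuffle):
    if shuffle:  # random split: training and test samples interleave in time
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, tp, test_size=0.3, shuffle=True, random_state=0)
    else:        # chronological split: test on the unseen, later part of the drive
        cut = int(0.7 * len(t))
        X_tr, X_te, y_tr, y_te = X[:cut], X[cut:], tp[:cut], tp[cut:]
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_tr, y_tr)
    return mean_absolute_error(y_te, model.predict(X_te))

print("random split MAE:     ", mae_for_split(True))
print("time-based split MAE: ", mae_for_split(False))
```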
... In [31] and [32] the authors present a measurement campaign and evaluate several ML models regarding their performance for Downlink (DL) throughput prediction. However, the authors measure in a public network and thus cannot consider features such as the cell load. ...
Preprint
Full-text available
As cellular networks evolve towards the 6th Generation (6G), Machine Learning (ML) is seen as a key enabling technology to improve the capabilities of the network. ML provides a methodology for predictive systems, which, in turn, can make networks become proactive. This proactive behavior of the network can be leveraged to sustain, for example, a specific Quality of Service (QoS) requirement. With predictive Quality of Service (pQoS), a wide variety of new use cases, both safety- and entertainment-related, are emerging, especially in the automotive sector. Therefore, in this work, we consider maximum throughput prediction enhancing, for example, streaming or HD mapping applications. We discuss the entire ML workflow highlighting less regarded aspects such as the detailed sampling procedures, the in-depth analysis of the dataset characteristics, the effects of splits in the provided results, and the data availability. Reliable ML models need to face a lot of challenges during their lifecycle. We highlight how confidence can be built on ML technologies by better understanding the underlying characteristics of the collected data. We discuss feature engineering and the effects of different splits for the training processes, showcasing that random splits might overestimate performance by more than twofold. Moreover, we investigate diverse sets of input features, where network information proved to be most effective, cutting the error by half. Part of our contribution is the validation of multiple ML models within diverse scenarios. We also use Explainable AI (XAI) to show that ML can learn underlying principles of wireless networks without being explicitly programmed. Our data is collected from a deployed network that was under full control of the measurement team and covered different vehicular scenarios and radio environments.
... Instead of the tabular data regression and classification solutions covered so far, in [5] the authors propose to predict throughput using both historical data and influential factors within deep recurrent neural networks, without considering the spatial dependencies among cells. Rather than estimating the throughput directly, the Signal-to-Interference-and-Noise Ratio is predicted using auto-regressive artificial neural networks in [6] to considerably increase the overall throughput within 5G networks. ...
Preprint
This study presents a general machine learning framework to estimate the traffic-measurement-level experience rate at given throughput values in the form of a Key Performance Indicator for the cells on base stations across various cities, using busy-hour counter data, and several technical parameters together with the network topology. Relying on feature engineering techniques, scores of additional predictors are proposed to enhance the effects of raw correlated counter values over the corresponding targets, and to represent the underlying interactions among groups of cells within nearby spatial locations effectively. An end-to-end regression modeling is applied on the transformed data, with results presented on unseen cities of varying sizes.
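One common way to "represent the underlying interactions among groups of cells within nearby spatial locations" is to aggregate counter values over each cell's spatial neighbourhood and append them as extra predictors; the pandas sketch below shows this under assumed, illustrative column names rather than the framework's actual features.

```python
import numpy as np
import pandas as pd

# Assumed per-cell busy-hour counter table (column names are illustrative).
df = pd.DataFrame({
    "cell_id": range(100),
    "lat": np.random.uniform(40.0, 40.1, 100),
    "lon": np.random.uniform(29.0, 29.1, 100),
    "prb_util": np.random.rand(100),          # physical resource block utilisation
    "avg_users": np.random.poisson(30, 100),  # average connected users
})

# Coarse spatial grouping: round the coordinates into ~1 km bins and attach the
# neighbourhood averages as additional engineered predictors.
df["grid"] = list(zip(df["lat"].round(2), df["lon"].round(2)))
neigh = df.groupby("grid")[["prb_util", "avg_users"]].transform("mean")
df["neigh_prb_util"] = neigh["prb_util"]
df["neigh_avg_users"] = neigh["avg_users"]

# The enriched frame can then feed an end-to-end regression model for the KPI.
print(df.head())
```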
... This includes control of traffic lights and a trajectory or route selection for the automated fleet. Secondary tasks, such as networking and computation problems, are tackled, comprising resource management for V2X communication [8] and mobility-aware edge computing offloading [9]. ...
Chapter
Full-text available
This introductory article opens the section on "Applications of AI in Transportation Industry", giving a broad overview of the latest AI technologies in the transportation industry, with an additional focus on the developments enabling automated Mobility-as-a-Service (MaaS). It presents future capabilities and opportunities for AI, together with covering state-of-the-art Intelligent Transport Systems (ITS) trends, including advancements on the vehicle, infrastructure, and management level. Finally, the article outlines the two papers included in this section, highlighting concepts and challenges of using AI for automated, optimised, and individual passenger transport.
... In order to increase the performance and reliability of the UAV-to-server connection, QoS modeling and prediction need to be investigated in future work. Thereby, the experience in QoS prediction gained in previous projects regarding vehicle-to-server communication in the automotive sector will be used to accomplish these enhancements [22], [23]. ...
Preprint
Within this paper, requirements for server to Unmanned Aerial Vehicle (UAV) communication over the mobile network are evaluated. It is examined whether a reliable cellular network communication can be accomplished with the use of current Long Term Evolution (LTE) network technologies, or if the 5th Generation (5G) network is indispensable. Moreover, enhancements for improving the channel quality on the UAV side are evaluated. Therefore, parameters like data rate, latency, message size and reliability for Command and Control (C&C) and application data are determined. Furthermore, possible improvements regarding interference mitigation in the up- and downlink of the UAV are discussed. For this purpose, results from publications of the 3rd Generation Partnership Project (3GPP) and from surveys regarding UAVs and mobile networks are presented. This work shows that, for C&C use cases like steering to waypoints, the latency and the data rate of the LTE network are sufficient, but problems can occur in terms of reliability. Furthermore, the usability of standard protocols for computer networks like the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) is discussed. There are also multimodal implementations of these protocols, like MultiPath TCP (MPTCP), which can be adapted into the UAV's communication system in order to increase reliability through multiple communication channels. Finally, applications of Long Range (LoRa) direct communication in terms of supporting the cellular network of the UAV are considered.
... By exploiting the telemetry data collected from the router, we can predict the bandwidth of the connection and decide whether it is reliable to send the video stream to the edge. Several research works show the possibility of predicting the current connection bandwidth based on parameters like RSRP, RSRQ, and historic throughput [7], [8], [9]. In the case that the bandwidth is high enough, the stream can be transmitted to the edge premises for faster inference; otherwise the edge will decide to push the inference onto the device itself (step 7 in Figure 3), resulting in slower inference time. ...
Preprint
Full-text available
As the data produced by IoT applications continues to explode, there is a growing need to bring computing power closer to the source of the data to meet the response-time, power-consumption and cost goals of performance-critical applications like the Industrial Internet of Things (IIoT), Automated Driving, Medical Imaging or Surveillance, among others. This paper proposes an FPGA-based data collection and utilization framework that allows runtime platform and application data to be sent to an edge and cloud system via data collection agents running close to the platform. The agents are connected to a cloud system able to train AI models to improve the overall energy efficiency of an AI application executed on an FPGA-based edge platform. In the implementation part, we show that it is feasible to collect relevant data from an FPGA platform, transmit the data to a cloud system for processing, and receive feedback actions to execute an edge AI application in an energy-efficient manner. As future work, we foresee the possibility to train, deploy and continuously improve a base model able to efficiently adapt the execution of edge applications.
... Such applications depend heavily on the quality of the audio streams of speech to correctly predict the emotions. In streaming applications, there are a variety of factors that could result in lower quality of the received data, like a lower data rate and packet loss [1], or varying throughput in mobile communication [2]. In such applications, any issue of this kind would cause a drop in the input streams, which might lead to severe degradation of the application's performance. ...
Preprint
In applications that use emotion recognition via speech, frame-loss can be a severe issue: in manifold applications the audio stream loses some data frames for a variety of reasons, such as low bandwidth. In this contribution, we investigate for the first time the effects of frame-loss on the performance of emotion recognition via speech. Reproducible, extensive experiments are reported on the popular RECOLA corpus using a state-of-the-art end-to-end deep neural network, which mainly consists of convolution blocks and recurrent layers. A simple environment based on a Markov Chain model is used to model the loss mechanism based on two main parameters. We explore matched, mismatched, and multi-condition training settings. As one would expect, the matched setting yields the best performance, while the mismatched setting yields the lowest. Furthermore, frame-loss as a data augmentation technique is introduced as a general-purpose strategy to overcome the effects of frame-loss. It can be used during training, and we observed it to produce models that are more robust against frame-loss in run-time environments.
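The "simple environment based on a Markov Chain model ... based on two main parameters" suggests a two-state, Gilbert-style loss process; the sketch below is written under that assumption, with p the probability of entering the loss state and q the probability of leaving it.

```python
import numpy as np

def markov_frame_loss(n_frames, p, q, rng=None):
    """Two-state Markov loss model: state 0 = frame received, state 1 = frame
    lost. p = P(0 -> 1), q = P(1 -> 0). Returns a boolean mask of lost frames."""
    rng = rng or np.random.default_rng()
    lost = np.zeros(n_frames, dtype=bool)
    state = 0
    for i in range(n_frames):
        if state == 0:
            state = 1 if rng.random() < p else 0
        else:
            state = 0 if rng.random() < q else 1
        lost[i] = (state == 1)
    return lost

# Example: bursty loss pattern, usable as data augmentation on audio frames.
mask = markov_frame_loss(1000, p=0.05, q=0.3)
print("loss rate:", mask.mean())
```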
Article
Throughput prediction is one of the primary preconditions for the uninterrupted operation of several network-aware mobile applications, namely video streaming. Recent works have advocated using Machine Learning (ML) and Deep Learning (DL) for cellular network throughput prediction. In contrast, this work proposes a simple, computationally lightweight solution which models the future throughput as a multiple linear regression of several present network parameters and the present throughput. It then feeds the variance of the prediction error and of the measurement error, which is inherent in any measurement setup but unaccounted for in existing works, to a Kalman filter-based prediction-correction approach to obtain optimal estimates of the future throughput. Extensive experiments across seven publicly available 5G throughput datasets for different prediction window lengths show that the proposed method outperforms the baseline ML and DL algorithms by delivering more accurate results within a shorter timeframe for inference and retraining. Furthermore, in comparison to its ML and DL counterparts, the proposed throughput prediction method also delivers higher QoE to both streaming and live video users when used in conjunction with popular Model Predictive Control (MPC) based adaptive bitrate streaming algorithms.
Article
Throughput prediction is crucial for reducing latency in time-critical services. We study the attention-based LSTM model for predicting future throughput. First, we collected the TCP logs and throughputs in LTE networks and transformed them using CUBIC and BBR trace log data. Then, we use the sliding window method to create input data for the prediction model. Finally, we trained the LSTM model with an attention mechanism. In the experiment, the proposed method shows lower normalized RMSEs than the other method.
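One common way to realise such an attention-based LSTM is dot-product attention over the per-step LSTM outputs; the Keras-style sketch below is an assumed architecture for illustration, not the authors' exact model.

```python
import tensorflow as tf

WINDOW, N_FEATURES = 30, 1  # assumed sliding-window length and feature count

inputs = tf.keras.Input(shape=(WINDOW, N_FEATURES))
# The LSTM returns the full sequence so that attention can weight every time step.
seq = tf.keras.layers.LSTM(64, return_sequences=True)(inputs)
# Dot-product self-attention over the LSTM outputs, followed by temporal pooling.
att = tf.keras.layers.Attention()([seq, seq])
pooled = tf.keras.layers.GlobalAveragePooling1D()(att)
outputs = tf.keras.layers.Dense(1)(pooled)  # next-step throughput

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
```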
Conference Paper
Within this paper, requirements for server to Unmanned Aerial Vehicle (UAV) communication over the mobile network are evaluated. It is examined whether a reliable cellular network communication can be accomplished with the use of current Long Term Evolution (LTE) network technologies, or if the 5th Generation (5G) network is indispensable. Moreover, enhancements on improving the channel quality on the UAV-side are evaluated. Therefore, parameters like data rate, latency, message size and reliability for Command and Control (C&C) and application data are determined. Furthermore, possible improvements regarding interference mitigation in the up- and downlink of the UAV are discussed. For this purpose, results from publications of the 3rd Generation Partnership Project (3GPP) and from surveys regarding UAVs and mobile networks are presented. This work shows that, for C&C use cases like steering to waypoints, the latency and the data rate of the LTE network are sufficient, but problems can occur in terms of reliability. Furthermore, the usability of standard protocols for computer networks like the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) is discussed. There are also multimodal implementations of these protocols like MultiPath TCP (MPTCP) which can be adapted into the UAV's communication system in order to increase reliability through multiple communication channels. Finally, applications for Long Range (LoRa) direct communication in terms of supporting the cellular network of the UAV are considered.
Chapter
Teleoperated Driving, where a human driver controls a vehicle remotely, has the potential to be a key technology for the introduction of autonomous vehicles in everyday traffic scenarios. Already existing infrastructure, such as cellular networks, has to be used to allow for an efficient use of such a system. Remote control is a sensitive subject and places high demands on security and, given that individuals are driven remotely, also on privacy. To take care of security and privacy, this paper introduces the minimal set of vehicle features that are required for Teleoperated Driving. It also discusses a way of setting up a secure connection with valid and trusted remote operators, who can be selected taking into account various parameters. The involved parties are explained in detail. To allow for traceability, e.g. in case of an accident, while keeping a high level of privacy, a logging concept is introduced. Overall, this paper presents an initial approach to building a teleoperated system with security and privacy as key factors, which can be used to build real-world systems.
Conference Paper
Full-text available
High quality video streaming is increasingly popular among mobile users, especially with the rise of high speed LTE networks. But despite the high network capacity of LTE, streaming videos may suffer from disruptions, since the quality of the video depends on the network bandwidth, which in turn depends on the location of the user with respect to their cell tower, crowd levels, and objects like buildings and trees. Maintaining a good video playback experience becomes even more challenging if the user is moving fast in a vehicle, the location is changing rapidly and the available bandwidth fluctuates. In this paper we introduce GeoStream, a video streaming system that relies on the use of geostatistics to analyze spatio-temporal bandwidth. Data measured from users' streaming videos while they are commuting is collected in order to predict future bandwidth availability in unknown locations. Our approach investigates and leverages the relationship between the separation distance of sample bandwidth points, the time they were captured, and the semivariance, expressed by a variogram plot, to finally predict the future bandwidth at unknown locations. Using the datasets from GTube, our experimental results show improved performance.
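The central quantity in the geostatistical approach above is the empirical semivariance as a function of separation distance, i.e. the variogram; a minimal numpy sketch, assuming planar coordinates and simple distance binning (the time dimension used by GeoStream is omitted):

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Empirical semivariance gamma(h): mean of 0.5*(z_i - z_j)^2 over all
    point pairs whose separation distance falls into each distance bin."""
    coords, values = np.asarray(coords), np.asarray(values)
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    sqdiff = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)  # count each pair once
    dist, sqdiff = dist[iu], sqdiff[iu]
    gamma = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (dist >= lo) & (dist < hi)
        gamma.append(sqdiff[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Toy usage: bandwidth samples along a commute, binned every 100 m.
coords = np.random.rand(200, 2) * 1000  # metres (assumption)
bandwidth = np.random.rand(200) * 50    # Mbit/s (toy values)
print(empirical_variogram(coords, bandwidth, np.arange(0, 600, 100)))
```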
Conference Paper
Nowadays, on-board sensor data is primarily used to detect nascent threats during automated driving. Since the range of this data is locally restricted, centralized server architectures are taken into consideration to alleviate challenges caused by highly automated driving at higher speeds. Therefore, a server accumulates this sensor data and provides aggregated information about the traffic situation utilizing mobile network-based vehicle to server communication. To schedule communication traffic on this fluctuating channel reliably, various approaches to throughput prediction have been pursued. On the one hand, there are models based on aggregation depending on the position, e.g. connectivity maps. On the other hand, there are traditional machine learning approaches, i.a. Support Vector Regression. This work implements the latter, including OSM-based feature engineering, and conducts a comprehensive comparison of the performance of these models on a uniform dataset.
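A compact version of such a learning-based predictor (Support Vector Regression on network parameters extended by OSM-derived surroundings features) could look as follows; the feature list and hyperparameters are assumptions, not the settings evaluated in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Assumed feature vector: network measurements plus OSM-derived context,
# e.g. [RSRP, RSRQ, SINR, speed, building_density, street_type_code].
X = np.random.rand(1000, 6)
y = np.random.rand(1000) * 100  # throughput in Mbit/s (toy target)

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
svr.fit(X, y)
print(svr.predict(X[:3]))
```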