Conference Paper

A Comparison of AI-Based Throughput Prediction for Cellular Vehicle-To-Server Communication


Abstract

Nowadays, on-board sensor data is primarily used to detect nascent threats during automated driving. Since the range of this data is locally restricted, centralized server architectures are considered to alleviate the challenges of highly automated driving at higher speeds. A server therefore accumulates this sensor data and provides aggregated information about the traffic situation via mobile-network-based vehicle-to-server communication. To schedule communication traffic reliably on this fluctuating channel, various approaches to throughput prediction have been pursued. On the one hand, there are models based on position-dependent aggregation, e.g., connectivity maps. On the other hand, there are traditional machine learning approaches such as Support Vector Regression. This work implements the latter, including OSM-based feature engineering, and conducts a comprehensive comparison of the performance of these models on a uniform dataset.
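To make the comparison concrete, a minimal sketch of an SVR-based throughput predictor on engineered features is given below. It is not the paper's actual pipeline; the radio and OSM-derived features, the synthetic data, and the hyperparameters are illustrative assumptions only.

```python
# Hypothetical sketch: SVR-based downlink throughput prediction on engineered
# features (radio metrics plus OSM-derived context). Feature set and data are
# illustrative, not the paper's dataset.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(-120, -70, n),   # RSRP [dBm]
    rng.uniform(-20, -3, n),     # RSRQ [dB]
    rng.integers(0, 16, n),      # CQI
    rng.integers(0, 4, n),       # OSM street type (encoded)
    rng.uniform(0, 5000, n),     # surrounding building area [m^2]
])
# Synthetic throughput target [Mbit/s], loosely tied to RSRP and CQI.
y = 2.0 * (X[:, 0] + 120) + 5 * X[:, 2] + rng.normal(0, 10, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X_tr, y_tr)
print("MAE [Mbit/s]:", mean_absolute_error(y_te, model.predict(X_te)))
```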


... This data set can be created either from simulations [6]-[8] or from measurement campaigns, such as the works presented in the following. The authors of [9] and [10] compare different ML algorithms such as random forest (RF), support vector machine (SVM), and neural networks (NNs) regarding their performance in downlink (DL) throughput prediction. In [11], the throughput of 5G mmWave communication is predicted using gradient-boosted decision trees (GBDTs) and long short-term memory (LSTM) networks. ...
... The measurements took place in a test network, i.e., the base stations (BSs) as well as all connected user equipments (UEs) were under full control of the measurement team. This allowed the collection of information such as the cell load, which is typically not available in previous works [9]-[11], [14]. Five BSs with two sectors each were distributed on the test field. ...
... This section quantifies the introduced sampling error (SE), as an accurate data basis is essential for precise ML-based QoS prediction. The evaluation is enabled by the high sampling frequency of the DMEs of 100 Hz, which is much higher than the values typically used in the literature for QoS prediction [9], [11]. ...
Conference Paper
Full-text available
The use of machine learning (ML) is often proposed for improving different performance metrics of wireless communication systems. Prominent examples under discussion for the evolution of cellular networks towards 6G include predictive quality of service and predictive radio resource management. For this, ML requires a vast amount of data to learn the underlying principles, which often can be considered a roadblock to the adoption of ML in commercial networks. This motivates to study how the data collection procedure can be improved. Even though radio environment correlations have been described extensively in the literature, their link to improving the adoption of ML algorithms is rarely studied. If the sampling frequency of physical layer metrics can be lowered due to the correlation structure of the radio environment-without impairing the suitability of the data for implementing different ML applications-less complex devices can be used, and processing can be designed more power-saving, which is particularly important for mobile devices. First, as an example, the correlation of reference signal received power values depending on the distance traveled by the vehicle is analyzed based on data from a measurement campaign. It reveals that the correlation of different radio environments can be used to optimize the data collection procedures. The main contribution of this work is to demonstrate that with our approach, the sampling frequency for collecting measurements can be strongly reduced if a small increase in the prediction error is tolerable.
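The core idea, lowering the sampling frequency where the radio environment is strongly correlated, can be sketched as follows. The synthetic RSRP trace and the 0.9 correlation threshold are assumptions, not values from the paper.

```python
# Illustrative sketch: estimate how strongly RSRP samples correlate over
# traveled distance and pick a coarser sampling step while correlation stays
# high. Trace and threshold are assumed.
import numpy as np

rng = np.random.default_rng(1)
step_m = 1.0                                      # original sampling: one sample per metre
rsrp = np.cumsum(rng.normal(0, 0.3, 5000)) - 95   # synthetic, spatially correlated RSRP trace

def corr_at_lag(x, lag):
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

lags = range(1, 200)
corr = [corr_at_lag(rsrp, l) for l in lags]
max_lag = next((l for l, c in zip(lags, corr) if c < 0.9), lags[-1])
print(f"correlation stays above 0.9 up to ~{max_lag * step_m:.0f} m; "
      f"sampling every {max_lag * step_m:.0f} m instead of every {step_m:.0f} m may suffice")
```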
... Even though the concept of SVRs is not new [88], it is still used, e.g., in the domain of TPP, as presented by Wei, Kawakami, Kanai, et al. [89]. Apart from this, Schmid, Schneider, Höß, et al. [90] compare different learning algorithms, e.g., Random Forests, Linear Regression and SVRs, and conclude that SVRs are worth considering. ...
... Alternatively, the probability of a change of location can be estimated. A comparison between a grid-based map and different LB methods was presented by Schmid, Schneider, Höß, et al. [90]. Here the authors take the future location as given, so the location prediction error is ignored, which proved to be a good starting point. ...
... FIGURE 3.5: Visualisation of the preprocessing steps (geo-based feature derivation, filtering, downsampling, encoding, shifting, feature selection) used to create the dataset for the prediction of network quality parameters. The preprocessing includes the feature engineering of geo attributes, the filtering and machine learning preprocessing steps and is adopted from pre-published results [90]. ...
Thesis
Full-text available
Network communication has become a part of everyday life, and the interconnection among devices and people will increase even more in the future. A new area where this development is on the rise is the field of connected vehicles. It is especially useful for automated vehicles in order to connect the vehicles with other road users or cloud services. In particular for the latter it is beneficial to establish a mobile network connection, as it is already widely used and no additional infrastructure is needed. With the use of network communication, certain requirements come along. One of them is the reliability of the connection: certain Quality of Service (QoS) parameters need to be met. In case of degraded QoS, according to the SAE level specification, a downgrade of the automated system can be required, which may lead to a takeover maneuver in which control is returned to the driver. Since such a handover takes time, prediction is necessary to forecast the network quality for the next few seconds. Prediction of QoS parameters, especially in terms of Throughput (TP) and Latency (LA), is still a challenging task, as the wireless transmission properties of a moving mobile network connection are subject to fluctuation. In this thesis, a new approach for predicting Network Quality Parameters (NQPs) at the Transmission Control Protocol (TCP) level is presented. It combines knowledge of the environment with the low-level parameters of the mobile network. The aim of this work is to perform a comprehensive study of various models, including both Location Smoothing (LS) grid maps and Learning Based (LB) regression models. Moreover, the location independence of the models as well as their suitability for automated driving is evaluated.
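For context, a minimal sketch of a Location Smoothing (LS) grid-map baseline of the kind contrasted with learning-based regression is given below; the cell size and the synthetic measurements are illustrative assumptions.

```python
# Minimal sketch of a grid-map baseline: throughput measurements are averaged
# per grid cell and looked up by (predicted) position. Cell size and data are
# assumed, not taken from the thesis.
from collections import defaultdict
import numpy as np

CELL = 50.0  # grid cell edge length in metres (assumed)

def cell_of(x_m, y_m):
    return (int(x_m // CELL), int(y_m // CELL))

class GridMap:
    def __init__(self):
        self.sums = defaultdict(float)
        self.counts = defaultdict(int)

    def add(self, x_m, y_m, throughput_mbit):
        c = cell_of(x_m, y_m)
        self.sums[c] += throughput_mbit
        self.counts[c] += 1

    def predict(self, x_m, y_m, default=10.0):
        c = cell_of(x_m, y_m)
        return self.sums[c] / self.counts[c] if self.counts[c] else default

gm = GridMap()
rng = np.random.default_rng(2)
for _ in range(1000):
    x, y = rng.uniform(0, 1000, 2)
    gm.add(x, y, 20 + 0.01 * x + rng.normal(0, 3))   # synthetic measurements
print("predicted throughput at (120 m, 480 m):", round(gm.predict(120, 480), 1), "Mbit/s")
```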
... Overall, the presented categorization is far from complete or exhaustive, but it nevertheless shows how broad the topic of throughput prediction is. There are comprehensive surveys available that provide an in-depth analysis of various ML models presented and evaluated by the authors, e.g., [32,33]. Regarding the abovementioned criteria, in our work, we apply an LSTM ANN and implement a single-step, short-term prediction of 4 s. ...
Article
Full-text available
Predicting throughput is essential to reduce latency in time-critical services like video streaming, which constitutes a significant portion of mobile network traffic. The video player continuously monitors network throughput during playback and adjusts the video quality according to the network conditions. This means that the quality of the video depends on the player’s ability to predict network throughput accurately, which can be challenging in the unpredictable environment of mobile networks. To improve the prediction accuracy, we grouped the throughput trace into clusters taking into account the similarity of their mean and variance. Once we distinguished the similar trace fragments, we built a separate LSTM predictive model for each cluster. For the experiment, we used traffic captured from 5G networks generated by individual user equipment (UE) in fixed and mobile scenarios. Our results show that the prior grouping of the network traces improved the prediction compared to the global model operating on the whole trace.
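A rough sketch of the described clustering step is given below. For brevity, a plain linear regressor stands in for the per-cluster LSTMs, and the fragment length and synthetic trace are assumptions.

```python
# Hedged sketch: split a throughput trace into fixed-length fragments, cluster
# them by mean and variance, and keep a separate predictor per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
trace = np.concatenate([rng.normal(50, 5, 600), rng.normal(200, 30, 600)])  # synthetic throughput
FRAG = 20  # fragment length (assumed)

frags = trace[: len(trace) // FRAG * FRAG].reshape(-1, FRAG)
stats = np.column_stack([frags.mean(axis=1), frags.var(axis=1)])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(stats)

models = {}
for c in np.unique(labels):
    X = frags[labels == c][:, :-1]   # first FRAG-1 samples as input window
    y = frags[labels == c][:, -1]    # last sample as the prediction target
    models[c] = LinearRegression().fit(X, y)

for c, m in models.items():
    print(f"cluster {c}: predicted next sample {m.predict(frags[labels == c][:1, :-1])[0]:.1f} Mbit/s")
```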
... D. Minovski et al. [16] analyzed Support Vector Regression (SVR) for throughput prediction in different environmental scenarios. SVR, Linear Regression (LR), and Random Forest Regressor (RFR) methods are also presented in [17]. Deep neural network approaches such as LSTM are discussed in [18], [19]. ...
... In [31] and [32] the authors present a measurement campaign and evaluate several ML models regarding their performance for downlink (DL) throughput prediction. One limitation is that the authors measure in a public network and thus cannot consider features such as the cell load. ...
Article
Full-text available
As cellular networks evolve towards the 6th generation, machine learning is seen as a key enabling technology to improve the capabilities of the network. Machine learning provides a methodology for predictive systems, which, in turn, can make networks become proactive. This proactive behavior of the network can be leveraged to sustain, for example, a specific quality of service requirement. With predictive quality of service, a wide variety of new use cases, both safety- and entertainment-related, are emerging, especially in the automotive sector. Therefore, in this work, we consider maximum throughput prediction enhancing, for example, streaming or high-definition mapping applications. We discuss the entire machine learning workflow highlighting less regarded aspects such as the detailed sampling procedures, the in-depth analysis of the dataset characteristics, the effects of splits in the provided results, and the data availability. Reliable machine learning models need to face a lot of challenges during their lifecycle. We highlight how confidence can be built on machine learning technologies by better understanding the underlying characteristics of the collected data. We discuss feature engineering and the effects of different splits for the training processes, showcasing that random splits might overestimate performance by more than twofold. Moreover, we investigate diverse sets of input features, where network information proved to be most effective, cutting the error by half. Part of our contribution is the validation of multiple machine learning models within diverse scenarios. We also use explainable AI to show that machine learning can learn underlying principles of wireless networks without being explicitly programmed. Our data is collected from a deployed network that was under full control of the measurement team and covered different vehicular scenarios and radio environments.
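The reported split effect can be illustrated with a small, hedged experiment: on synthetic data with strong per-drive correlation, a random split leaks information between train and test, while a grouped split does not. The group structure and model below are assumptions, not the paper's setup.

```python
# Illustrative comparison of a random split versus a split by drive ("group").
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupShuffleSplit, train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(4)
groups = np.repeat(np.arange(20), 100)               # 20 drives, 100 samples each
X = rng.normal(size=(2000, 5)) + groups[:, None]     # strong per-drive correlation
y = X.sum(axis=1) + rng.normal(0, 1, 2000)

def score(tr, te):
    m = RandomForestRegressor(n_estimators=50, random_state=0).fit(X[tr], y[tr])
    return mean_absolute_error(y[te], m.predict(X[te]))

tr, te = train_test_split(np.arange(2000), test_size=0.3, random_state=0)
print("random split MAE :", round(score(tr, te), 2))

tr, te = next(GroupShuffleSplit(test_size=0.3, random_state=0).split(X, y, groups))
print("grouped split MAE:", round(score(tr, te), 2))
```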
... GPS coordinates are mapped to this aligned route while applying additional consistency checks, e.g., filtering positions too far away from the route or physically impossible heading deviations introduced by the inaccuracy of raw GPS. This transformation rectifies timings for each segment and further makes it possible to augment additional OSM-based information, e.g., road segment IDs [14] or amenity characteristics [15]. ...
Chapter
Full-text available
Data-driven approaches will be a pivotal tool to interpret traffic data and to optimise operations to enable more efficient, individual, public transport. Whereas nowadays data remain a proprietary resource, Finland pioneered an open ecosystem. In this work, we present an architecture to acquire heterogeneous data sources and different data refinement strategies at the edge-level, such as a map-matching approach for inaccurate vehicle GPS traces. Finally, data quality monitoring at the cloud-level is highlighted by introducing and applying an Errors-to-Data Ratio (EDR) metric.
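A loose sketch of such edge-level refinement, dropping GPS fixes that lie too far from the matched route and reporting an EDR-style quality figure, is given below; the route, the 50 m tolerance, and the distance-to-vertex simplification are assumptions.

```python
# Rough sketch: flag raw GPS fixes that lie too far from the matched route and
# report an Errors-to-Data Ratio style figure. Not the chapter's actual metric.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

route = [(60.170, 24.940), (60.171, 24.945), (60.172, 24.950)]        # matched route vertices (assumed)
fixes = [(60.1701, 24.9401), (60.1805, 24.9450), (60.1721, 24.9502)]  # raw GPS fixes (assumed)
MAX_OFFSET_M = 50.0                                                   # assumed tolerance

def off_route(fix):
    # Simplified: distance to the nearest route vertex instead of to the segment.
    return min(haversine_m(*fix, *vertex) for vertex in route) > MAX_OFFSET_M

errors = [f for f in fixes if off_route(f)]
print(f"EDR-style ratio: {len(errors)}/{len(fixes)} = {len(errors) / len(fixes):.2f}")
```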
... In order to increase the performance and reliability of the UAV-to-server connection, QoS modeling and prediction need to be investigated in future work. Thereby, the experience in QoS prediction gained in previous projects on vehicle-to-server communication in the automotive sector will be used to accomplish these enhancements [22], [23]. ...
Preprint
Within this paper, requirements for server to Unmanned Aerial Vehicle (UAV) communication over the mobile network are evaluated. It is examined whether a reliable cellular network communication can be accomplished with current Long Term Evolution (LTE) network technologies or whether the 5th Generation (5G) network is indispensable. Moreover, enhancements for improving the channel quality on the UAV side are evaluated. Therefore, parameters like data rate, latency, message size and reliability for Command and Control (C&C) and application data are determined. Furthermore, possible improvements regarding interference mitigation in the up- and downlink of the UAV are discussed. For this purpose, results from publications of the 3rd Generation Partnership Project (3GPP) and from surveys regarding UAVs and mobile networks are presented. This work shows that, for C&C use cases like steering to waypoints, the latency and the data rate of the LTE network are sufficient, but reliability problems can occur. Furthermore, the usability of standard protocols for computer networks like the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) is discussed. There are also multimodal implementations of these protocols, like MultiPath TCP (MPTCP), which can be adapted into the UAV's communication system in order to increase reliability through multiple communication channels. Finally, applications for Long Range (LoRa) direct communication in terms of supporting the cellular network of the UAV are considered.
... Communication provides information to the decision layer, specifically ego-vehicle localization from GNSS+INS fusion. The surrounding condition is also considered from intelligent infrastructure using cellular technology, performing a link with virtual traffic lights through a mesh network over Wireless Fidelity (WiFi) with an ad-hoc network using the Optimized Link State Routing Protocol (OLSR) [248,249]. ...
Thesis
Full-text available
In the last decade, an increasing trend towards automation of vehicles has arisen, creating a significant change in mobility, which will profoundly affect the people’s way of life, the logistics of goods, and other sectors dependent on transportation. The development of new driving technologies will cause a great impact on transportation services soon, having a remarkable effect on the economic, natural, and social environment. In the development of high driving automation in structured environments, both safety and comfort, as part of new driving functionalities, are not yet described in a standardized way. As testing methods are using more and more simulation techniques, the existing developments must be adapted to this process. For instance, as trajectory tracking technologies are essential enablers for highly automated vehicles, thorough verifications must be applied in related applications such as vehicle motion control and parameter estimation. Moreover, in-vehicle technologies must be robust enough to meet high safety requirements, improving redundancy to support a fail-safe operation. Considering the mentioned premises, this Ph.D. Thesis targets the design and implementation of a framework to achieve highly Automated Driving Systems (ADS) considering crucial aspects, such as real-time capability, robustness, operating range, and easy parameter tuning. Also, scalability is a key aspect, which allows covering a broad range of vehicle platforms, from two-seated cars up to full-size transit buses. To develop the contributions related to this work, a study of the current state of the art in high driving automation technologies is carried out. Then, a two-step method is proposed addressing the validation of both simulation vehicle models and ADS. Novel model-based predictive formulations are introduced to improve safety and comfort in the trajectory tracking process. Finally, malfunction scenarios are assessed to improve safety in urban settings, proposing a fallback strategy based on dead-reckoning to minimize risk conditions.
Article
Network communication has become a part of everyday life, and the interconnection among devices and people will increase even more in the future. Nevertheless, prediction of Quality of Service parameters, particularly throughput, is quite a challenging task. In this survey, we provide an extensive insight into the literature on Transmission Control Protocol throughput prediction. The goal is to provide an overview of the techniques used and to elaborate on open aspects and white spots in this area. We assessed more than 35 approaches spanning from equation-based over various time smoothing to modern learning and location smoothing methods. In addition, different error functions for the evaluation of the approaches as well as publicly available recording tools and datasets are discussed. To conclude, we point out open challenges, especially in the area of moving mobile network clients. The use of throughput prediction not only enables a more efficient use of the available bandwidth; the techniques shown in this work also result in more robust and stable communication.
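For reference, the error functions most commonly compared in such surveys can be computed as follows; the sample values are made up.

```python
# Small sketch of common error functions for throughput predictors.
import numpy as np

y_true = np.array([12.0, 35.0, 8.0, 50.0])   # measured throughput [Mbit/s]
y_pred = np.array([10.0, 30.0, 9.0, 58.0])   # predicted throughput [Mbit/s]

err = y_pred - y_true
print("MAE :", np.mean(np.abs(err)))
print("RMSE:", np.sqrt(np.mean(err ** 2)))
print("MAPE:", 100 * np.mean(np.abs(err) / y_true), "%")
```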
Article
Throughput prediction is crucial for reducing latency in time-critical services. We study the attention-based LSTM model for predicting future throughput. First, we collected the TCP logs and throughputs in LTE networks and transformed them using CUBIC and BBR trace log data. Then, we use the sliding window method to create input data for the prediction model. Finally, we trained the LSTM model with an attention mechanism. In the experiment, the proposed method shows lower normalized RMSEs than the other method.
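The sliding-window preparation mentioned above can be sketched in a few lines; the window length is an assumption and the attention-LSTM itself is omitted.

```python
# Sliding-window construction: past throughput samples become the model input,
# the next sample the prediction target.
import numpy as np

def sliding_windows(series, window):
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X, y

trace = [22.1, 24.0, 19.5, 30.2, 28.7, 26.1, 31.0, 29.4]  # throughput samples [Mbit/s]
X, y = sliding_windows(trace, window=3)
print(X.shape, y.shape)   # (5, 3) (5,)
print(X[0], "->", y[0])   # [22.1 24.  19.5] -> 30.2
```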
Conference Paper
Mobile communication has become a part of everyday life and is considered to support reliability and safety in traffic use cases such as conditionally automated driving. Nevertheless, prediction of Quality of Service parameters, particularly throughput, is still a challenging task while on the move. Whereas most approaches in this research field rely on historical data measurements mapped to the corresponding coordinates in the area of interest, this paper proposes a throughput prediction method that focuses on a location-independent approach. In order to compensate for the missing positioning information, mainly used for spatial clustering, our model uses low-level mobile network parameters, improved by additional feature engineering to retrieve abstracted location information, e.g., surrounding building size and street type. Thus, the major advantage of our method is its applicability to new regions without the prerequisite of conducting an extensive measurement campaign in advance. Therefore, we embed analysis results for underlying temporal relations in the design of different deep neural network types. Finally, model performances are evaluated and compared to traditional models, such as the support vector or random forest regression, which were harnessed in previous investigations.
Conference Paper
Full-text available
The throughput of a cellular network depends on a number of factors such as radio technology, limitations of device hardware (e.g., chipsets, antennae), physical layer effects (interference, fading, etc.), node density and demand, user mobility, and the infrastructure of Mobile Network Operators (MNOs). Therefore, understanding and identifying the key factors of cellular network performance that affect end-users' experience is a challenging task. We use a dataset collected using netradar, a platform that measures cellular network performance crowd-sourced from mobile user devices. Using this dataset, we develop a methodology (a classifier using a machine learning approach) for understanding cellular network performance. We examine key characteristics of cellular networks related to throughput from the perspective of mobile user activity, MNO, smartphone models, link stability, location and time of day. We perform a network-wide correlation and statistical analysis to obtain a basic understanding of the influence of individual factors. We use a machine learning approach to identify the important features and to understand the relationships between them. These features are then used to build a model to classify the stability of the cellular network based on the data reception characteristics of the user. We show that it is possible to classify reasons for network instability using minimal cellular network metrics with up to 90% accuracy.
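A hedged sketch of this kind of stability classification is shown below; the features, the synthetic labels, and the notion of stability are illustrative, not those of the netradar study.

```python
# Toy stability classifier: predict whether a connection is "stable" from a
# handful of cellular metrics with a random forest. Data and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 1000
X = np.column_stack([
    rng.uniform(-120, -70, n),   # RSRP [dBm]
    rng.uniform(0, 120, n),      # speed [km/h]
    rng.integers(0, 24, n),      # hour of day
])
y = ((X[:, 0] > -100) & (X[:, 1] < 80)).astype(int)   # synthetic "stable" label

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```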
Conference Paper
Full-text available
High-quality video streaming is increasingly popular among mobile users, especially with the rise of high-speed LTE networks. But despite the high network capacity of LTE, streaming videos may suffer from disruptions since the quality of the video depends on the network bandwidth, which in turn depends on the location of the user with respect to their cell tower, crowd levels, and objects like buildings and trees. Maintaining a good video playback experience becomes even more challenging if the user is moving fast in a vehicle, the location is changing rapidly and the available bandwidth fluctuates. In this paper we introduce GeoStream, a video streaming system that relies on the use of geostatistics to analyze spatio-temporal bandwidth. Data measured from users' streaming videos while they are commuting is collected in order to predict future bandwidth availability in unknown locations. Our approach investigates and leverages the relationship between the separation distance of sample bandwidth points, the time they were captured, and the semivariance, expressed by a variogram plot, to finally predict the future bandwidth at unknown locations. Using the datasets from GTube, our experimental results show improved performance.
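The geostatistical ingredient, an empirical semivariogram of bandwidth over separation distance, can be sketched as follows; the one-dimensional road positions, bin width, and synthetic data are assumptions.

```python
# Empirical semivariogram sketch: semivariance of bandwidth samples as a
# function of separation distance, which kriging-style interpolation would fit.
import numpy as np

rng = np.random.default_rng(6)
pos = np.sort(rng.uniform(0, 1000, 200))                    # positions along a road [m]
bw = 30 + 10 * np.sin(pos / 150) + rng.normal(0, 2, 200)    # bandwidth samples [Mbit/s]

dist = np.abs(pos[:, None] - pos[None, :])                  # pairwise separation distances
gamma = 0.5 * (bw[:, None] - bw[None, :]) ** 2              # pairwise semivariances

bins = np.arange(0, 500, 50)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (dist > lo) & (dist <= hi)
    print(f"lag {lo:3.0f}-{hi:3.0f} m: semivariance ~ {gamma[mask].mean():.1f}")
```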
Article
Full-text available
Predicting network throughput is important for network-aware applications. Network throughput depends on a number of factors, and many throughput prediction methods have been proposed. However, many of these methods suffer from the fact that the distribution of traffic fluctuations is unclear and that the scale and bandwidth of networks are rapidly increasing. Furthermore, virtual machines are used as platforms in many network research and service fields, and they can affect network measurement. A prediction method that uses pairs of differently sized connections has been proposed. This method, which we call connection pair, features a small probe transfer over TCP that can be used to predict the throughput of a large data transfer. We focus on measurements, analyses, and modeling for precise prediction results. We first clarified that the actual throughput for the connection pair changes non-linearly and monotonically, with noise. Second, we built a previously proposed predictor using the same training data sets as for our proposed method, and it proved unsuitable for capturing the above characteristics. We propose a throughput prediction method based on the connection pair that uses ν-support vector regression and the polynomial kernel to deal with prediction models represented as non-linear and continuous monotonic functions. The prediction results of our method are more accurate than those of the previous predictor. Moreover, under an unstable network state, the drop in accuracy is also smaller than that of the previous predictor.
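In the spirit of the connection-pair method, a small probe throughput can be mapped to a bulk-transfer throughput with ν-SVR and a polynomial kernel, as sketched below; the synthetic monotonic relation and hyperparameters are assumptions.

```python
# Hedged sketch: predict bulk-transfer throughput from a small probe transfer
# with nu-SVR and a polynomial kernel on synthetic, noisy, monotonic data.
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(7)
probe = rng.uniform(1, 50, 300)                          # probe throughput [Mbit/s]
bulk = 5 + 0.2 * probe ** 1.7 + rng.normal(0, 3, 300)    # bulk-transfer throughput [Mbit/s]

model = NuSVR(kernel="poly", degree=3, C=10.0, nu=0.5)
model.fit(probe.reshape(-1, 1), bulk)
print("predicted bulk throughput for a 20 Mbit/s probe:",
      round(float(model.predict([[20.0]])[0]), 1), "Mbit/s")
```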
Article
Full-text available
Highly automated driving constitutes a temporary transfer of the primary driving task from the driver to the automated vehicle. In case of system limits, drivers take back control of the vehicle. This study investigates the effect of varying traffic situations and non-driving related tasks on the take-over process and quality. The experiment is conducted in a high-fidelity driving simulator. The standardized visual Surrogate Reference Task (SuRT) and the cognitive n-back Task are used to simulate the non-driving related tasks. Participants experience four different traffic situations. Results of this experiment show a strong influence of the traffic situations on the take-over quality in a highway setting, if the traffic density is high. The non-driving related tasks SuRT and the n-back Task show similar effects on the take-over process with a higher total number of collisions by the SuRT in the high density traffic situation.
Conference Paper
Full-text available
In this paper, the authors demonstrate a principle to capture the strong correlation between location and WWAN network bandwidth in the form of bandwidth-road maps. They show how these maps can be used to intelligently schedule traffic in a multi-homed mobile network. Preliminary results show that this approach can effectively improve the user experience of on-board audio streaming.
Article
Throughput prediction is one of the promising techniques to improve the quality of service (QoS) and quality of experience (QoE) of mobile applications. To address the problem of accurately predicting the future throughput distribution during the whole session, which can exhibit large throughput fluctuations in different scenarios (especially scenarios with moving users), we propose a history-based throughput prediction method that utilizes time series analysis and machine learning techniques for mobile network communication. This method is called Hybrid Prediction with the Autoregressive Model and Hidden Markov Model (HOAH). Different from existing methods, HOAH uses a Support Vector Machine (SVM) to classify the throughput transition into two classes, and predicts the transmission control protocol (TCP) throughput by switching between the Autoregressive Model (AR Model) and the Gaussian Mixture Model-Hidden Markov Model (GMM-HMM). We conduct field experiments to evaluate the proposed method in seven different scenarios. The results show that HOAH can predict future throughput effectively and decreases the prediction error by a maximum of 55.95% compared with other methods.
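A strongly simplified sketch of the switching idea follows: a classifier picks the throughput regime for the recent window, and a different predictor is used per regime. A least-squares AR model and a window mean stand in here for the paper's AR and GMM-HMM branches, respectively; the toy training data are assumptions.

```python
# Simplified regime-switching sketch: SVM decides the regime, then either an
# AR-style forecast or a window mean is used. Not the paper's GMM-HMM branch.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(8)
window = np.r_[rng.normal(40, 2, 30), rng.normal(40, 15, 10)]  # recent throughput samples [Mbit/s]

def ar_forecast(x, p=3):
    # Least-squares AR(p) one-step forecast (simplified stand-in for the AR branch).
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    A = np.column_stack([np.ones(len(x) - p), X])
    coef, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    return coef[0] + coef[1:] @ x[-p:]

# Classifier on (mean, std) of past windows; labels: 0 = smooth, 1 = fluctuating (toy training set).
clf = SVC().fit([[40, 2], [41, 3], [39, 14], [42, 16]], [0, 0, 1, 1])
regime = int(clf.predict([[window.mean(), window.std()]])[0])

pred = ar_forecast(window) if regime == 0 else float(window[-10:].mean())
print("regime:", regime, "| one-step prediction:", round(float(pred), 1), "Mbit/s")
```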
Article
The ever-increasing demand for seamless high-definition video streaming, along with the widespread adoption of the Dynamic Adaptive Streaming over HTTP (DASH) standard, has been a major driver of the large amount of research on bitrate adaptation algorithms. The complexity and variability of the video content and of the mobile wireless channel make this an ideal application for learning approaches. Here, we present D-DASH, a framework that combines Deep Learning and Reinforcement Learning techniques to optimize the Quality of Experience (QoE) of DASH. Different learning architectures are proposed and assessed, combining feed-forward and recurrent deep neural networks with advanced strategies. D-DASH designs are thoroughly evaluated against prominent algorithms from the state-of-the-art, both heuristic and learning-based, evaluating performance indicators such as image quality across video segments and freezing/rebuffering events. Our numerical results are obtained on real and simulated channel traces and show the superiority of D-DASH in nearly all the considered quality metrics. Besides yielding a considerably higher QoE, the D-DASH framework exhibits faster convergence to the rate-selection strategy than the other learning algorithms considered in the study. This makes it possible to shorten the training phase, making D-DASH a good candidate for client-side runtime learning.
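The QoE objective such learning approaches optimize can be illustrated with a toy reward that trades off segment quality, quality switches, and rebuffering; the weights below are assumptions, not those of D-DASH.

```python
# Toy QoE-style reward: reward the chosen segment quality, penalize quality
# switches and rebuffering time. Weights are illustrative only.
def qoe_reward(quality, prev_quality, rebuffer_s, w_switch=0.5, w_rebuf=4.0):
    return quality - w_switch * abs(quality - prev_quality) - w_rebuf * rebuffer_s

print(qoe_reward(quality=4, prev_quality=2, rebuffer_s=0.5))  # 4 - 1.0 - 2.0 = 1.0
```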
Article
Predicting the expected throughput of TCP is important for several aspects, such as determining handover criteria for future multihomed mobile nodes or determining the expected throughput of a given MPTCP subflow for load-balancing reasons. However, this is challenging due to the time-varying behavior of the underlying network characteristics. In this paper, we present a genetic-algorithm-based prediction model for estimating TCP throughput values. Our approach tries to find the best-matching combination of mathematical functions that approximates a given time series of TCP throughput samples using a genetic algorithm. Based on collected historical data points of measured TCP throughput samples, our algorithm estimates the expected throughput over time. We evaluate the quality of the prediction using different selection and diversity strategies for creating new chromosomes. Also, we explore the use of different fitness functions in order to evaluate the goodness of a chromosome. The goal is to show how different tunings of the genetic algorithm may impact the prediction. Using extensive simulations over several TCP throughput traces, we find that the genetic algorithm successfully finds reasonably matching mathematical functions that describe the sampled TCP throughput values with good fidelity. We also explore the effectiveness of predicting time series throughput samples for a given prediction horizon and estimate the prediction error and confidence.
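A loose sketch of the genetic-algorithm idea, evolving the parameters of a small function combination so that it approximates a throughput trace with negative MSE as fitness, is given below; the function family, population size, and mutation scale are assumptions.

```python
# Toy genetic algorithm: fit a + b*t + c*sin(t/w) to a synthetic throughput
# trace; fitness is the negative mean squared error.
import numpy as np

rng = np.random.default_rng(9)
t = np.arange(100.0)
trace = 20 + 0.1 * t + 5 * np.sin(t / 8) + rng.normal(0, 1, 100)  # synthetic TCP throughput

def fitness(chrom):
    a, b, c, w = chrom
    return -np.mean((a + b * t + c * np.sin(t / w) - trace) ** 2)

pop = rng.normal(0, 1, (50, 4)) * [10, 0.2, 5, 10] + [20, 0, 0, 8]  # initial population
for _ in range(200):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]                                     # keep the 10 fittest
    children = parents[rng.integers(0, 10, 40)] + rng.normal(0, 0.1, (40, 4))   # mutated copies
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
print("best chromosome (a, b, c, w):", np.round(best, 2))
```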
Conference Paper
The ongoing success of smartphones and tablet computers, combined with the widespread deployment of cellular network infrastructure, has paved the way for ubiquitous Internet access. Access to mobile services has become a commodity for many commuters on public transport vehicles. On their daily trips to work and back, however, people often experience varying throughput rates due to the different capacities of network cells and the channel quality to the cell site. Links with reduced or no throughput are clearly unfavorable when users need to download large files or engage in synchronous communication activities. We thus introduce the notion of opportunistic personal bandwidth maps (OPBMs) in this paper. OPBMs allow the user to schedule activities with high throughput demand to parts of their journey where the bandwidth requirements are likely to be met. Users create their own OPBM by means of opportunistically monitoring their throughput during access to the cellular network and consolidating these individual measurements. Due to the opportunistic nature of our approach, no additional data transfers are required. Our measurements for more than 70 commutes show that the achievable throughput for road segments is highly variable across different trips. Still, the availability of OPBMs allows users to make decisions (e.g. to download a large file) when traveling along the segment with highest expected throughput.
Conference Paper
Vehicular communication using cellular networks is an increasingly important issue and provides additional functionality as well as services with highly topical information. Due to the mobility of vehicles, network coverage or current utilization, frequent changes in the communication properties occur, for example spontaneous significant changes in the available bandwidth. Not all applications can handle such delays or interruptions very well. Moreover, communication scheduling is not optimal because communication changes come without any prior warning. In many cases it would be helpful if additional knowledge about future properties were available. In this work we show how to exploit vehicles that collect data about their current communication conditions while driving and transfer the data to a central server database. On the server side, we can make predictions about future connectivity by taking into account various influences and the network dynamics. The vehicles can simultaneously request predictions for their surrounding area and benefit from the historical network data of other vehicles. Finally, such predictions are provided to the different vehicular applications and can be used in various ways. We describe the process of establishing such a Connectivity Map, including the necessary steps to collect the data and to predict network properties to support vehicular applications with additional context information. Thereby, we also investigate the accuracy of our Connectivity Map in an experimental evaluation scenario.
Conference Paper
The number of connected cars is growing continuously. Many car manufacturers offer systems which provide online content to the passengers, mostly infotainment and information data. Although new wireless data communication technologies like LTE will offer much more performance, the increasing number of mobile data users in combination with higher individual data traffic will nullify the higher capacities. Mobile data bandwidth will remain a scarce commodity. If next-generation driver assistance systems are to be supplied with data from central servers in time and according to their individual priorities, it is important to make optimal use of the available resources. Considering the changing channel conditions of the wireless links offers potential for improvements, since the channel quality has a direct influence on the utilization of the data channel. HSPA and LTE use the so-called Channel Quality Indicator (CQI) as a measure of the channel conditions. Summarized in a map, server-side applications can use a priori knowledge of the location-dependent channel quality to optimize the data distribution to mobile clients. This paper describes the process of creating CQI maps for HSDPA networks, from the initial collection of raw data to the final validation of the proposed concept.
OpenStreetMap contributors. Planet dump retrieved from https://planet.osm.org