Conference Paper

Predicting strongest cell on secondary carrier using primary carrier data

... To reduce the signalling overhead of inter-frequency user measurements, machine learning (ML) has the potential to play a major role; specifically, it can learn from network data and identify patterns useful for predicting the strongest cell (in terms of RSRP) on a secondary frequency for a user without additional inter-frequency user measurements. Ryden et al. [1] and Zhohov et al. [2] have treated this prediction task as a multi-class classification problem, where the serving cell predicts the strongest (or best) cell on a secondary carrier for a user based on the measurements already available on the serving-carrier cells, without additional signalling required from the users. Zhohov et al. [2] also discussed the advantages of replacing traditional methods with ML-based predictions to reduce the interruption time caused by the delay of handover procedures; ML-based solutions also reduce the signalling overhead by avoiding inter-frequency user measurements. ...
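As a concrete illustration of this multi-class formulation, here is a minimal sketch on synthetic data. The RandomForestClassifier choice echoes the Random Forest approach mentioned elsewhere on this page; the feature dimensions and signal model are assumptions made for the sketch, not details of [1] or [2].

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: RSRP (dBm) measured on the serving-carrier cells
# is the feature vector; the label is the index of the strongest
# secondary-carrier cell (multi-class target).
n_samples, n_serving, n_secondary = 5000, 8, 4
X = rng.normal(-95.0, 10.0, size=(n_samples, n_serving))

# Tie the label to the features so the classifier has signal to learn.
W = rng.normal(size=(n_serving, n_secondary))
y = np.argmax(X @ W + rng.normal(scale=5.0, size=(n_samples, n_secondary)), axis=1)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("hold-out accuracy:", clf.score(X_test, y_test))
```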
... It is noteworthy that in both [1] and [2], coverage predictions are conducted centrally on the network side. This approach excludes the use of sensitive user equipment (UE) information, such as geographical location, for training purposes. ...
... This solution, however, does not alleviate the privacy concerns related to UE location, as it depends on sharing features of the localization pilot signals received by the UEs, which a central server may still exploit to recover UE locations under near line-of-sight propagation conditions. In contrast, Haliloglu et al. [4] chose to leverage UE location data for RSRP prediction, a valuable component that can afterwards be used for coverage prediction as in [1,2], while protecting the privacy of the training data. Their approach adopts a distributed and privacy-preserving ML technique, differential privacy (DP)-based Federated Learning (FL) [5,6,7], to allow distributed user devices to collaboratively train a global machine learning model without sharing training data. ...
Preprint
Full-text available
In 5G cellular networks, Machine Learning (ML) can be exploited to predict whether a user equipment (UE) is in the coverage area of a neighbouring cell. This could improve crucial cellular network functionalities, such as handovers, interference mitigation and carrier aggregation. In this paper, we study the enhancement of UEs' privacy in a Differentially Private Federated Learning (DP-FL) scheme relying on the sampled Gaussian mechanism, assuming an honest-but-curious threat model. With this technique, the UE's privacy is protected by perturbing the averaged updates computed at the server; in addition, client subsampling yields amplified privacy and reduced communication overhead. We demonstrate that models trained with our approach can achieve a better privacy-utility tradeoff than previous works. In addition, we conduct a membership inference attack to study the factors that affect the empirical privacy protection of the training data. We make a novel observation suggesting that, for the coverage prediction task, larger datasets and/or smaller ML models provide stronger empirical privacy protection to training data. Beyond the task we consider, this observation could be a useful insight for dataset curation or model architecture selection in other domains and warrants additional investigation.
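A minimal sketch of one server round under the sampled Gaussian mechanism as the abstract describes it (client subsampling, per-client clipping, Gaussian noise on the aggregate). All parameter names and values are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_fl_round(client_updates, sample_rate=0.1, clip_norm=1.0, noise_mult=1.1):
    """One DP-FL server round with the sampled Gaussian mechanism (sketch).

    client_updates: list of 1-D arrays (per-client model deltas). Selected
    updates are clipped to L2 norm clip_norm; Gaussian noise with standard
    deviation noise_mult * clip_norm is added to the sum before averaging,
    masking any individual contribution.
    """
    # Poisson subsampling of clients: amplifies privacy, cuts communication.
    selected = [u for u in client_updates if rng.random() < sample_rate]
    if not selected:
        return None
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12)) for u in selected]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_mult * clip_norm, size=selected[0].shape)
    return noisy_sum / len(selected)

updates = [rng.normal(size=10) for _ in range(100)]  # toy per-client updates
print(dp_fl_round(updates))
```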
... However, the proposed method does not address the inter-frequency HO problem. In [4], the authors propose an ML-based approach to handling the inter-frequency HO problem. The goal of this technique is to predict the strongest cell on the secondary carrier based on primary carrier data. ...
... In this method, the UE still has to perform DL measurements and report them to the BS, resulting in large signaling overhead. Similar to [4], the authors in [6] proposed a neural network (NN) based approach to anticipate HOs and blind spots over a Wi-Fi network. In this approach, the NN considers the past M samples of the received signal strength indicator (RSSI) and predicts HOs and blind spots for the UE. ...
... RSSI is measured at the UE, leading to faster battery drain. Motivated by the idea of [4] and considering the advantages discussed in the previous section, we propose an ML approach for predicting the existence or nonexistence of secondary carrier coverage based on the UL reference signals already available on the primary carrier. ...
Preprint
A typical handover requires a sequence of complex signaling between a UE, the serving base station, and the target base station. In many handover schemes, downlink-based measurements are transferred from the user equipment to the serving base station, and the handover decision is made on these measurements. These measurements, together with the signaling between the user equipment and the serving base station, are computationally expensive and can potentially drain the user equipment battery. Moreover, future networks will be densely deployed with multiple frequency layers, rendering current handover mechanisms sub-optimal and necessitating newer methods that can improve energy efficiency. In this study, we investigate an ML-based approach to secondary carrier prediction for inter-frequency handover using uplink reference signals.
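The excerpts above frame this as a binary task: predicting the existence or nonexistence of secondary-carrier coverage from uplink reference signals already available at the base station. A minimal sketch under assumed synthetic features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Assumed features: per-antenna/beam uplink reference-signal powers seen by
# the serving BS; label: whether the UE has secondary-carrier coverage.
n_samples, n_features = 4000, 16
X = rng.normal(size=(n_samples, n_features))
y = (X[:, :4].mean(axis=1) + 0.3 * rng.normal(size=n_samples) > 0).astype(int)

clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```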
... Survival models are widely used when the response is subject to censoring; however, less effort has been put into modeling censored predictors. Censoring due to a lower limit of detection is common in data measured with instruments lacking the precision to detect small values, for instance biomedical data (Paxton et al. 1997; Hughes 1999; Lyles, Lyles, and Taylor 2000) or signal detection (Ryden et al. 2018). ...
... We use the cell-wise signal strengths of the current frequency as covariates and the maximum over an alternative frequency as the dependent variable. We use the maximum in this way because we imagine a scenario where we are interested in the potential gain of switching to the alternative frequency without having to measure it unless needed, as in Ryden et al. (2018) and Svahn et al. (2019). All covariates in this scenario are subject to censoring. ...
Article
Full-text available
Efficient modeling of censored data, that is, data restricted by some detection limit or truncation, is important for many applications. Ignoring the censoring can be problematic, as valuable information may be missing, and restoring these censored values may significantly improve the quality of models. There are many scenarios where one may encounter censored data: survival data, interval-censored data or data with a lower limit of detection. Strategies to handle censored data are plentiful; however, little effort has been made to handle censored data of high dimension. In this article, we present a selective multiple imputation approach for predictive modeling when a large number of covariates are subject to censoring. Our method allows for iterative, subject-wise selection of covariates to impute in order to achieve a fast and accurate predictive model. The algorithm furthermore selects for imputation those values that are likely to provide important information if imputed. In contrast to previously proposed methods, our approach is fully nonparametric and therefore very flexible. We demonstrate that, in comparison to previous work, our model achieves faster execution and often comparable accuracy in a simulated example as well as in predicting signal strength in radio network data. Supplementary materials for this article are available online.
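A toy illustration of multiply imputing a left-censored covariate. This is not the authors' selective multiple imputation algorithm; the detection limit, the normal fit, and the rejection-sampling step are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy left-censored covariate: values below the detection limit are only
# known to be <= LIMIT and are stored as the limit itself.
LIMIT = -110.0                                  # e.g. a detection limit in dBm
true = rng.normal(-105.0, 8.0, size=1000)
censored = true < LIMIT
x_obs = np.where(censored, LIMIT, true)

def impute_once(x, censored_mask, limit):
    """One imputation draw: replace each censored entry with a sample from a
    normal fitted to the uncensored values, truncated to (-inf, limit]."""
    mu, sigma = x[~censored_mask].mean(), x[~censored_mask].std()
    out = x.copy()
    for i in np.flatnonzero(censored_mask):
        draw = rng.normal(mu, sigma)
        while draw > limit:                     # crude rejection sampling
            draw = rng.normal(mu, sigma)
        out[i] = draw
    return out

# Multiple imputation: repeat the draw M times and pool downstream estimates.
imputations = [impute_once(x_obs, censored, LIMIT) for _ in range(5)]
print("pooled mean estimate:", np.mean([im.mean() for im in imputations]))
```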
... In chemistry or bio-applications, the true value may not be observable due to insensitive or inaccurate measuring equipment [16,24,27]. In signal processing, signals may become undetectable below a certain limit or due to interference from other signals [29], [8]. ...
... One such example is when the radio strength on a secondary carrier frequency is predicted based on radio measurements of the primary carrier frequency. In that use case, the device only performs inter-frequency measurements when it has a high probability of being in the coverage of the secondary carrier [6]. ...
Preprint
Machine learning (ML) is an important component for enabling automation in Radio Access Networks (RANs). The work on applying ML to RAN has been under development for many years and is now also drawing attention in 3GPP and Open-RAN standardization fora. A key component of multiple features, also highlighted in the recent 3GPP specification work, is the use of mobility, traffic and radio channel prediction. These types of predictions form the intelligence enablers to leverage the potential of ML for RAN, in both current and future wireless networks. This paper provides an overview, with evaluation results, of current applications that utilize such intelligence enablers; we then discuss how those enablers will likely be a cornerstone for emerging 6G use cases such as wireless energy transmission.
... They usually switch between the wireless local area network (WLAN), universal mobile telecommunications system (UMTS), and long term evolution (LTE), among others. Multiple proposals use artificial intelligence techniques, such as artificial neural networks (ANN), fuzzy logic (FL), genetic algorithms (GA), and decision trees (DT) [10][11][12][13]. In [14], the authors proposed an algorithm that optimizes the feature boundary of deep convolutional neural networks (CNN) in order to reduce overfitting, as a strategy to deal with the two-stage training process. ...
Article
Full-text available
In recent years, modern technology has advanced rapidly, and this has brought big challenges to network and application infrastructures. New devices provide users with more functionality than ever before; however, these devices depend on highly functional networks to ensure that applications work correctly. It is therefore essential for mobile networking systems to evolve in order to meet future requirements of capacity, coverage, and data rate. In addition, when a network problem happens, it can turn into something more disastrous and difficult to solve. A crucial point is the physical change of network and its difficulties, such as the loss of service continuity and the decision of which network to connect to next. In this article, a new framework is proposed to forecast the future network to which a mobile node will connect in WLAN environments. The proposed framework uses a decision-making process based on five classifiers and the user's position and acceleration data in order to anticipate the network change, reaching up to 96.75% accuracy in predicting the connection to this future network. In this way, an early change of network is achieved without packet or time loss during the switch.
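The abstract does not name the five classifiers, so the ensemble below is an assumed illustration: five common scikit-learn classifiers soft-voting over position and acceleration features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

# Assumed features: (x, y) position plus two acceleration components.
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(int)   # toy "next network" label

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("knn", KNeighborsClassifier()),
        ("svc", SVC(probability=True)),
        ("dt", DecisionTreeClassifier(random_state=0)),
    ],
    voting="soft",   # average predicted probabilities across the five models
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```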
... We extend our previous work on predicting the strongest cell on a secondary frequency layer [6], first, by including real data, which is known to have higher variance than the data typically produced by simulators, and second, by providing results not only for a single model trained on a single data set but for data sets collected from multiple base stations. This improves confidence in the results compared to the literature, which typically reports models for one or very few wireless nodes [7]-[9]. ...
... The idea of inter-frequency HO from a macro cell to a non-co-located high-frequency cell with a much smaller footprint is presented in [96]. The authors use the Random Forest classification approach and also present a load balancing use case by which efficient resource utilization for static users can be achieved. ...
Article
Full-text available
The exponential rise in mobile traffic originating from mobile devices highlights the need for making mobility management in future networks even more efficient and seamless than ever before. The Ultra-Dense Cellular Network vision, consisting of cells of varying sizes with conventional and mmWave bands, is being perceived as the panacea for the imminent capacity crunch. However, mobility challenges in an ultra-dense heterogeneous network with a motley of high-frequency and mmWave band cells will be unprecedented due to the plurality of handover instances, and the resulting signaling overhead and data interruptions for a miscellany of devices. Similarly, issues like user tracking and cell discovery for mmWave with narrow beams need to be addressed before the ambitious gains of emerging mobile networks can be realized. Mobility challenges are further highlighted when considering the 5G deliverables of multi-Gbps wireless connectivity, <1 ms latency and support for devices moving at a maximum speed of 500 km/h, to name a few. Despite its significance, few mobility surveys exist, with the majority focused on ad hoc networks. This paper is the first to provide a comprehensive survey on the panorama of mobility challenges in emerging ultra-dense mobile networks. We not only present a detailed tutorial on 5G mobility approaches and highlight key mobility risks of legacy networks, but also review key findings from recent studies and highlight the technical challenges and potential opportunities related to mobility from the perspective of emerging ultra-dense cellular networks.
... Signaling overhead can be greatly reduced if an automatic prediction of the inter-frequency signal quality is provided. Prediction of the inter-frequency radio signal strength in terms of 3GPP LTE Reference Signal Received Power (RSRP) has been discussed in [2] by means of Random Forests. In this paper, we instead consider Reference Signal Received Quality (RSRQ), as this is a better measure of actual network performance. ...
Preprint
Radio resource management in cellular networks is typically based on device measurements reported to the serving base station. Frequent measuring of signal quality on available frequencies would allow for highly reliable networks and an optimal connection at all times. However, these measurements come with costs, such as dedicated device time during which the device is unavailable for communication. To reduce these costs, we consider predictions of inter-frequency radio quality measurements that are useful for assessing potential inter-frequency handover decisions. In this contribution, we consider measurements from a live 3GPP LTE network. We demonstrate that straightforward applications of the most commonly used machine learning models are unable to provide high-accuracy predictions. Instead, we propose a novel approach with a duo-threshold for high-accuracy decision recommendations. Our approach leads to class-specific prediction accuracies as high as 92% and 95%, while drastically reducing the need for inter-frequency measurements.
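A sketch of a duo-threshold decision rule in the spirit of this abstract: only confident predictions trigger a recommendation, and the ambiguous middle band falls back to an actual inter-frequency measurement. The threshold values and names are assumptions.

```python
def duo_threshold_decision(p_good_quality, low=0.2, high=0.8):
    """Map a predicted probability of good inter-frequency RSRQ to an action.

    Confident predictions yield a recommendation; the uncertain band in the
    middle orders a real device measurement, which is how measurement load
    is reduced without sacrificing decision accuracy.
    """
    if p_good_quality >= high:
        return "handover"   # confidently good target quality
    if p_good_quality <= low:
        return "stay"       # confidently poor target quality
    return "measure"        # uncertain: fall back to a real measurement

for p in (0.05, 0.5, 0.93):
    print(p, "->", duo_threshold_decision(p))
```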
Article
Machine learning (ML) is an important component for enabling automation in radio access networks (RANs). The work on applying ML to RAN has been under development for several years and is now also drawing attention in 3GPP standardization fora. A key component of multiple features, highlighted in the recent 3GPP specification work, is the use of mobility, traffic and radio channel prediction. These types of predictions form intelligence enablers that leverage the potential of ML for RAN enhancements, in both current and future wireless networks. Our contributions are twofold: first, we provide an overview, with representative evaluation results, of current applications that utilize these intelligence enablers; next, we discuss how those enablers will likely be a cornerstone for emerging 6G use cases such as wireless energy harvesting. As the journey to 6G remains an open research area, we highlight how the development of these enablers can unlock new features in future mobile networks.
Article
Full-text available
In this paper, using stochastic geometry, we investigate the average energy efficiency (AEE) of the user terminal (UT) in the uplink of a two-tier heterogeneous network (HetNet), where the two tiers operate on separate carrier frequencies. In such a deployment, a typical UT must periodically perform the inter-frequency small cell discovery (ISCD) process in order to discover small cells in its neighborhood and benefit from the high data rate and traffic offloading opportunities that small cells present. We assume that the base stations (BSs) of each tier and the UTs are randomly located, and we derive the average ergodic rate and UT power consumption, which are later used for our AEE evaluation. The AEE incorporates the percentage of time a typical UT misses the small cell offloading opportunity as a result of the periodicity of the ISCD process. In addition, the extra power consumed by the UT due to ISCD measurements is also included. Moreover, we derive the optimal ISCD periodicity based on the UT's average energy consumption (AEC) and AEE. Our results reveal that the ISCD periodicity must be selected with the objective of either minimizing the UT's AEC or maximizing the UT's AEE.
Conference Paper
Full-text available
In this paper, we investigate the optimal inter-frequency small cell discovery (ISCD) periodicity for small cells deployed on a carrier frequency other than that of the serving macro cell. We consider small cell and user terminal (UT) positions modelled according to a homogeneous Poisson Point Process (PPP). We utilize polynomial curve fitting to approximate the percentage of time the typical UT misses the small cell offloading opportunity, for a fixed small cell density and fixed UT speed. We then derive analytically the optimal ISCD periodicity that minimizes the average UT energy consumption (EC). Furthermore, we also derive the optimal ISCD periodicity that maximizes the average energy efficiency (EE), i.e., bit-per-joule capacity. Results show that the EC-optimal ISCD periodicity always exceeds the EE-optimal ISCD periodicity, except when the average ergodic rates in both tiers are equal, in which case the two optimal periodicities coincide.
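A toy numeric search for the EC-optimal periodicity under an assumed energy model, in which measurement energy decays with the period while the missed-offloading fraction grows with it. This is illustrative only, not the paper's stochastic-geometry derivation or its closed-form optimum.

```python
import numpy as np

# Assumed toy model (all constants hypothetical):
#   avg_power(T) = E_MEAS / T + P_MACRO * p_missed(T)
# where p_missed(T) = T / (T + TAU) grows with the ISCD period T.
E_MEAS = 5.0    # energy per ISCD measurement cycle (J)
P_MACRO = 0.8   # extra UT power while the offloading opportunity is missed (W)
TAU = 30.0      # assumed time scale of entering small-cell coverage (s)

def avg_power(T):
    p_missed = T / (T + TAU)
    return E_MEAS / T + P_MACRO * p_missed

T_grid = np.linspace(1.0, 300.0, 10_000)
print("EC-optimal ISCD periodicity (s):", T_grid[np.argmin(avg_power(T_grid))])
```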
Article
Full-text available
What will 5G be? What it will not be is an incremental advance on 4G. The previous four generations of cellular technology have each been a major paradigm shift that has broken backwards compatibility. And indeed, 5G will need to be a paradigm shift that includes very high carrier frequencies with massive bandwidths, extreme base station and device densities and unprecedented numbers of antennas. But unlike the previous four generations, it will also be highly integrative: tying any new 5G air interface and spectrum together with LTE and WiFi to provide universal high-rate coverage and a seamless user experience. To support this, the core network will also have to reach unprecedented levels of flexibility and intelligence, spectrum regulation will need to be rethought and improved, and energy and cost efficiencies will become even more critical considerations. This paper discusses all of these topics, identifying key challenges for future research and preliminary 5G standardization activities, while providing a comprehensive overview of the current literature, and in particular of the papers appearing in this special issue.
Article
Full-text available
Heterogeneous network, or HetNet, deployments are one of the key enablers in providing ubiquitous coverage and capacity enhancements for LTE-Advanced networks. They play an important role in achieving the high data rate and quality of service requirements defined for next generation wireless networks. In this article we evaluate various cell discovery techniques tailored for energy-efficient detection of small cells deployed on a carrier other than that of the serving macrocell. The presented schemes are evaluated using extensive system simulations conducted in a 3GPP LTE-Advanced HetNet scenario. Shortcomings of the currently standardized mechanism are analyzed, and advantages of the evaluated schemes are presented. Both the offloading opportunity utilization and the savings in UE battery power consumption are analyzed. The results show that using the considered flexible, adaptive, and intelligent schemes for small cell discovery, significant UE power savings can be achieved with only a small loss in offloading, giving benefits both at the system level and in user experience.
Article
Full-text available
Pairwise coupling is a popular multi-class classification method that combines all pairwise comparisons between classes. This paper presents two approaches for obtaining class probabilities. Both methods can be reduced to linear systems and are easy to implement.
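One of the linear-system formulations the abstract refers to can be written as a constrained least-squares problem over the pairwise estimates. The sketch below solves its KKT system directly; it does not enforce nonnegativity, which the full method would handle.

```python
import numpy as np

def pairwise_coupling(r):
    """Class probabilities from pairwise estimates r[i, j] ~ P(y=i | y in {i, j}).

    Solves  min_p  sum_{i<j} (r[j, i] * p_i - r[i, j] * p_j)^2
            s.t.   sum(p) = 1
    via its KKT linear system.
    """
    k = r.shape[0]
    Q = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            if i == j:
                Q[i, i] = sum(r[m, i] ** 2 for m in range(k) if m != i)
            else:
                Q[i, j] = -r[j, i] * r[i, j]
    A = np.zeros((k + 1, k + 1))
    A[:k, :k] = Q
    A[:k, k] = 1.0      # Lagrange multiplier column for sum(p) = 1
    A[k, :k] = 1.0
    rhs = np.zeros(k + 1)
    rhs[k] = 1.0
    return np.linalg.solve(A, rhs)[:k]

# Toy 3-class example with roughly consistent pairwise estimates.
r = np.array([[0.0, 0.7, 0.8],
              [0.3, 0.0, 0.6],
              [0.2, 0.4, 0.0]])
print(pairwise_coupling(r))
```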
Article
Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International conference, ***, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
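A minimal sketch of the internal (out-of-bag) error estimate and the variable-importance measure the abstract highlights, using scikit-learn's random forest as a stand-in implementation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# oob_score=True computes the internal error estimate described above:
# each tree is evaluated on the bootstrap samples it never saw.
forest = RandomForestClassifier(n_estimators=200, oob_score=True,
                                max_features="sqrt", random_state=0)
forest.fit(X, y)
print("OOB accuracy estimate:", forest.oob_score_)
print("first five feature importances:", forest.feature_importances_[:5])
```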
Article
A representation and interpretation of the area under a receiver operating characteristic (ROC) curve obtained by the "rating" method, or by mathematical predictions based on patient characteristics, is presented. It is shown that in such a setting the area represents the probability that a randomly chosen diseased subject is (correctly) rated or ranked with greater suspicion than a randomly chosen non-diseased subject. Moreover, this probability of a correct ranking is the same quantity that is estimated by the already well-studied nonparametric Wilcoxon statistic. These two relationships are exploited to (a) provide rapid closed-form expressions for the approximate magnitude of the sampling variability, i.e., standard error that one uses to accompany the area under a smoothed ROC curve, (b) guide in determining the size of the sample required to provide a sufficiently reliable estimate of this area, and (c) determine how large sample sizes should be to ensure that one can statistically detect differences in the accuracy of diagnostic techniques.
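A short sketch of the two relationships described above: the area computed through the Wilcoxon/Mann-Whitney rank statistic, and the closed-form standard error approximation with Q1 = A/(2-A) and Q2 = 2A^2/(1+A).

```python
import numpy as np
from scipy.stats import rankdata

def auc_and_se(scores_pos, scores_neg):
    """AUC via the Wilcoxon identity plus the Hanley-McNeil standard error."""
    n1, n2 = len(scores_pos), len(scores_neg)
    ranks = rankdata(np.concatenate([scores_pos, scores_neg]))
    # AUC = P(random diseased score > random non-diseased score); ties count 1/2.
    auc = (ranks[:n1].sum() - n1 * (n1 + 1) / 2) / (n1 * n2)
    q1 = auc / (2 - auc)
    q2 = 2 * auc ** 2 / (1 + auc)
    se = np.sqrt((auc * (1 - auc) + (n1 - 1) * (q1 - auc ** 2)
                  + (n2 - 1) * (q2 - auc ** 2)) / (n1 * n2))
    return auc, se

rng = np.random.default_rng(4)
pos = rng.normal(1.0, 1.0, 80)    # ratings for diseased subjects
neg = rng.normal(0.0, 1.0, 120)   # ratings for non-diseased subjects
print(auc_and_se(pos, neg))
```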
COST Action 231: Digital mobile radio towards future generation systems
  • E. Damosso
  • L. Correia