Conference Paper

Predicting Wireless Channel Quality by Means of Moving Averages and Regression Models

... Many research works have applied machine learning (ML) to channel quality prediction in Wi-Fi [16]: in [17], artificial neural networks (ANNs) were applied to synthetic data; in [18], [19] they were used on data collected from real devices; in [20], [21], [22] ML is used to predict the channel gain and/or the received signal strength. ANNs are computationally quite expensive, so [23] analyzed in depth the benefits provided by less CPU-hungry approaches like moving averages and regression models. That work showed that the exponential moving average (EMA) offers the best performance among non-ANN models and behaves almost as well as an ANN trained only on frame losses. ...
... The most recent value y_i given by (1) was taken as the prediction for the target, under the reasonable assumption that, although the two are displaced by T_f/2, it constitutes the best possible estimate. The adoption of predictive models based on linear and polynomial regression was analyzed in [23], where they showed poorer prediction accuracy than the EMA. ...
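As a concrete illustration of the EMA-based predictor described in these excerpts, here is a minimal Python sketch. The smoothing factor, the 0/1 encoding of frame outcomes, and the function names are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of EMA-based frame delivery ratio (FDR) estimation.
# The smoothing factor alpha, initial estimate, and outcome encoding
# (1 = delivered, 0 = lost) are illustrative assumptions.

def ema_update(previous: float, observation: float, alpha: float) -> float:
    """One EMA step: recent samples weigh more; older ones decay as (1-alpha)^k."""
    return alpha * observation + (1.0 - alpha) * previous

def predict_fdr(outcomes, alpha=0.05, initial=1.0):
    """Run the EMA over past frame outcomes; the most recent value is
    taken as the prediction of the FDR over the next interval."""
    estimate = initial
    for delivered in outcomes:
        estimate = ema_update(estimate, float(delivered), alpha)
    return estimate

# Example: a burst of losses drags the estimate down, then it recovers.
print(predict_fdr([1, 1, 0, 0, 0, 1, 1, 1, 1, 1]))
```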
... As analysed in a very preliminary way in [23], the linear combination of EMA models may offer better prediction accuracy than any one of them considered separately. In that paper, the sequence of α weights was statically selected as α_A = (α*/3, α*, 3α*) and the three models were equally weighted (λ(α_1) = λ(α_2) = λ(α_3) = 1/3). ...
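A sketch of that statically weighted combination might look as follows; the base smoothing factor α* = 0.05 and the initial estimates are assumptions made for illustration.

```python
# Sketch of the statically weighted combination of three EMA models
# described above: alphas (a*/3, a*, 3a*) with equal weights 1/3.
# The base rate alpha_star and initial estimates are assumptions.

def combined_ema_prediction(outcomes, alpha_star=0.05):
    alphas = (alpha_star / 3, alpha_star, 3 * alpha_star)
    weights = (1 / 3, 1 / 3, 1 / 3)          # lambda(alpha_i) = 1/3
    estimates = [1.0, 1.0, 1.0]              # one EMA state per model
    for delivered in outcomes:
        estimates = [a * float(delivered) + (1 - a) * e
                     for a, e in zip(alphas, estimates)]
    # The prediction is the lambda-weighted sum of the three EMA outputs.
    return sum(w * e for w, e in zip(weights, estimates))

print(combined_ema_prediction([1, 1, 0, 1, 0, 1, 1, 1]))
```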
... The EMA model is a smoothing technique designed to assign exponentially decreasing weights to past observations, which means that recent data points carry more influence than older ones and allows the EMA to effectively capture the most recent trends in the data. It has proven effective in real problems when compared with other techniques [18]. ...
... The EMA stands out as a straightforward yet effective technique for smoothing time series data, offering a versatile tool for estimating the current FDR in various applications. For further discussion on the EMA and its applications in time series analysis applied to wireless communication, see [18], [19]. ...
Preprint
Predicting the behavior of a wireless link in terms of, e.g., the frame delivery ratio, is a critical task for optimizing the performance of wireless industrial communication systems. This is because industrial applications are typically characterized by stringent dependability and end-to-end latency requirements, which are adversely affected by channel quality degradation. In this work, we studied two neural network models for Wi-Fi link quality prediction in dense indoor environments. Experimental results show that their accuracy outperforms conventional methods based on exponential moving averages, thanks to their ability to capture complex communication patterns, including the effects of shadowing and multipath propagation, which are particularly pronounced in industrial scenarios. This highlights the potential of neural networks for predicting spectrum behavior in challenging operating conditions, and suggests that they can be exploited to improve the determinism and dependability of wireless communications, fostering their adoption in industry.
... The rapid development of wireless communication technologies, notably fifth generation (5G) networks, has improved data transmission speeds, connectivity, and network performance [1]. However, the rapid growth of these networks demands accurate path loss (PL) prediction for efficient network design, spectrum allocation, and communication quality [2], [3]. Traditional empirical models for predicting PL often fail to capture the complexities of dynamic 5G environments due to their reliance on pre-established characteristics and assumptions, leading to inaccurate predictions and suboptimal network performance [4]-[7]. ...
Article
Accurate path loss prediction is vital for optimizing 5G wireless network performance, particularly in dense environments. This study investigates the effectiveness of deep learning approaches, specifically Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Long Short-Term Memory networks (LSTMs), in predicting path loss in 5G. Specifically, key factors influencing path loss prediction, including frequency, distance, and path characteristics, are examined. A comprehensive evaluation using a simulated dataset with varied communication scenarios reveals that RNNs outperform CNNs and LSTMs, achieving an R-squared value of 0.9532, a Mean Absolute Error (MAE) of 4.4558, a Mean Squared Error (MSE) of 31.3522, and a Root Mean Squared Error (RMSE) of 5.5993. These results demonstrate significant improvements of ML-based models over traditional empirical models and highlight the superiority of RNNs in capturing complex temporal dependencies when weather conditions are included as input features. This study contributes to the understanding of deep learning-based path loss prediction, providing valuable insights for the optimal design and planning of 5G networks. Comparative analysis of CNNs, RNNs, and LSTMs underscores the importance of model selection in achieving accurate predictions.
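For reference, the error metrics quoted in this abstract are conventionally computed as in the short sketch below; the sample arrays are illustrative only and do not come from the study's dataset.

```python
import numpy as np

# Conventional computation of the regression metrics reported above
# (MAE, MSE, RMSE, R^2); the sample arrays are illustrative only.
y_true = np.array([110.2, 95.7, 128.4, 102.9])   # measured path loss (dB)
y_pred = np.array([108.9, 97.1, 125.0, 104.2])   # model predictions (dB)

mse = np.mean((y_true - y_pred) ** 2)
rmse = np.sqrt(mse)
mae = np.mean(np.abs(y_true - y_pred))
r2 = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
print(f"MAE={mae:.3f}  MSE={mse:.3f}  RMSE={rmse:.3f}  R^2={r2:.4f}")
```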
Conference Paper
Full-text available
Industrial communication systems provide deterministic and reliable communication between various industrial components. In the past several decades, different communication technologies (Fieldbus, Real-Time Ethernet (RTE)) were used to achieve such determinism. Recently, Time-Sensitive Networking (TSN) has been utilized in industrial environments to support end-to-end low-latency deterministic communication by providing mechanisms for accurate time synchronization, traffic scheduling/shaping, and reliability. With many use cases requiring portability and seamless mobility, such features are being developed for wireless networks as well, expanding time-sensitive communication to the wireless domain. The aim of wireless TSN is to provide wired TSN-like features, achieving wired-wireless interoperability and flattening the automation system pyramid. In this paper, we present an integration between wireless TSN and PROFINET. We show that safety-related applications can be supported seamlessly, providing deterministic communication and reliability under best-effort traffic load in the wireless network. The solution is evaluated in terms of the achieved end-to-end latency and the probability of failure per hour of the fail-safe communication. It is shown that, by using wireless time-sensitive networking with dedicated time slots per traffic flow, a safety integrity level up to grade 4 can be achieved.
Article
Full-text available
The lifetime of motes in wireless sensor networks can be extended by decreasing the energy spent for communication. Approaches like time slotted channel hopping pursue this goal by performing frame exchanges according to a predefined schedule, which helps reduce the duty cycle. Unfortunately, whenever the receiving radio interface is active but nobody in the network is transmitting, idle listening occurs. If the traffic pattern is known in advance, as in the relevant case of periodic sensing, proactive reduction of idle listening (PRIL) noticeably lowers energy waste by disabling receivers when no frames are expected for them. Optimal PRIL operation demands that, at any time, the transmitter and receiver sides of a link have a coherent view of its state (either enabled or disabled). However, this is not ensured in the presence of acknowledgment frame losses. This paper presents and analyzes some strategies to cope with such events. An extensive experimental campaign has been carried out through discrete event simulation to determine what consequences the above errors may have from both a functional and a performance viewpoint. Results show that, although no strategy is optimal in all circumstances, different solutions can be profitably adopted depending on the specific operating conditions.
Article
Full-text available
Wireless local area networks (WLANs) empowered by IEEE 802.11 (Wi-Fi) hold a dominant position in providing Internet access thanks to their freedom of deployment and configuration as well as the existence of affordable and highly interoperable devices. The Wi-Fi community is currently deploying Wi-Fi 6 and developing Wi-Fi 7, which will bring higher data rates, better multi-user and multi-AP support, and, most importantly, improved configuration flexibility. These technical innovations, including the plethora of configuration parameters, are making next-generation WLANs exceedingly complex as the dependencies between parameters and their joint optimization usually have a non-linear impact on network performance. The complexity is further increased in the case of dense deployments and coexistence in shared bands. While classical optimization approaches fail in such conditions, machine learning (ML) is able to handle complexity. Much research has been published on using ML to improve Wi-Fi performance and solutions are slowly being adopted in existing deployments. In this survey, we adopt a structured approach to describe the various Wi-Fi areas where ML is applied. To this end, we analyze over 250 papers in the field, providing readers with an overview of the main trends. Based on this review, we identify specific open challenges and provide general future research directions.
Article
Full-text available
Devices in wireless sensor networks are typically powered by batteries, which must last as long as possible to reduce both the total cost of ownership and potentially pollutant wastes when disposed of. By lowering the duty cycle to the bare minimum, time slotted channel hopping manages to achieve very low power consumption, which makes it a very interesting option for saving energy, e.g., at the perception layer of the Internet of Things. In this paper, a mechanism based on probabilistic blacklisting is proposed for such networks, which permits power consumption to be lowered further. In particular, channels suffering from non-negligible disturbance may be skipped based on the perceived quality of communication, so as to increase reliability and decrease the likelihood that retransmissions have to be performed. The only downside of this approach is that the transmission latency may grow, but this is mostly irrelevant for systems where the sampling rates are low enough.
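One possible reading of such a mechanism is sketched below: a channel is skipped with a probability that grows as its perceived quality drops. The quality metric (an EMA of delivery outcomes) and the quality-to-probability mapping are assumptions made for illustration, not the scheme defined in the article.

```python
import random

# Illustrative sketch only: a channel is skipped with a probability that
# grows as its perceived quality drops. The quality metric (an EMA of
# delivery outcomes) and the mapping are assumptions, not the article's.

quality = {ch: 1.0 for ch in range(16)}      # per-channel delivery-ratio EMA

def update_quality(ch: int, delivered: bool, alpha: float = 0.1) -> None:
    quality[ch] = alpha * float(delivered) + (1 - alpha) * quality[ch]

def pick_channel(hop_sequence, slot: int):
    ch = hop_sequence[slot % len(hop_sequence)]
    if random.random() < 1.0 - quality[ch]:  # poor channel, likelier skip
        return None                          # blacklisted for this slot
    return ch
```

Skipping a slot (returning None) defers the transmission, which is consistent with the latency growth the abstract mentions as the price of higher reliability.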
Article
Full-text available
Artificial intelligence is one of the key technologies behind the Industry 4.0 (r)evolution. It can be profitably employed in a variety of different application contexts and with different goals, most of which are characterized by the fact that reliable models for some parts of the involved systems either do not exist or are unavailable. In this paper we tried to exploit artificial neural networks to predict the quality of the transmission channel in Wi-Fi better than the techniques employed in conventional adaptive solutions permit. Applicability of such an approach is quite broad, but we believe that the main intended goal is to improve communication dependability and system resilience. Preliminary results highlighted that artificial neural networks show higher prediction accuracy, and there is still extensive room for improvements.
Article
Full-text available
Industry 5.0 is regarded as the next industrial evolution; its objective is to leverage the creativity of human experts in collaboration with efficient, intelligent and accurate machines, in order to obtain resource-efficient and user-preferred manufacturing solutions compared to Industry 4.0. Numerous promising technologies and applications are expected to assist Industry 5.0 in increasing production and delivering customized products in a spontaneous manner. To provide a very first discussion of Industry 5.0, in this paper we aim to offer a survey-based tutorial on its potential applications and supporting technologies. We first introduce several new concepts and definitions of Industry 5.0 from the perspective of different industry practitioners and researchers. We then discuss in detail the potential applications of Industry 5.0, such as intelligent healthcare, cloud manufacturing, supply chain management and manufacturing production. Subsequently, we discuss some supporting technologies for Industry 5.0, such as edge computing, digital twins, collaborative robots, the Internet of Everything, blockchain, and 6G and beyond networks. Finally, we highlight several research challenges and open issues that should be further developed to realize Industry 5.0.
Article
Full-text available
The IEEE 802.11p standard was specifically developed to define vehicular communications requirements and support cooperative intelligent transport systems. In such an environment, reliable channel estimation is considered a major critical challenge for ensuring system performance due to the extremely time-varying characteristic of vehicular channels. The channel estimation of IEEE 802.11p is preamble based, which becomes inaccurate in high-mobility scenarios. The major challenge is to track the channel variations over the course of a packet while adhering to the standard specifications. The motivation behind this paper is to overcome this issue by proposing a novel deep learning based channel estimation scheme for IEEE 802.11p that optimizes the use of deep neural networks (DNNs) to accurately learn the statistics of the spectral temporal averaging (STA) channel estimates and to track their changes over time. Simulation results demonstrate that the proposed channel estimation scheme, STA-DNN, significantly outperforms classical channel estimators in terms of bit error rate. The proposed STA-DNN architectures also achieve better estimation performance than the recently proposed auto-encoder DNN based channel estimation, with a decrease in computational complexity of at least 55.74%.
Article
Full-text available
By adapting transmission parameters such as the constellation size, coding rate, and transmit power to instantaneous channel conditions, adaptive wireless communications can potentially achieve great performance. To realize this potential, accurate channel state information (CSI) is required at the transmitter. However, unless the mobile speed is very low, the obtained CSI quickly becomes outdated due to the rapid channel variation caused by multi-path fading. Since outdated CSI has a severely negative impact on a wide variety of adaptive transmission systems, prediction of future channel samples is of great importance. The traditional stochastic methods, modeling a time-varying channel as an autoregressive process or as a set of propagation parameters, suffer from marginal prediction accuracy or unaffordable complexity. Taking advantage of its capability in time-series prediction, applying a recurrent neural network (RNN) to conduct channel prediction has recently gained much attention from both academia and industry. The aim of this article is to provide a comprehensive overview so as to shed light on the state of the art in this field. Starting from a review of two model-based approaches, the basic structure of a recurrent neural network, its training method, RNN-based predictors, and a prediction-aided system are presented. Moreover, the complexity and performance of predictors are comparatively illustrated by numerical results.
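To make the autoregressive baseline mentioned here concrete, the following sketch fits AR(p) coefficients to past channel samples by least squares and issues a one-step-ahead prediction; the model order and the synthetic fading trace are illustrative assumptions.

```python
import numpy as np

# Sketch of the classical autoregressive approach: fit AR(p) coefficients
# to past channel samples by least squares, then predict the next sample.
# The order p and the synthetic data are illustrative assumptions.

def ar_fit_predict(samples, p=4):
    x = np.asarray(samples, dtype=float)
    # Regression: x[n] ~ a1*x[n-1] + a2*x[n-2] + ... + ap*x[n-p]
    rows = np.array([x[n - p:n][::-1] for n in range(p, len(x))])
    targets = x[p:]
    coeffs, *_ = np.linalg.lstsq(rows, targets, rcond=None)
    return coeffs @ x[-p:][::-1]          # one-step-ahead prediction

t = np.arange(200)
channel = np.cos(0.1 * t) + 0.05 * np.random.randn(200)  # toy fading trace
print(ar_fit_predict(channel))
```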
Article
Full-text available
Internet of Things (IoT) is an emerging domain that promises ubiquitous connection to the Internet, turning common objects into connected devices. The IoT paradigm is changing the way people interact with things around them. It paves the way to creating pervasively connected infrastructures to support innovative services and promises better flexibility and efficiency. Such advantages are attractive not only for consumer applications, but also for the industrial domain. Over the last few years, we have been witnessing the IoT paradigm making its way into the industry marketplace with purposely designed solutions. In this paper, we clarify the concepts of IoT, Industrial IoT, and Industry 4.0. We highlight the opportunities brought in by this paradigm shift as well as the challenges for its realization. In particular, we focus on the challenges associated with the need of energy efficiency, real-time performance, coexistence, interoperability, and security and privacy. We also provide a systematic overview of the state-of-the-art research efforts and potential research directions to solve Industrial IoT challenges.
Conference Paper
Full-text available
Wireless communication between a pair of nodes can suffer from self interference arising from multipath propagation reflecting off obstacles in the environment. In the event of a deep fade, caused by destructive interference, no signal power is seen at the receiver, and so communication fails. Multipath fading can be overcome by shifting the location of one node, or by switching the communication carrier frequency. The effects of such actions can be characterized by the coherence length (L) and coherence bandwidth (B), respectively, given as the amount of shift necessary to transition from a deep fade to a region of average signal strength. Experimental results for a representative 2.4 GHz wireless link indicate L = 5.5 cm, while B can vary from 5 MHz at long ranges up to 15 MHz for short links. For wireless sensor networks (WSNs), typically operating under the IEEE 802.15.4 standard, multipath effects are therefore best handled by a channel hopping scheme in which successive communication attempts are widely spread across available carrier frequencies.
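A TSCH-style hopping rule of the kind this conclusion points to can be sketched as follows. The channel list matches IEEE 802.15.4 in the 2.4 GHz band, while the offset and the sequential sequence are illustrative assumptions; deployed systems typically use a pseudo-random permutation so that consecutive attempts land on widely separated frequencies.

```python
# Sketch of a TSCH-style channel-hopping rule: the channel used in a slot
# is a function of the absolute slot number (ASN), so successive attempts
# move across the available carriers. Channels 11..26 are the sixteen
# IEEE 802.15.4 channels in the 2.4 GHz band; the offset is illustrative.

HOP_SEQUENCE = list(range(11, 27))

def hop_channel(asn: int, channel_offset: int = 3) -> int:
    """Select the channel for the slot with absolute slot number asn."""
    return HOP_SEQUENCE[(asn + channel_offset) % len(HOP_SEQUENCE)]

print([hop_channel(asn) for asn in range(5)])   # [14, 15, 16, 17, 18]
```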
Article
Wireless Industry 4.0 applications typically have stringent latency and reliability requirements. Even though state-of-the-art Wi-Fi networks can reliably achieve single-digit millisecond latency, new emerging time-critical applications have requirements that current Wi-Fi cannot meet. In this paper, we present next generation Wi-Fi technologies and describe how they can be leveraged to enable three time-critical Industry 4.0 use cases: wireless industrial automation control, remote rendering in extended reality applications, and cooperative simultaneous localization and mapping using autonomous mobile robots in a factory plant.
Article
Industrial monitoring and control applications typically require real-time communications between a large number of nodes distributed over the plant. For the sake of network manageability and to reduce the overall network workload, wireless nodes are organized in clusters, which typically encompass neighboring nodes that frequently exchange data with each other. However, different clusters also cooperate to realize distributed applications and this raises the need for enabling communications between multiple clusters spread over large areas in the plant. This paper presents LoRaBLE, a long-range communication protocol that leverages the Long Range (LoRa) technology to provide inter-cluster communications over Bluetooth Low Energy networks with bounded delays, so as to meet the time constraints of real-time industrial traffic flows. The paper presents the design of LoRaBLE and a proof-of-concept implementation on a lab testbed made up of commercial-off-the-shelf devices.
Article
In this article, we describe a methodology and associated models to evaluate a time-sensitive collaborative robotics application enabled by wireless time-sensitive networking (WTSN) capabilities. We also present a method to configure WTSN scheduling to meet the application time budget and validate it in a realistic industrial use case. We detail the methodologies for implementing and characterizing the performance of key WTSN capabilities, namely time synchronization and time-aware scheduling, over an IEEE 802.11 based network. We deploy the WTSN capabilities with a collaborative robotic workcell consisting of two robotic arms, which emulate a material handling application known as machine tending. We further explore configurations and measurement methodologies to characterize the application performance of this use case and correlate it to the performance of the wireless network.
Article
This article identifies the advances, advantages, limitations, requirements and current methodologies in implementing the strategic Industry 4.0 (I4.0) initiative. It focuses mainly on research works dealing with production planning. To do so, it proposes a taxonomy of the principles of I4.0 design terms that contemplates the following classification aspects: interconnection/connectivity, decentralised decision making, technical assistance, the human factor, intelligence/awareness, interoperability, information transparency, technology, organisation, conceptual frameworks and production planning. It also presents the models, algorithms, heuristics and metaheuristics of the components used in relation to an I4.0 setting. Finally, a considerable number of reference conceptual frameworks are analysed, which allow the term I4.0 to be defined.
Article
The real and effective foundation of all new concepts dedicated to current advanced factories, as well as to the future digital ones, is the close cooperation of scattered applications in highly heterogeneous systems. Communication is the key enabling component, and all new approaches are inspired in practice by the demanding characteristics of industrial networks. These kinds of computer networks, together with new technologies derived from distant application fields, are the main technological means to accelerate the fast evolution of modern factory systems. Due to the various communication requirements coming from the plurality of structures, components and application contexts, communication subsystems must be increasingly heterogeneous. Let us say it clearly: this evolution cannot be stopped at this stage, no single universal solution is possible, and thinking about monogamous networking is a kind of dreamland. This paper is an analysis of the state of the art in heterogeneous networking in industry. It deeply investigates both wired and wireless technologies from the point of view of technological aspects and relevant key performance indicators, such as those related to dependability, and it contains a prospective estimation of future trends.
Article
In distributed control systems where devices are connected through Wi-Fi, direct access to low-level medium access control (MAC) operations may help applications meet their timing constraints. In particular, the ability to timely control single transmission attempts on air, by means of software programs running at the user space level, eases the implementation of mechanisms aimed at improving communication timeliness and reliability. Relevant examples are deterministic traffic scheduling, seamless channel redundancy, rate adaptation algorithms, and so on. In this paper, a novel architecture is defined, which we call software-defined MAC (SDMAC) and which in its current embodiment relies on conventional Linux PCs equipped with commercial Wi-Fi adapters. A preliminary SDMAC implementation on a real testbed and its experimental evaluation showed that integrating this paradigm in existing protocol stacks constitutes a viable option, whose performance suits a wide range of applications characterized by soft real-time requirements.
Chapter
This chapter provides a comprehensive introduction to channel prediction methods with an emphasis on neural network‐based prediction. It first briefly describes adaptive transmission systems using transmit antenna selection and opportunistic relaying as examples, followed by the impact of outdated channel state information (CSI) on the performance of adaptive transmission systems. The chapter provides a mathematical model to quantify the inaccuracy of outdated CSI and then uses the opportunistic relay selection system as an example to illustrate the impact of outdated CSI on the performance of adaptive transmission systems. It then reviews two kinds of classical prediction methods: parametric and autoregressive models. The chapter also details the principles of recurrent neural network‐based predictors applied from flat‐fading single‐antenna channels to frequency‐selective multi‐antenna channels, as well as their achievable performance and computational complexity.
Article
Accurately modeling and predicting wireless channel quality variations is essential for a number of networking applications, such as scheduling and improved video streaming over 4G LTE networks and bit rate adaptation for improved performance in WiFi networks. In this paper, we design DeepChannel, an encoder-decoder based sequence-to-sequence deep learning model that is capable of predicting future wireless signal strength variations based on past signal strength data. We consider two different versions of DeepChannel; the first and second versions use LSTM and GRU as their basic cell structure, respectively. In contrast to prior work that is primarily focused on designing models for particular network settings, DeepChannel is highly adaptable and can predict future channel conditions for different networks, sampling rates, mobility patterns, and communication standards. We compare the performance (i.e., the root mean squared error, mean absolute error and relative error of future predictions) of DeepChannel with respect to two baselines: i) linear regression, and ii) ARIMA, for multiple networks and communication standards. In particular, we consider 4G LTE, WiFi, WiMAX, an industrial network operating in the 5.8 GHz range, and Zigbee networks operating under varying levels of user mobility, and observe that DeepChannel provides significantly superior performance. Finally, we provide a detailed discussion of the key design decisions, including insights into hyper-parameter tuning and the applicability of our model in other networking scenarios.
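The encoder-decoder structure described here can be sketched compactly in PyTorch. The GRU variant is shown; the layer sizes, prediction horizon, and input shape are assumptions for illustration rather than the authors' configuration.

```python
import torch
import torch.nn as nn

# Compact sketch of an encoder-decoder (sequence-to-sequence) predictor in
# the spirit of DeepChannel, using GRU cells. Sizes and the horizon are
# illustrative assumptions, not the authors' configuration.

class Seq2SeqChannel(nn.Module):
    def __init__(self, hidden=64, horizon=10):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.decoder = nn.GRUCell(1, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, past):                  # past: (batch, T, 1) RSSI history
        _, h = self.encoder(past)             # summarize the history into h
        h = h.squeeze(0)
        step = past[:, -1, :]                 # seed decoder with last sample
        outputs = []
        for _ in range(self.horizon):         # autoregressive decoding
            h = self.decoder(step, h)
            step = self.head(h)
            outputs.append(step)
        return torch.stack(outputs, dim=1)    # (batch, horizon, 1)

model = Seq2SeqChannel()
print(model(torch.randn(2, 50, 1)).shape)     # torch.Size([2, 10, 1])
```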
Article
The paper addresses state estimation for clock synchronization in the presence of factors affecting the quality of synchronization. Examples are temperature variations and delay asymmetry. These working conditions make synchronization a challenging problem in many wireless environments, such as Wireless Sensor Networks or WiFi. Dynamic state estimation is investigated as it is essential to overcome non-stationary noises. The two-way timing message exchange synchronization protocol has been taken as a reference. No a-priori assumptions are made on the stochastic environments and no temperature measurement is executed. The algorithms are unequivocally specified offline, without the need to tune parameters depending on the working conditions. The presented approach proves to be robust to a large set of temperature variations, different delay distributions and levels of asymmetry in the transmission path.
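For context, the two-way timing message exchange referenced here yields the classical offset and delay estimates below (standard protocol algebra, not the estimator proposed in the paper), where t_1 and t_4 are the master-side send/receive timestamps and t_2 and t_3 the slave-side ones. Any asymmetry between the forward and return path delays biases the offset estimate by half that difference, which is precisely the non-ideality the paper targets.

```latex
\hat{\theta} = \frac{(t_2 - t_1) - (t_4 - t_3)}{2},
\qquad
\hat{d} = \frac{(t_2 - t_1) + (t_4 - t_3)}{2}
```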
Article
The adoption of wireless communications and, in particular, Wi-Fi at the lowest level of the factory automation hierarchy has not increased as fast as expected so far, mainly because of serious issues concerning determinism. Actually, besides the random access scheme, disturbance and interference prevent reliable communication over the air and, as a matter of fact, make wireless networks unable to support distributed real-time control applications properly. Several papers that have recently appeared in the literature suggest that diversity could be leveraged to overcome this limitation effectively. In this paper, a reference architecture is introduced, which describes how seamless link-level redundancy can be applied to Wi-Fi. The framework is general enough to serve as a basis for future protocol enhancements, and also includes two optimizations aimed at improving the quality of wireless communication by avoiding unnecessary replicated transmissions. Some relevant solutions have been analyzed by means of a thorough simulation campaign, in order to highlight their benefits when compared with conventional Wi-Fi. Results show that both packet losses and network latencies improve noticeably.
Article
This paper addresses the problem of time offset synchronization in the presence of temperature variations, which lead to a non-Gaussian environment. In this context, regular Kalman filtering proves to be suboptimal. A functional optimization approach is developed in order to approximate optimal estimation of the clock offset between master and slave. To this aim, a numerical approximation based on standard neural network training is provided. Other heuristics are provided as well, based on spline regression. An extensive performance evaluation highlights the benefits of the proposed techniques, which can be easily generalized to several clock synchronization protocols and operating environments.
Conference Paper
Myanmar is an agricultural country and its economy is largely based upon crop productivity. The occurrence of extreme precipitation variability may significantly reduce crop yields and lead to extensive crop losses. Thus, rainfall prediction is an important issue in Myanmar. Regression has long been a major data-analysis tool for prediction in many scientific fields, such as the behavioral sciences, social sciences, biological sciences, medical sciences, psychometrics and econometrics. Multi-variable polynomial regression (MPR) is a statistical regression method used to describe complex nonlinear input-output relationships. In this paper, MPR is applied to implement a precipitation forecast model for Myanmar. Myanmar receives its annual rainfall during the summer monsoon season, which starts in June and ends in September. The model outputs station-wide monthly and annual rainfall amounts during the summer monsoon season. The results of the proposed model are compared with those produced by a multiple linear regression (MLR) model. From the experimental results, it is observed that the MPR method achieves closer agreement between actual and estimated rainfall than MLR.
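The MPR-versus-MLR comparison described here can be reproduced in miniature with scikit-learn; the synthetic predictors and the polynomial degree below are illustrative assumptions, not the station data used in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Sketch of multi-variable polynomial regression (MPR) versus plain
# multiple linear regression (MLR); data and degree are illustrative.

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))                    # e.g. humidity, temp, pressure
y = 3 * X[:, 0] ** 2 + X[:, 1] * X[:, 2] + 0.1 * rng.standard_normal(200)

mlr = LinearRegression().fit(X, y)
mpr = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
# The nonlinear terms let MPR fit this data far better than MLR.
print(f"MLR R^2 = {mlr.score(X, y):.3f}, MPR R^2 = {mpr.score(X, y):.3f}")
```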
Article
Presented here is a unified approach to evaluating the error-rate performance of digital communication systems operating over a generalized fading channel. What enables the unification is the recognition of the desirable form of alternate representations of the Gaussian and Marcum Q-functions that are characteristic of error-probability expressions for coherent, differentially coherent, and noncoherent forms of detection. It is shown that, in the large majority of cases, these error-rate expressions can be put in the form of a single integral with finite limits and an integrand composed of elementary functions, thus readily enabling numerical evaluation.
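The alternate representation in question is, for the Gaussian Q-function, Craig's finite-limit integral (a standard identity, reproduced here for reference); replacing the conventional infinite-limit form with it is what makes averaging over fading distributions tractable:

```latex
Q(x) = \frac{1}{\pi} \int_0^{\pi/2}
\exp\!\left( -\frac{x^2}{2\sin^2\theta} \right) \mathrm{d}\theta,
\qquad x \ge 0
```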