April 2025
·
13 Reads
Physical Communication
February 2025
The advance of topological interference management (TIM) has been one of the driving forces of recent developments in network information theory. However, state-of-the-art coding schemes for TIM are usually handcrafted for specific families of network topologies, relying critically on experts' domain knowledge and sophisticated treatments. The lack of systematic and automatic generation of solutions inevitably restricts their wider application to wireless communication systems, owing to the limited generalizability of the coding schemes across network configurations. To address this issue, this work makes the first attempt to revisit topological interference alignment (IA) from a novel learning-to-code perspective. Specifically, we recast the one-to-one and subspace IA conditions as vector assignment policies and propose a unifying learning-to-code on graphs (LCG) framework by leveraging graph neural networks (GNNs) for capturing topological structures and reinforcement learning (RL) for decision-making in IA beamforming vector assignment. Interestingly, the proposed LCG framework is capable of recovering known one-to-one scalar/vector IA solutions for a significantly wider range of network topologies, and more remarkably of discovering new subspace IA coding schemes for multiple-antenna cases that are challenging to handcraft. Extensive experiments demonstrate that the LCG framework is an effective way to automatically produce systematic coding solutions to TIM instances with arbitrary network topologies, and at the same time the underlying learning algorithm is efficient in online inference time and possesses excellent generalizability and transferability for practical deployment.
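The one-to-one IA condition described above can be viewed as assigning vector indices on a conflict graph so that interfering links never share an index. As a point of reference for what the GNN+RL policy learns automatically, here is a minimal hand-crafted greedy baseline (the function name and the toy topology are illustrative, not from the paper):

```python
def greedy_ia_assignment(conflict_edges, num_links):
    """Greedily assign beamforming-vector indices so that no two
    conflicting (interfering) links share the same index -- the classic
    graph-coloring view of one-to-one scalar IA. The LCG framework in
    the abstract learns such assignment policies instead of relying on
    a fixed heuristic like this one."""
    neighbors = {i: set() for i in range(num_links)}
    for u, v in conflict_edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    colors = {}
    for link in range(num_links):
        used = {colors[n] for n in neighbors[link] if n in colors}
        colors[link] = next(c for c in range(num_links) if c not in used)
    return colors

# Toy topology: link 0 interferes with links 1 and 2, which do not
# interfere with each other, so links 1 and 2 may reuse an index.
assignment = greedy_ia_assignment([(0, 1), (0, 2)], 3)
```

The greedy heuristic is topology-specific and suboptimal in general, which is precisely the limitation the learning-to-code approach targets.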
February 2025
·
14 Reads
Downlink channel estimation remains a significant bottleneck in reconfigurable intelligent surface-assisted cell-free multiple-input multiple-output communication systems. Conventional approaches primarily rely on centralized deep learning methods to estimate the high-dimensional and complex cascaded channels. These methods require data aggregation from all users for centralized model training, leading to excessive communication overhead and significant data privacy concerns. Additionally, the large size of local learning models imposes heavy computational demands on end users, necessitating strong computational capabilities that most commercial devices lack. To address the aforementioned challenges, a coalition-formation-guided heterogeneous federated learning (FL) framework is proposed. This framework leverages coalition formation to guide the formation of heterogeneous FL user groups for efficient channel estimation. Specifically, by utilizing a distributed deep reinforcement learning (DRL) approach, each FL user intelligently and independently decides whether to join or leave a coalition, aiming at improving channel estimation accuracy while reducing local model size and computational costs for end users. Moreover, to accelerate the DRL-FL convergence process and reduce computational burdens on end users, a transfer learning method is introduced. This method incorporates both received reference signal power and distance similarity metrics, based on the observation that nodes with similar distances to the base station and comparable received signal power are likely to experience similar channel fading. Extensive experiments reveal that, compared with the benchmarks, the proposed framework significantly reduces the computational overhead of end users by 16%, enhances data privacy, and improves channel estimation accuracy by 20%.
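Within one coalition, the FL aggregation step itself is standard. A minimal sketch of sample-weighted federated averaging for a single model layer (the paper's DRL-guided coalition formation is not modeled here; names are illustrative):

```python
import numpy as np

def coalition_fedavg(local_weights, sample_counts):
    """Sample-weighted federated averaging of local model weights
    within one coalition: each user's weights contribute in proportion
    to its local sample count. This is only the aggregation step, not
    the paper's full coalition-formation procedure."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))

# Two users in a coalition: user 0 holds 1 sample, user 1 holds 3,
# so user 1's weights receive three times the weight of user 0's.
w = coalition_fedavg([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [1, 3])
```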
February 2025
·
9 Reads
This paper proposes a correlation-based three-stage channel estimation strategy with low pilot overhead for reconfigurable intelligent surface (RIS)-aided millimeter wave (mmWave) multi-user (MU) MIMO systems, in which both users and base station (BS) are equipped with a hybrid RF architecture. In Stage I, all users jointly transmit pilots and recover the uncompressed received signals to estimate the angle of arrival (AoA) at the BS using the discrete Fourier transform (DFT). Based on the observation that the overall cascaded MIMO channel can be decomposed into multiple sub-channels, the cascaded channel for a typical user is estimated in Stage II. Specifically, using the invariance of angles and the linear correlation of gains related to different cascaded subchannels, we use compressive sensing (CS), least squares (LS), and a one-dimensional search to estimate the Angles of Departure (AoDs), based on which the overall cascaded channel is obtained. In Stage III, the remaining users independently transmit pilots to estimate their individual cascaded channel with the same approach as in Stage II, which exploits the equivalent common RIS-BS channel obtained in Stage II to reduce the pilot overhead. In addition, the hybrid combining matrix and the RIS phase shift matrix are designed to reduce the noise power, thereby further improving the estimation performance. Simulation results demonstrate that the proposed algorithm can achieve high estimation accuracy especially when the number of antennas at the users is small, and reduce pilot overhead by more than five times compared with the existing benchmark approach.
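Stage I's DFT-based AoA estimation can be sketched for the simplest single-path, fully digital case (the hybrid RF architecture and multi-user aspects are omitted; the function name and the 32-element setup are illustrative, not the paper's configuration):

```python
import numpy as np

def dft_aoa_estimate(y, spacing_wl=0.5):
    """Estimate the dominant angle of arrival from one snapshot of a
    uniform linear array by peak-picking a zero-padded spatial DFT.
    spacing_wl is the element spacing in wavelengths."""
    N = len(y)
    spectrum = np.abs(np.fft.fft(y, 4 * N))   # zero-pad for finer bins
    k = int(np.argmax(spectrum))              # strongest spatial bin
    f = k / (4 * N)                           # spatial frequency in [0, 1)
    if f >= 0.5:                              # wrap to [-0.5, 0.5)
        f -= 1.0
    # spatial frequency f = spacing_wl * sin(theta)
    return np.degrees(np.arcsin(f / spacing_wl))

# Single path impinging at 20 degrees on a 32-element half-wavelength ULA.
theta = np.radians(20.0)
n = np.arange(32)
y = np.exp(2j * np.pi * 0.5 * n * np.sin(theta))
est = dft_aoa_estimate(y)
```

The DFT grid limits resolution to roughly one bin; the paper refines such coarse estimates with compressive sensing, least squares, and a one-dimensional search in the later stages.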
January 2025
·
7 Reads
Radio Frequency Fingerprinting Identification (RFFI) is a lightweight physical layer identity authentication technique. It identifies the radio-frequency device by analyzing the signal feature differences caused by the inevitable minor hardware impairments. However, existing RFFI methods based on closed-set recognition struggle to detect unknown unauthorized devices in open environments. Moreover, the feature interference among legitimate devices can further compromise identification accuracy. In this paper, we propose a joint radio frequency fingerprint prediction and siamese comparison (JRFFP-SC) framework for open set recognition. Specifically, we first employ a radio frequency fingerprint prediction network to predict the most probable category result. Then a detailed comparison among the test sample's features with registered samples is performed in a siamese network. The proposed JRFFP-SC framework eliminates inter-class interference and effectively addresses the challenges associated with open set identification. The simulation results show that our proposed JRFFP-SC framework can achieve excellent rogue device detection and generalization capability for classifying devices.
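The siamese comparison stage reduces, in its simplest reading, to a nearest-registered-feature test with a rejection threshold for unknown devices. A minimal sketch under that reading (feature vectors and the threshold value are illustrative, not from the paper):

```python
import numpy as np

def open_set_decide(test_feat, registered_feats, threshold=0.5):
    """Compare a test fingerprint feature against registered device
    features; if even the best (smallest) distance exceeds the
    threshold, flag the transmitter as an unknown/rogue device (-1),
    otherwise return the index of the matched registered device."""
    dists = [np.linalg.norm(test_feat - f) for f in registered_feats]
    best = int(np.argmin(dists))
    return best if dists[best] <= threshold else -1

registered = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
known = open_set_decide(np.array([0.9, 0.1]), registered)   # near device 0
rogue = open_set_decide(np.array([5.0, 5.0]), registered)   # far from all
```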
January 2025
·
3 Reads
Large language models (LLMs) have achieved remarkable success across a wide range of tasks, particularly in natural language processing and computer vision. This success naturally raises an intriguing yet unexplored question: Can LLMs be harnessed to tackle channel state information (CSI) compression and feedback in massive multiple-input multiple-output (MIMO) systems? Efficient CSI feedback is a critical challenge in next-generation wireless communication. In this paper, we pioneer the use of LLMs for CSI compression, introducing a novel framework that leverages the powerful denoising capabilities of LLMs -- capable of error correction in language tasks -- to enhance CSI reconstruction performance. To effectively adapt LLMs to CSI data, we design customized pre-processing, embedding, and post-processing modules tailored to the unique characteristics of wireless signals. Extensive numerical results demonstrate the promising potential of LLMs in CSI feedback, opening up possibilities for this research direction.
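Adapting an LLM to complex-valued CSI requires mapping the channel matrix into real-valued token vectors. A toy version of such pre-processing (the paper's learned embedding modules are more elaborate; the function name and patch size are illustrative):

```python
import numpy as np

def csi_to_tokens(H, patch):
    """Turn a complex CSI matrix into real-valued token vectors by
    stacking real and imaginary parts and slicing the flattened result
    into fixed-size patches (one patch per token)."""
    x = np.stack([H.real, H.imag], axis=-1).reshape(-1)  # interleave re/im
    x = x[: (len(x) // patch) * patch]                   # drop remainder
    return x.reshape(-1, patch)                          # (tokens, patch)

# A 4 x 8 complex CSI matrix yields 64 real values -> four 16-dim tokens.
H = np.ones((4, 8)) + 1j * np.zeros((4, 8))
tokens = csi_to_tokens(H, 16)
```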
January 2025
·
4 Reads
Reconfigurable intelligent surfaces (RISs) have been recognized as a revolutionary technology for future wireless networks. However, RIS-assisted communications have to continuously tune phase-shifts relying on accurate channel state information (CSI) that is generally difficult to obtain due to the large number of RIS channels. The joint design of CSI acquisition and subsequent RIS phase-shifts remains a significant challenge in dynamic environments. In this paper, we propose a diffusion-enhanced decision Transformer (DEDT) framework consisting of a diffusion model (DM) designed for efficient CSI acquisition and a decision Transformer (DT) utilized for phase-shift optimization. Specifically, we first propose a novel DM mechanism, i.e., conditional imputation based on a denoising diffusion probabilistic model, for rapidly acquiring real-time full CSI by exploiting the spatial correlations inherent in wireless channels. Then, we optimize beamforming schemes based on the DT architecture, which pre-trains on historical environments to establish a robust policy model. Next, we incorporate a fine-tuning mechanism to ensure rapid beamforming adaptation to new environments, eliminating the retraining process that is imperative in conventional reinforcement learning (RL) methods. Simulation results demonstrate that DEDT can enhance the efficiency and adaptability of RIS-aided communications under fluctuating channel conditions compared to state-of-the-art RL methods.
January 2025
·
31 Reads
Integrated Sensing and Communications (ISAC) is expected to play a pivotal role in future 6G networks. To maximize time-frequency resource utilization, 6G ISAC systems must exploit data payload signals, which are inherently random, for both communication and sensing tasks. This paper provides a comprehensive analysis of the sensing performance of such communication-centric ISAC signals, with a focus on modulation and pulse shaping design to reshape the statistical properties of their auto-correlation functions (ACFs), thereby improving the target ranging performance. We derive a closed-form expression for the expectation of the squared ACF of random ISAC signals, considering arbitrary modulation bases and constellation mappings within the Nyquist pulse shaping framework. The structure is metaphorically described as an "iceberg hidden in the sea", where the "iceberg" represents the squared mean of the ACF of random ISAC signals, which is determined by the pulse shaping filter, and the "sea level" characterizes the corresponding variance, caused by the randomness of the data payload. Our analysis shows that, for QAM/PSK constellations with Nyquist pulse shaping, Orthogonal Frequency Division Multiplexing (OFDM) achieves the lowest ranging sidelobe level across all lags. Building on these insights, we propose a novel Nyquist pulse shaping design to enhance the sensing performance of random ISAC signals. Numerical results validate our theoretical findings, showing that the proposed pulse shaping significantly reduces ranging sidelobes compared to conventional root-raised cosine (RRC) pulse shaping, thereby improving the ranging performance.
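The decomposition behind the iceberg metaphor, E[|ACF|²] = |E[ACF]|² + Var[ACF] at each lag, can be checked numerically for random QPSK-modulated OFDM blocks. A simplified sketch ignoring pulse shaping (block length, trial count, and the aperiodic ACF convention are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 64, 2000

# Random unit-power QPSK symbols on N subcarriers -> time-domain OFDM blocks.
acfs = []
for _ in range(trials):
    sym = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N) / np.sqrt(2)
    x = np.fft.ifft(sym) * np.sqrt(N)              # unit-power OFDM block
    acfs.append(np.correlate(x, x, mode="full") / N)  # aperiodic ACF, all lags
acfs = np.array(acfs)                              # (trials, 2N-1)

iceberg = np.abs(acfs.mean(axis=0)) ** 2           # squared mean ("iceberg")
sea = acfs.var(axis=0)                             # variance ("sea level")
total = (np.abs(acfs) ** 2).mean(axis=0)           # E[|ACF|^2] per lag
```

At zero lag (index N-1 of the full correlation) the ACF equals the block power exactly, so the iceberg term is 1 and the sea-level term vanishes; at nonzero lags the data randomness raises the sea level.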
January 2025
·
5 Reads
Rainfall impacts daily activities and can lead to severe hazards such as flooding. Traditional rainfall measurement systems often lack granularity or require extensive infrastructure. While the attenuation of electromagnetic waves due to rainfall is well-documented for frequencies above 10 GHz, sub-6 GHz bands are typically assumed to experience negligible effects. However, recent studies suggest measurable attenuation even at these lower frequencies. This study presents the first channel state information (CSI)-based measurement and analysis of rainfall attenuation at 2.8 GHz. The results confirm the presence of rain-induced attenuation at this frequency, although classification remains challenging. The attenuation follows a power-law decay model, with the rate of attenuation decreasing as rainfall intensity increases. Additionally, rainfall onset significantly increases the delay spread. Building on these insights, we propose RainGaugeNet, the first CSI-based rainfall classification model that leverages multipath and temporal features. Using only 20 seconds of CSI data, RainGaugeNet achieved over 90% classification accuracy in line-of-sight scenarios and over 85% in non-line-of-sight scenarios, significantly outperforming state-of-the-art methods.
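A power-law attenuation model A = a·R^b can be fitted by linear least squares in log-log space. A sketch on synthetic, noise-free data (the coefficients below are placeholders, not the measured values from the study):

```python
import numpy as np

def fit_power_law(rain_rate, attenuation):
    """Fit A = a * R**b by ordinary least squares on log A vs. log R:
    the slope gives the exponent b, the intercept gives log a."""
    log_r, log_a = np.log(rain_rate), np.log(attenuation)
    b, intercept = np.polyfit(log_r, log_a, 1)
    return np.exp(intercept), b

R = np.array([1.0, 2.0, 5.0, 10.0, 20.0])   # rain rates (e.g. mm/h)
A = 0.3 * R ** 0.6                          # synthetic attenuation samples
a, b = fit_power_law(R, A)
```

An exponent b < 1, as in this synthetic example, reproduces the abstract's qualitative observation that the rate of attenuation growth decreases as rainfall intensity increases.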
January 2025
IEEE Wireless Communications Letters
Deep reinforcement learning (DRL) has been widely applied to dynamic resource allocation for the unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) networks. However, it is often challenging to generalize a well-trained DRL model to new MEC scenarios once the system constraints change, since retraining DRL models from scratch is time and energy consuming. In this letter, we jointly optimize the UAV’s trajectory and computing resource allocation for maximizing the fairness-based throughput under the battery capacity and quality of service (QoS) constraints. The sequential optimization problem is formulated as a constrained Markov decision process (CMDP) and solved via the constrained DRL algorithms. To generalize the optimized resource allocation policies across various energy and QoS constraints, we propose an offline pre-training and online fine-tuning based constrained Decision Transformer (CDT) framework. In particular, the CDT is first pre-trained on the training samples collected by the constrained DRL algorithm offline, and then fine-tuned online for rapid adaptation to the unseen constraint thresholds. Simulation results show that compared with the benchmark DRL algorithms, the CDT is capable of effectively improving the fairness-based throughput under the battery capacity and QoS constraints, and demonstrates rapid convergence when constraints change.
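The letter's exact fairness-based throughput objective is not given in the abstract; one common fairness metric that such objectives build on is Jain's index, sketched here as an illustrative (not the authors') definition:

```python
def jains_fairness(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    Equals 1.0 when all users get identical throughput and approaches
    1/n when a single user captures everything. Used here purely as a
    plausible stand-in for a fairness-based objective."""
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(t * t for t in throughputs))

equal = jains_fairness([1.0, 1.0, 1.0, 1.0])    # -> 1.0 (perfectly fair)
skewed = jains_fairness([4.0, 0.0, 0.0, 0.0])   # -> 0.25 (one user dominates)
```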
... This novel approach, as discussed in [11], introduces a paradigm shift by using orthogonal frequency division multiplexing (OFDM) signals to acquire the target's EM property and identify the material of the target. The integration of multiple base stations (BSs) enhances the performance and accuracy of EM property sensing, as explored in [12], where sensing algorithms and pilots are meticulously designed to optimize the sensing process. Additionally, diffusion models have been employed to refine EM property sensing in ISAC systems, offering a robust framework to accurately detect and interpret environmental EM characteristics [13]. ...
January 2025
IEEE Transactions on Wireless Communications
... Even if the known dataset covers a large number of identities, when an unseen source enters the communication network it will be misidentified as a known identity. Additionally, the softmax layer is often used as the decision layer for classification [10]. However, once the number of classes is fixed in softmax, it becomes non-expandable [8], leaving the trained model unable to perceive unseen identity data. ...
October 2024
... To address the challenges posed by traditional reinforcement learning's inefficiency in utilizing limited samples and adapting to new tasks, the development of the Decision Transformer (DT) represents a significant advance [22][23][24]. DT leverages the Transformer's understanding and generalization capabilities to handle complex decision problems. It recasts reinforcement learning as sequence modeling and, combined with the Transformer's self-attention mechanism, can effectively incorporate previous decisions when making new ones. ...
January 2025
IEEE Wireless Communications
... Existing codebook schemes, specifically Type I and Type II, are fundamentally based on the assumption of far-field planar wave propagation. This dependency on DFT-based processing leads to significant power diffusion or leakage within the angular power distribution, as presented in [11], [12]. Such power diffusion results in significant inaccuracies in channel estimation because the power of significant paths is not accurately captured, subsequently degrading the precoding performance. ...
January 2025
IEEE Communications Magazine
... This feedback process marginally reduces the error rate but at the same time introduces delay, and it may fail to predict rapid fluctuations in time-varying channel conditions [25]. Recent developments in machine learning and deep learning have demonstrated promising efficiency in addressing complex prediction problems across various applications [26][27][28]. GANs, in particular, have earned attention for their ability to generate realistic synthetic data by learning from real data [29]. This capability makes GANs well suited to modeling and predicting wireless channel conditions, which are frequently complex and stochastic in nature. ...
September 2024
... Jan et al. [34] discussed the challenges of AI interpretability in Industry 4.0, advocating for transparent and regulatory-compliant AI solutions. Liu et al. [35] and Deng et al. [36] emphasized the importance of integrating XAI into FL and blockchain networks to foster trust and transparency in industrial applications. However, standardized methodologies for deploying XAI in dynamic IIoT environments are lacking, and existing solutions often fail to balance computational efficiency with interpretability. ...
November 2024
Science China Information Sciences
... Despite achieving unprecedented improvements in spatial resolution and spectral efficiency, XL-MIMO also faces practical challenges such as expensive hardware cost and high energy expenditure [1], [5]. To address such issues, there has been an upsurge of interest in exploiting various sparse array architectures, including uniform sparse arrays [6]-[8] and non-uniform sparse arrays, such as modular, nested, and co-prime arrays [9]-[11]. Compared to the conventional compact array with neighboring elements separated by half a wavelength, sparse arrays can achieve a larger array aperture by configuring the antenna spacing larger than half a wavelength, without increasing the number of antenna elements. ...
December 2024
IEEE Wireless Communications Letters
... The RSRP image represents the received signal in the beam space, which motivates us to transform the AOA and AOD estimation problem in radio frequency into an object detection problem in images. We propose a deep learning-based framework for estimating, tracking, and identifying mmWave channel multipath parameters as well as distinguishing between LOS and NLOS paths, transforming the entire process into a universal computer vision problem [14]. The real measurement results indicate that the root mean square error (RMSE) of angle estimation is 2.3°. ...
August 2024
... In our study, we address this delay by proposing a robust and compact strategy to enhance transmission modules when KB updates are not available in real-time. More importantly, our method has been successfully implemented on a testbed [25], comprising software-defined radio (SDR) and embedded signal processing modules that emulate mobile communication devices. Compared to existing testbeds for semantic communication [26], our prototype is not only more portable but also emphasizes the lightweight and real-time capabilities of our approach. ...
August 2024
... This feature gives rise to an additional distance dimension hidden in near-field channels. In fact, by leveraging such a distance dimension, NISE can localize targets with limited bandwidth and a single antenna array [6]. ...
August 2024