Article

Abstract

Semantic Communication (SemCom), notable for ensuring quality of service by jointly optimizing source and channel coding, effectively extracts data semantics, eliminates redundant information, and mitigates noise effects from the wireless channel. However, most studies overlook multi-user scenarios and resource availability, which limits real-world applicability. This paper addresses this gap by focusing on downlink communication from a base station to multiple users with varying computing capacities. Users employ variants of the Swin transformer for source decoding and a simple architecture for channel decoding. We propose a novel training procedure, FRENCA, which incorporates transfer learning and knowledge distillation to improve the performance of users with low computing capacity. Extensive simulations validate the proposed methods.
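To make the distillation idea concrete, the following minimal PyTorch sketch trains a low-capacity user's decoder against both the ground-truth images and the reconstructions produced by a high-capacity user's decoder. It is a generic illustration, not the paper's FRENCA procedure; the names teacher_dec and student_dec and the weight alpha are assumptions.

```python
import torch
import torch.nn.functional as F

def distill_student_decoder(teacher_dec, student_dec, optimizer, loader, alpha=0.5):
    """One epoch of distillation: the low-capacity (student) decoder is trained to
    match both the original images and the high-capacity (teacher) reconstructions.
    teacher_dec, student_dec, and alpha are illustrative names/values, not from the paper."""
    teacher_dec.eval()
    student_dec.train()
    for received, images in loader:          # received: noisy channel output, images: originals
        with torch.no_grad():
            teacher_rec = teacher_dec(received)
        student_rec = student_dec(received)
        loss = (1.0 - alpha) * F.mse_loss(student_rec, images) \
             + alpha * F.mse_loss(student_rec, teacher_rec)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In the paper's setting, the teacher and student would be Swin transformer variants matched to the users' computing capacities.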


... Transfer learning particularly enhances the effectiveness of transferring knowledge from one task to related tasks, proving invaluable in contexts where devices handle multiple tasks or face limited training data. For instance, Nguyen et al. [196] demonstrate optimizing multi-user SC through transfer learning and knowledge distillation, significantly boosting performance for users with varying computing capabilities by facilitating knowledge transfer from high-capacity to low-capacity user models. Similarly, Wu et al. [197] introduce a novel transfer learning strategy to guide the training process in object detection with limited labels by leveraging semantic information across tasks, enhancing few-shot detection performance and reducing the storage pressure on IoT devices. ...
... Another significant challenge is the complexity of resource allocation considering the computing capacity differences among users [196]. Users in SC systems have varying computing capabilities, necessitating efficient management. ...
Article
Full-text available
Resource management, security, and privacy stand as fundamental pillars for the reliable and secure operation of an efficient semantic communication (SC) system. By addressing these aspects, SC systems can pave the way for efficient resource utilization, improved network efficiency, enhanced communication performance, and protection of sensitive information. In this study, we begin by presenting the background of SC and reviewing several existing studies in this field. Subsequently, we provide a comprehensive and exhaustive survey of resource management, security, and privacy in SC. We identify and highlight existing challenges and open research problems related to resource management, security, and privacy in SC in order to spur further investigation in these areas.
... However, as the demand for efficient information services continues to surge, the pressure on wireless networks has increased, prompting numerous efforts to develop advanced algorithms aimed at alleviating the network burden [15]-[18]. Recently, researchers have increasingly leveraged artificial intelligence (AI) to address key challenges in wireless networks, including network performance optimization [19], [20], resource management [21]-[23], and the design of efficient semantic communication systems [24], [25]. These advancements underscore AI's pivotal role in enhancing the efficiency and intelligence of wireless networks. ...
Preprint
Full-text available
Reinforcement learning (RL)-based large language models (LLMs), such as ChatGPT, DeepSeek, and Grok-3, have gained significant attention for their exceptional capabilities in natural language processing and multimodal data understanding. Meanwhile, the rapid expansion of information services has driven the growing need for intelligent, efficient, and adaptable wireless networks. Wireless networks require the empowerment of RL-based LLMs, while these models also benefit from wireless networks to broaden their application scenarios. Specifically, RL-based LLMs can enhance wireless communication systems through intelligent resource allocation, adaptive network optimization, and real-time decision-making. Conversely, wireless networks provide a vital infrastructure for the efficient training, deployment, and distributed inference of RL-based LLMs, especially in decentralized and edge computing environments. This mutual empowerment highlights the need for a deeper exploration of the interplay between these two domains. We first review recent advancements in wireless communications, highlighting the associated challenges and potential solutions. We then discuss the progress of RL-based LLMs, focusing on key technologies for LLM training, challenges, and potential solutions. Subsequently, we explore the mutual empowerment between these two fields, highlighting key motivations, open challenges, and potential solutions. Finally, we provide insights into future directions, applications, and their societal impact to further explore this intersection, paving the way for next-generation intelligent communication systems. Overall, this survey provides a comprehensive overview of the relationship between RL-based LLMs and wireless networks, offering a vision where these domains empower each other to drive innovations.
... In addition, the work also considered the wireless channel and network conditions in the transmission process to provide a dynamic semantic communication system that can adapt the transmission length to the channel condition or network traffic. The authors in [145] proposed a different approach to the problem, considering collaboration among users during the training process. Moreover, the joint source-channel encoder is trained only once by a trustworthy decoder to prevent the catastrophic forgetting that deep neural networks exhibit when trained to serve a large number of users. ...
Preprint
Full-text available
Semantic Communication is becoming the next pillar in wireless communication technology due to its various capabilities. However, it still encounters various challenging obstacles that need to be solved before real-world deployment. The major challenge is the lack of standardization across different directions, leading to variations in interpretations and objectives. In this survey, we provide detailed explanations of three leading directions in semantic communications, namely Theory of Mind, Generative AI, and Deep Joint Source-Channel Coding. These directions have been widely studied, developed, and verified by institutes worldwide, and their effectiveness has increased along with advances in technology. We begin by introducing the concepts and background of these directions. First, we introduce Theory of Mind, where communication agents interact with each other, gaining understanding from observations and slowly forming a common language. Second, we present generative AI models, which can create new content and offer more freedom to interpret the data, going beyond simple compression of the semantic meaning of raw data before transmission; the received signal is then decoded by another generative AI model to execute the intended task. Third, we review deep learning models that jointly optimize the source and channel coding modules. We then present a comprehensive survey of existing works in each direction, thereby offering readers an overview of past achievements and potential avenues for further contribution. Moreover, for each direction, we identify and discuss the existing challenges that must be addressed before these approaches can be effectively deployed in real-world scenarios.
Article
Full-text available
The structural similarity image quality paradigm is based on the assumption that the human visual system is highly adapted for extracting structural information from the scene, and therefore a measure of structural similarity can provide a good approximation to perceived image quality. This paper proposes a multi-scale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions. We develop an image synthesis method to calibrate the parameters that define the relative importance of different scales. Experimental comparisons demonstrate the effectiveness of the proposed method.
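As a rough illustration of the multi-scale idea, the PyTorch sketch below applies a simplified SSIM (uniform window, full SSIM at every scale) to progressively downsampled images and combines the scores with the commonly used scale weights. It assumes inputs in [0, 1] with shape (B, C, H, W) that are large enough for five scales; the paper treats luminance, contrast, and structure terms separately, so this is only an approximation.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2, win=11):
    # simplified SSIM with a uniform local window (the paper uses a Gaussian window)
    pad = win // 2
    mu_x = F.avg_pool2d(x, win, 1, pad)
    mu_y = F.avg_pool2d(y, win, 1, pad)
    var_x = F.avg_pool2d(x * x, win, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, pad) - mu_y ** 2
    cov_xy = F.avg_pool2d(x * y, win, 1, pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def ms_ssim(x, y, weights=(0.0448, 0.2856, 0.3001, 0.2363, 0.1333)):
    # evaluate at five scales (downsampling by 2 between scales) and
    # combine the per-scale scores via a weighted geometric mean
    score = torch.ones((), device=x.device)
    for w in weights:
        score = score * ssim(x, y) ** w
        x, y = F.avg_pool2d(x, 2), F.avg_pool2d(y, 2)
    return score
```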
Article
Semantic communication has gained significant attention from researchers as a promising technique to replace conventional communication in the next generation of communication systems, primarily due to its ability to reduce communication costs. However, few studies have examined its effectiveness in multi-user scenarios, particularly when users differ in model architecture and computing capacity. To address this issue, we explore a semantic communication system that caters to multiple users with different model architectures by using a multi-purpose transmitter at the base station (BS). Specifically, the BS in the proposed framework employs semantic and channel encoders to encode the image for transmission, while the receiver utilizes its local channel and semantic decoders to reconstruct the original image. Our joint source-channel encoder at the BS can effectively extract and compress semantic features for specific users by considering the signal-to-noise ratio (SNR) and computing capacity of each user. Based on the network status, the joint source-channel encoder at the BS can adaptively adjust the length of the transmitted signal: a longer signal carries more information for high-quality image reconstruction, while a shorter signal helps avoid network congestion. In addition, we propose a hybrid loss function for training, which enhances the perceptual details of reconstructed images. Finally, we conduct a series of extensive evaluations and ablation studies to validate the effectiveness of the proposed system.
Article
Semantic communication in the 6G era has been deemed a promising communication paradigm to break through the bottleneck of traditional communications. However, its applications to the multi-user scenario, especially the broadcasting case, remain under-explored. To effectively exploit the benefits enabled by semantic communication, in this paper, we propose a one-to-many semantic communication system. Specifically, we propose a deep neural network (DNN) enabled semantic communication system called MR_DeepSC. By leveraging semantic features for different users, a semantic recognizer based on a pre-trained model, i.e., DistilBERT, is built to distinguish different users. Furthermore, transfer learning is adopted to speed up the training of new receiver networks. Simulation results demonstrate that the proposed MR_DeepSC achieves the best BLEU score among the benchmarks under different channel conditions, especially in the low signal-to-noise ratio (SNR) regime.
Article
While semantic communications have shown potential in the single-modal, single-user case, their application to the multi-user scenario remains limited. In this paper, we investigate deep learning (DL) based multi-user semantic communication systems for transmitting single-modal data and multimodal data, respectively. We adopt three intelligent tasks, namely image retrieval, machine translation, and visual question answering (VQA), as the transmission goals of the semantic communication systems. We propose a Transformer-based framework to unify the structure of transmitters for different tasks. For the single-modal multi-user system, we propose two Transformer-based models, named DeepSC-IR and DeepSC-MT, to perform image retrieval and machine translation, respectively. In this case, DeepSC-IR is trained to optimize the distance in embedding space between images, and DeepSC-MT is trained to minimize semantic errors by recovering the semantic meaning of sentences. For the multimodal multi-user system, we develop a Transformer-enabled model, named DeepSC-VQA, for the VQA task by extracting text-image information at the transmitters and fusing it at the receiver. In particular, a novel layer-wise Transformer is designed to help fuse multimodal data by adding connections between each of the encoder and decoder layers. Numerical results show that the proposed models are superior to traditional communications in terms of robustness to channels, computational complexity, transmission delay, and task-execution performance at various task-specific metrics.
Article
With the deployment of the fifth generation (5G) in many countries, people have started to think about what the next generation of wireless communications will be. Current communication technologies are already approaching the Shannon physical capacity limit with advanced encoding (decoding) and modulation techniques. On the other hand, artificial intelligence (AI) plays an increasingly important role in the evolution from traditional communication technologies to future ones. Semantic communication is one of the emerging communication paradigms, which works based on its innovative "semantic-meaning passing" concept. The core of semantic communication is to extract the "meanings" of sent information at a transmitter, and with the help of a matched knowledge base (KB) between a transmitter and a receiver, the semantic information can be "interpreted" successfully at the receiver. Therefore, semantic communication is essentially a communication scheme based largely on AI. In this article, an overview of the latest deep learning (DL) and end-to-end (E2E) communication based semantic communications is given, and open issues that need to be tackled are discussed explicitly.
Article
Semantic communications could improve transmission efficiency significantly by exploiting semantic information. In this paper, we make an effort to recover the transmitted speech signals in semantic communication systems, which minimizes the error at the semantic level rather than at the bit or symbol level. Particularly, we design a deep learning (DL)-enabled semantic communication system for speech signals, named DeepSC-S. In order to improve the recovery accuracy of speech signals, especially for the essential information, DeepSC-S is developed based on an attention mechanism by utilizing a squeeze-and-excitation (SE) network. The motivation behind the attention mechanism is to identify the essential speech information by assigning higher weights to it when training the neural network. Moreover, in order to facilitate the proposed DeepSC-S in dynamic channel environments, we find a general model to cope with various channel conditions without retraining. Furthermore, we investigate DeepSC-S in telephone systems as well as multimedia transmission systems to verify the model adaptation in practice. The simulation results demonstrate that our proposed DeepSC-S outperforms traditional communications in both cases in terms of speech signal metrics, such as the signal-to-distortion ratio and the perceptual evaluation of speech quality. Besides, DeepSC-S is more robust to channel variations, especially in the low signal-to-noise ratio (SNR) regime.
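A minimal sketch of the squeeze-and-excitation attention idea, applied here to a (batch, channels, time) speech feature map; the reduction ratio and layout are illustrative assumptions rather than the exact DeepSC-S design.

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: learn per-channel weights so the network can
    emphasize the most informative speech features."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                  # x: (batch, channels, time)
        weights = self.fc(x.mean(dim=-1))  # squeeze over time, then excite
        return x * weights.unsqueeze(-1)   # re-scale each feature channel
```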
Article
Recent research on joint source-channel coding (JSCC) for wireless communications has achieved great success owing to the employment of deep learning (DL). However, existing work on DL-based JSCC usually trains the designed network to operate under a specific signal-to-noise ratio (SNR) regime, without taking into account that the SNR level during the deployment stage may differ from that during the training stage. A number of networks are then required to cover a broad range of SNRs, which is computationally inefficient (in the training stage) and requires large storage. To overcome these drawbacks, this paper proposes a novel method called Attention DL-based JSCC (ADJSCC) that can successfully operate at different SNR levels during transmission. This design is inspired by the resource assignment strategy in traditional JSCC, which dynamically adjusts the compression ratio in source coding and the channel coding rate according to the channel SNR. This is achieved by resorting to attention mechanisms, because these are able to allocate computing resources to more critical tasks. Instead of applying the resource allocation strategy of traditional JSCC, ADJSCC uses channel-wise soft attention to scale features according to the SNR condition. We compare the ADJSCC method with the state-of-the-art DL-based JSCC method through extensive experiments to demonstrate its adaptability, robustness, and versatility. Compared with the existing methods, the proposed method requires less storage and is more robust in the presence of channel mismatch.
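The sketch below illustrates the channel-wise soft-attention idea: globally pooled features are concatenated with the current SNR and mapped to per-channel scaling factors. The layer sizes and naming are assumptions for illustration, not the exact ADJSCC module.

```python
import torch
import torch.nn as nn

class AttentionFeatureModule(nn.Module):
    """SNR-conditioned channel attention in the spirit of ADJSCC: feature maps are
    re-weighted per channel according to the current channel SNR."""
    def __init__(self, channels, hidden=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, channels), nn.Sigmoid())

    def forward(self, x, snr_db):                           # x: (batch, channels, H, W)
        b, c, _, _ = x.shape
        context = x.mean(dim=(2, 3))                        # global average pooling
        snr = torch.full((b, 1), float(snr_db), device=x.device)
        scale = self.mlp(torch.cat([context, snr], dim=1))  # per-channel weights in (0, 1)
        return x * scale.view(b, c, 1, 1)
```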
Article
We propose a joint source and channel coding (JSCC) technique for wireless image transmission that does not rely on explicit codes for either compression or error correction; instead, it directly maps the image pixel values to the complex-valued channel input symbols. We parameterize the encoder and decoder functions by two convolutional neural networks (CNNs), which are trained jointly and can be considered as an autoencoder with a non-trainable layer in the middle that represents the noisy communication channel. Our results show that the proposed deep JSCC scheme outperforms digital transmission concatenating JPEG or JPEG2000 compression with a capacity-achieving channel code at low signal-to-noise ratio (SNR) and channel bandwidth values in the presence of additive white Gaussian noise (AWGN). More strikingly, deep JSCC does not suffer from the “cliff effect”, and it provides graceful performance degradation as the channel SNR varies with respect to the SNR value assumed during training. In the case of a slow Rayleigh fading channel, deep JSCC learns noise-resilient coded representations and significantly outperforms separation-based digital communication at all SNR and channel bandwidth values.
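A minimal sketch of the non-trainable channel layer sitting between the CNN encoder and decoder: the encoder output is power-normalized and corrupted by additive white Gaussian noise at a chosen SNR. The dB convention and variable names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AWGNChannel(nn.Module):
    """Non-trainable layer modeling the noisy channel between encoder and decoder."""
    def __init__(self, snr_db=10.0):
        super().__init__()
        self.snr_db = snr_db

    def forward(self, z):
        # normalize the transmitted symbols to unit average power
        dims = tuple(range(1, z.dim()))
        power = z.pow(2).mean(dim=dims, keepdim=True)
        z = z / torch.sqrt(power + 1e-12)
        # add Gaussian noise whose variance matches the target SNR (in dB)
        noise_std = (10.0 ** (-self.snr_db / 10.0)) ** 0.5
        return z + noise_std * torch.randn_like(z)
```

Because the noise is simply added, gradients pass through this layer unchanged during end-to-end training.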
Article
We consider the problem of joint source and channel coding of structured data such as natural language over a noisy channel. The typical approach to this problem in both theory and practice involves performing source coding to first compress the text and then channel coding to add robustness for the transmission across the channel. This approach is optimal in terms of minimizing end-to-end distortion with arbitrarily large block lengths of both the source and channel codes when transmission is over discrete memoryless channels. However, the optimality of this approach is no longer ensured for documents of finite length and limitations on the length of the encoding. We will show in this scenario that we can achieve lower word error rates by developing a deep learning based encoder and decoder. While the approach of separate source and channel coding would minimize bit error rates, our approach preserves semantic information of sentences by first embedding sentences in a semantic space where sentences closer in meaning are located closer together, and then performing joint source and channel coding on these embeddings.
Conference Paper
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
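The core of the recipe can be summarized in a single loss in which the student matches the teacher's temperature-softened probabilities as well as the ground-truth labels; the temperature and mixing weight below are illustrative values, not the ones used in the paper.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # soft targets: the teacher's class probabilities softened at temperature T
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    # hard targets: ordinary cross-entropy with the ground-truth labels
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

Scaling the soft-target term by T² keeps its gradient magnitude comparable to that of the hard-target term, as noted in the paper.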
Y. L. Tun, C. M. Thwal, L. Q. Huy, M. N. Nguyen, and C. S. Hong, "LW-FedSSL: Resource-efficient layer-wise federated self-supervised learning," arXiv preprint arXiv:2401.11647, Jan. 2024.