Recent publications
In disaster-hit areas where the ground network infrastructure has been severely damaged, one challenging problem for multi-UAV-assisted disaster relief networks is how to improve the coverage probability of each UAV. Building on the solution to this problem, the second challenge is how to design a channel and power-beam allocation scheme that optimizes system throughput while meeting spectrum-energy efficiency constraints. In this paper, we first propose a new method for measuring the coverage quality of a single UAV, which considers both the ratio of effective coverage time to single-loop flight time and the ratio of ground terminals receiving effective coverage to the total number of ground terminals. Then, we develop a set of new algorithms that exploit the uneven distribution of ground terminals to improve the total coverage probability and reduce UAV deployment costs. Finally, we formulate the second problem as a Markov decision process (MDP) and develop a solution based on the deep deterministic policy gradient (DDPG). Simulation results demonstrate the validity and superiority of our proposed solutions compared with other benchmark strategies from different perspectives.
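As a rough illustration of the coverage-quality idea described above, the following Python sketch combines the two ratios (effective coverage time over loop flight time, and covered terminals over total terminals) into a single score. The product used to combine them, and the function and parameter names, are assumptions for illustration rather than the paper's exact metric.

```python
def coverage_quality(effective_cover_time, loop_flight_time,
                     covered_terminals, total_terminals):
    """Illustrative composite coverage-quality score for a single UAV.

    Combines (i) the fraction of one loop-flight period during which the UAV
    provides effective coverage and (ii) the fraction of ground terminals
    receiving effective coverage. The product used to combine the two ratios
    is an assumption for illustration; the paper defines its own metric.
    """
    if loop_flight_time <= 0 or total_terminals <= 0:
        return 0.0
    time_ratio = effective_cover_time / loop_flight_time
    terminal_ratio = covered_terminals / total_terminals
    return time_ratio * terminal_ratio


# Example: 40 s of effective coverage in a 60 s loop, 85 of 100 terminals covered.
print(coverage_quality(40.0, 60.0, 85, 100))  # ~0.567
```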
The Space-Air-Ground Integrated Network (SAGIN) plays a pivotal role as a comprehensive foundational communication infrastructure, presenting opportunities for highly efficient global data transmission. Nonetheless, given SAGIN's unique characteristics as a dynamically heterogeneous network, conventional network optimization methodologies struggle to satisfy the stringent requirements for network latency and stability inherent to data transmission in this environment. Therefore, this paper proposes differentiated federated reinforcement learning (DFRL) to solve the traffic offloading problem in SAGIN, i.e., using multiple agents to generate differentiated traffic offloading policies. Considering the differentiated characteristics of each region of SAGIN, DFRL models the traffic offloading policy optimization process as a Decentralized Partially Observable Markov Decision Process (DEC-POMDP). To solve this problem, the paper proposes a novel Differentiated Federated Soft Actor-Critic (DFSAC) algorithm. The DFSAC algorithm takes the network packet delay as the joint reward value and introduces a global trend model as the joint target action-value function of each agent to guide the update of each agent's policy. The simulation results demonstrate that the traffic offloading policy based on the DFSAC algorithm achieves better performance in terms of network throughput, packet loss rate, and packet delay compared with the traditional federated reinforcement learning approach and other baseline approaches.
The solid-state transformer (SST) is expected to play a critical role in modern power systems, serving as a key component for efficient and flexible energy transformation. The reliability of the SST is crucial. However, active switches are prone to failures, which can have severe consequences. For the matrix-type SST (MT-SST), the time-varying dc-link voltage poses challenges for accurate and rapid fault diagnosis. This article proposes a comprehensive fault diagnosis and fault-tolerant control method for the open-circuit (OC) fault of a single switch in the MT-SST. By comparing the estimated and measured values of the resonant capacitor voltages, the algorithm identifies the OC fault. Additionally, a reduction in the duty cycle of specific switches is introduced to assist in fault localization through monitoring of voltage changes. However, this method can only identify the OC fault on one side of the MT-SST when the direction of power flow remains constant. To ensure fault tolerance, the primary- and secondary-side full bridges are reconfigured into a quasi-half-bridge structure. The developed methods have the advantages of simplicity and reliability. Finally, a 1.5 kW prototype is built, and the validity and feasibility of the proposed methods are verified.
This paper presents URSAL, an HDD-only block storage system that achieves ultra-efficiency, reliability, scalability, and availability at low cost. Compared to existing block stores such as URSA, Ceph, and Sheepdog, URSAL has the following distinctions. First, since parallelism is harmful to random I/O performance on HDDs, URSAL storage servers perform parallel I/O on HDDs conservatively to avoid I/O contention and reduce tail latency. Second, URSAL adopts a proxy-based storage architecture to separate the high-level and low-level I/O logic, where for each virtual machine (VM) there is one URSAL proxy process running at the client VM side to control (at a high level) the procedure of server-side low-level I/O. Third, to alleviate the low random write performance of HDDs, URSAL selectively performs direct block writes on raw HDDs or indirect log appends to HDD journals (which are then asynchronously replayed to raw HDDs), depending on the characteristics of the workloads. Fourth, software failures are nontrivial in large-scale block storage systems, whose availability is vital to client VMs; thus, for high availability, we design an efficient fault-tolerance mechanism that isolates the connection management module of the URSAL proxy. We have implemented URSAL and deployed it at scale. Extensive evaluation results demonstrate that URSAL achieves much higher performance than state-of-the-art solutions in underloaded scenarios.
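To illustrate the workload-dependent write-path choice described above, here is a minimal Python sketch in which large or sequential writes go directly to the raw HDD while small random writes become journal appends. The heuristic, thresholds, and names are hypothetical; URSAL's actual policy may differ.

```python
# Illustrative sketch of a workload-aware write-path choice, loosely inspired by
# URSAL's description. The threshold values, function names, and the heuristic
# itself (large or sequential writes go straight to the raw HDD, small random
# writes are appended to a journal and replayed later) are assumptions for
# illustration, not URSAL's actual policy.

SEQUENTIAL_GAP = 0           # offsets are contiguous
LARGE_WRITE_BYTES = 1 << 20  # 1 MiB, hypothetical cutoff


def choose_write_path(offset, length, last_end_offset):
    """Return 'direct' for a raw-HDD block write or 'journal' for a log append."""
    is_sequential = (offset - last_end_offset) == SEQUENTIAL_GAP
    is_large = length >= LARGE_WRITE_BYTES
    if is_sequential or is_large:
        return "direct"      # HDDs handle sequential/large writes efficiently
    return "journal"         # small random writes are cheaper as log appends


# Example: a 4 KiB write at a non-contiguous offset goes to the journal.
print(choose_write_path(offset=8 << 20, length=4096, last_end_offset=0))  # 'journal'
```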
This article investigates nonsmooth resource allocation problems (RAPs) of autonomous agents, in which the agents have high-order dynamics. Moreover, each agent has a nondifferentiable and private cost function, and the decisions of all agents are restricted by nonlinear network resource constraints and local nonlinear constraints. To the best of our knowledge, nonsmooth RAPs with high-order agents have not yet been studied, much less with nonlinear constraints. Besides, owing to the high-order dynamics, the nonsmooth cost functions, and/or the nonlinear constraints, existing distributed algorithms for RAPs are infeasible for our problem. To control high-order agents to execute nonsmooth resource allocation tasks autonomously, we propose a fully distributed algorithm by means of primal-dual methods and state feedback. In the fully distributed approach, all agents update their control inputs only on the basis of their own and their neighbors' information. Further, we prove the global convergence of the algorithm via nonsmooth analysis and the set-valued LaSalle invariance principle. Lastly, we apply the proposed algorithm to the economic dispatch problems (EDPs) of smart grids. By means of our algorithm, the turbine generators can autonomously perform economic dispatch tasks.
Adaptive Bitrate (ABR) algorithms have become increasingly important for delivering high-quality video content over fluctuating networks. Considering the complexity of video scenes, video chunks can be separated into two categories: those with intricate scenes and those with simple scenes. In practice, it has been observed that improving the quality of intricate chunks yields more substantial improvements in Quality of Experience (QoE) than focusing solely on simple chunks. However, current ABR schemes either treat all chunks equally or rely on fixed linear reward functions, which limits their ability to meet real-world requirements. To tackle these limitations, this paper introduces a novel ABR approach called CAST (Complex-scene Aware bitrate algorithm via Self-play reinforcemenT learning), which considers scene complexity and formulates the bitrate adaptation task as an explicit objective. Leveraging the power of parallel computing with multiple agents, CAST trains a neural network to achieve superior video playback quality for intricate scenes while minimizing playback freezing time. Moreover, we introduce a new variant of our approach, CAST-DU, to address the critical issue of efficiently managing users' limited cellular data budgets while ensuring a satisfactory viewing experience. Furthermore, we present CAST-Live, tailored for live streaming scenarios with constrained playback buffers and considerations for energy costs. Extensive trace-driven evaluations and subjective tests demonstrate that CAST, CAST-DU, and CAST-Live outperform existing off-the-shelf schemes, delivering a superior video streaming experience over fluctuating networks while efficiently utilizing data resources. In addition, CAST-Live remains effective even under tight buffer size constraints while incurring minimal energy costs.
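To make the complexity-aware idea concrete, the following Python sketch shows one possible per-chunk QoE reward that weights quality gains on intricate-scene chunks more heavily than on simple-scene chunks while penalizing rebuffering and bitrate switches. The linear form and all coefficients are hypothetical illustrations, not CAST's learned objective.

```python
def complexity_aware_reward(bitrate, rebuffer_time, bitrate_change, is_intricate,
                            w_intricate=1.5, w_simple=1.0,
                            rebuf_penalty=4.3, smooth_penalty=1.0):
    """Illustrative per-chunk QoE reward that values quality gains on
    intricate-scene chunks more than on simple-scene chunks.

    The weights and the linear form are assumptions for illustration; CAST
    itself learns its behaviour via self-play RL rather than a fixed formula.
    """
    quality_weight = w_intricate if is_intricate else w_simple
    return (quality_weight * bitrate
            - rebuf_penalty * rebuffer_time
            - smooth_penalty * abs(bitrate_change))


# Example: a 4 Mbps intricate chunk with 0.2 s rebuffering and a 1 Mbps switch.
print(complexity_aware_reward(4.0, 0.2, 1.0, is_intricate=True))  # 4.14
```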
Tunable diode laser absorption spectroscopy (TDLAS) is widely used for gas concentration measurements due to its merits of rapid, noncontact, and high-precision detection. Precise control of laser emission can guarantee the accuracy of the absorption spectrum at a specific wavelength. However, in an unstable environment on a real-world pharmaceutical filling production line, it becomes difficult to ensure laser performance due to variations in temperature, pressure, and so on. In contrast to traditional methods for optimizing the internal structure of a laser emission cavity, we propose an external approach to achieve accurate wavelength control of a laser emitter. Specifically, the novel concept of the harmonic double valley inclination (HDVI) is introduced. This HDVI, mined from second harmonic signals, is an identifier that can be used to infer the working status of the laser and plan further instructions and can finally be fed back to the laser controller to achieve closed-loop control logic. Furthermore, all functions are implemented using an FPGA chip and verified on a filling production line. The results show that the stability of the output wavelength of a laser emitter can be maintained using this self-diagnosis method and that accurate oxygen concentration detection can be further ensured.
For single-phase grid-tied voltage-source converters (VSCs), frequency coupling suppression control (FCSC) emerges as a promising technique, streamlining controller design and stability analysis. However, its performance significantly degrades in the presence of grid frequency variations. This article presents an asymmetric synchronous reference frame (ASRF)-based FCSC for single-phase grid-tied converters. Leveraging the inverse generalized Park transformation-based symmetrical phase-locked loop (IGPT-SPLL) and the ASRF structure, both grid frequency adaptivity and frequency coupling suppression are realized. Under this control, the studied system is accurately modeled as a simple single-input single-output (SISO) admittance, facilitating design-oriented analysis. The proposed control method stands out for its grid frequency adaptability and the achievement of zero steady-state current error, all accomplished using proportional-integral (PI) controllers only. Simulations and experimental results validate the effectiveness of the proposed method.
Native artificial intelligence (AI) has played a pivotal role in shaping the evolution of 6G networks. Native AI must meet stringent real-time requirements, and therefore deploying lightweight AI models is necessary. However, since wireless networks generate a multitude of data fields and only a fraction of them has a significant impact on the AI models, it is essential to accurately identify the small amount of critical data that significantly impacts communication performance. In this paper, we propose the pervasive multi-level (PML) native AI architecture, which incorporates knowledge graphs (KGs) into mobile network operations to establish a wireless data KG. Leveraging the wireless data KG, we analyze the relationships among various data fields and provide the on-demand generation of minimal and effective datasets, referred to as feature datasets. Consequently, the architecture not only enhances AI training, inference, and validation processes but also significantly reduces resource wastage and overhead for communication networks. The proposed solution includes a spatio-temporal heterogeneous graph attention neural network model (STREAM) and a feature dataset generation algorithm. Experimental results validate the exceptional capability of STREAM in handling spatio-temporal data and demonstrate that the proposed architecture reduces the data scale and computational costs of AI training by almost an order of magnitude.
Silent Speech Interfaces (SSIs) have been developed to convert silent articulatory gestures into speech, facilitating silent speech in public spaces and aiding individuals with aphasia. Prior SSI approaches, relying either on wearable devices or on cameras, may impose extended contact requirements or privacy leakage risks. Recent advancements in acoustic sensing offer new opportunities for gesture sensing. However, they typically focus on content classification rather than on reconstructing audible speech, leading to the loss of crucial speech characteristics such as speech rate, intonation, and emotion. In this paper, we propose UltraSR, a novel sensing system that supports accurate audible speech reconstruction by analyzing the disturbance of tiny articulatory gestures on the reflected ultrasound signal. The design of UltraSR introduces a multi-scale feature extraction scheme for aggregating information from multiple views and a new model that provides the unique mapping relationship between ultrasound and speech signals, so that audible speech can be successfully reconstructed from silent speech. However, establishing the mapping relationship depends on plenty of training data. Instead of the time-consuming collection of massive amounts of training data, we construct an inverse task that constitutes a dual form of the original task to generate virtual gestures from widely available audio (e.g., phone calls), facilitating model training. Furthermore, we introduce a fine-tuning mechanism using unlabeled data for user adaptation. We implement UltraSR using a portable smartphone and evaluate it in various environments. The evaluation results show that UltraSR can reconstruct speech with a Character Error Rate (CER) as low as 5.22% and decrease the CER from 80.13% to 6.31% on new users with only 1 hour of ultrasound signals provided, outperforming state-of-the-art acoustic-based approaches while preserving rich speech information.
Synthetic traffic generation can produce sufficient data for model training of various traffic analysis tasks for IoT networks at low cost and with few ethical concerns. However, with the increasing functionalities of the latest smart devices, existing approaches can neither customize the traffic generation of various device functions nor generate traffic that preserves the sequentiality among packets as real traffic does. To address these limitations, this paper proposes IoTGemini, a novel framework for high-quality IoT traffic generation, which consists of a Device Modeling Module and a Traffic Generation Module. In the Device Modeling Module, we propose a method to obtain the profiles of the device functions and network behaviors, enabling IoTGemini to customize the traffic generation as if using a real IoT device. In the Traffic Generation Module, we design a Packet Sequence Generative Adversarial Network (PS-GAN), which can generate synthetic traffic with high fidelity in both per-packet fields and sequential relationships. We set up a real-world IoT testbed to evaluate IoTGemini. The experimental results show that IoTGemini achieves great effectiveness in device modeling, high fidelity of synthetic traffic generation, and remarkable usability for downstream traffic analysis tasks on different traffic datasets.
Despite remarkable advancements in graph contrastive learning techniques, the identification of interdependent relationships when maximizing cross-view mutual information remains a challenging issue, primarily due to the complexity of graph topology. In this study, we propose to formulate cross-view interdependence from the innovative perspective of information flow. Accordingly, IDEAL, a simple yet effective framework, is proposed for interdependence-adaptive graph contrastive learning. Compared with existing methods, IDEAL concurrently addresses same-node and distinct-node interdependence, circumvents the reliance on additional distribution mining techniques, and is augmentation-aware. Besides, the objective of IDEAL takes advantage of both contrastive and generative learning objectives and is thus capable of learning a uniform embedding distribution while retaining essential semantic information. The effectiveness of IDEAL is validated by extensive empirical evidence. It consistently outperforms state-of-the-art self-supervised methods by considerable margins across seven benchmark datasets with diverse scales and properties and, at the same time, showcases promising training efficiency. The source code is available at: https://github.com/sunisfighting/IDEAL.
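As a rough sketch of how a contrastive term and a generative (reconstruction) term might be combined in a single objective, the following NumPy example pairs an InfoNCE-style loss over two view embeddings with a mean-squared-error reconstruction term. The specific loss forms and the weighting factor are assumptions; IDEAL's interdependence-adaptive objective is defined in the paper.

```python
import numpy as np


def info_nce(z1, z2, tau=0.5):
    """InfoNCE-style contrastive loss between two L2-normalised view embeddings."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                         # pairwise similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives lie on the diagonal


def reconstruction_loss(x, x_hat):
    """Simple mean-squared-error 'generative' term."""
    return np.mean((x - x_hat) ** 2)


def combined_objective(z1, z2, x, x_hat, lam=0.5):
    """Illustrative combination of contrastive and generative objectives.
    The weight lam and the exact loss forms are assumptions made for this sketch."""
    return info_nce(z1, z2) + lam * reconstruction_loss(x, x_hat)


# Example with random embeddings and features for 8 nodes.
rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
x, x_hat = rng.normal(size=(8, 32)), rng.normal(size=(8, 32))
print(combined_objective(z1, z2, x, x_hat))
```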
Optical communication technology has been widely used in national defense and civilian fields due to its high capacity, low power consumption, and high stability. Applying fiber-optic communication technology to bus control systems to replace electrical buses shows great application prospects. In this paper, a novel industrial optical bus (IOB) control system is proposed based on fiber-optic communication and control theory. First, the optical bus control theory and topology model are proposed, and the theoretical basis and functions of each module are analyzed in detail. Then, the optical bus hardware control system is established based on the proposed design scheme, and its synchronization and bandwidth are experimentally analyzed. Finally, the IOB is applied to a high-speed and high-precision optical coupling system for optical devices to verify its reliability in practical applications. Extensive simulation and experimental results show that the maximum synchronization error is only 9.96 ns and the optical signal jitter is < 1 ns for the optical bus system with multiple terminals. In the 5 km relay-free single-mode fiber transmission experiment, the transmission period between the optical head end (OHE) and the optical terminal (OT) is only 89.43 μs, which demonstrates the real-time performance of the IOB system. The reliability and practicality of the IOB are finally demonstrated in the experiment on the fiber-coupled platform based on the IOB system.
This paper proposes a single-phase single-stage non-isolated buck-boost inverter for photovoltaic (PV) systems. It is obtained by combining and reconfiguring two dc-dc circuits, the Zeta and the canonical switching cell (CSC). In the proposed inverter, the Zeta and CSC circuits operate alternately during the positive and negative half cycles, respectively. The common-mode leakage current is eliminated because the input and output ports share a common terminal in both modes. Besides, only one switch operates at high frequency in each mode, resulting in high efficiency. This paper first introduces the derivation process of the proposed inverter and its operating principle. Then, the controllers and inverter parameters in both modes are designed. Finally, its performance is verified with a 500 W laboratory prototype.
In recent years, since edge computing has improved the performance of transportation systems, research on edge computing-enabled transportation systems has received widespread attention. However, most previous studies overlooked that task requests in transportation systems are unevenly distributed in time and space, which easily causes the overloading of edge servers, resulting in high response latency. To this end, we present a novel task offloading scheme based on Graph Neural Network (GNN) and Deep Reinforcement Learning (DRL) in Edge computing-enabled Transportation systems (TransEdge). Specifically, we first propose an adaptive node placement algorithm to assign IoT sensors to appropriate edge servers, thereby minimizing transmission latency. Then, an improved DRL scheme based on GNN is designed to capture the spatial features between sensors, aiming to improve the accuracy of task offloading decisions. Finally, we introduce a task forwarding strategy based on the greedy algorithm to achieve collaborative task offloading between different edge servers and overcome the system instability caused by a sudden surge in task requests. We conduct extensive experiments on two real-world traffic datasets. The results show that TransEdge reduces the response latency by at least 3.7% compared to four baselines while achieving a success rate of 99%.
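To illustrate the greedy task-forwarding idea mentioned above, the following Python sketch forwards a task from an overloaded edge server to the neighboring server with the smallest estimated completion latency that still has spare capacity. The latency model and data-structure fields are hypothetical and not TransEdge's exact formulation.

```python
def greedy_forward(task_load, local_server, neighbors):
    """Illustrative greedy forwarding rule: if the local edge server is
    overloaded, hand the task to the neighbouring server with the smallest
    estimated completion latency that still has spare capacity.

    The latency model (queue / capacity plus a fixed link delay) and the field
    names are assumptions for illustration, not TransEdge's exact rule.
    Each server is a dict: {'queue': current load, 'capacity': max load,
    'link_delay': transfer delay from the local server}.
    """
    if local_server['queue'] + task_load <= local_server['capacity']:
        return 'local'

    best, best_latency = None, float('inf')
    for name, srv in neighbors.items():
        if srv['queue'] + task_load > srv['capacity']:
            continue                                   # skip overloaded neighbours
        latency = srv['link_delay'] + (srv['queue'] + task_load) / srv['capacity']
        if latency < best_latency:
            best, best_latency = name, latency
    return best or 'local'                             # fall back to local queueing


# Example: the local server is full, so the task is forwarded to edge_B.
local = {'queue': 95, 'capacity': 100, 'link_delay': 0}
nbrs = {'edge_A': {'queue': 90, 'capacity': 100, 'link_delay': 2},
        'edge_B': {'queue': 30, 'capacity': 100, 'link_delay': 2}}
print(greedy_forward(task_load=10, local_server=local, neighbors=nbrs))  # edge_B
```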
A nonlinear dynamic model of an angular contact ball bearing under combined external loads is established based on nonlinear elastic Hertz contact theory and raceway control theory. The established dynamic model is solved using a variable-step-size Newton-Raphson iteration method, and the model is validated by comparing its results with corresponding results from the existing literature. The solution of the presented model shows high accuracy compared with existing experimental results. In addition, the computational efficiency of the proposed model is improved significantly by introducing an iteration step-size adjustment factor. Based on the proposed model, the dynamic contact and stiffness characteristics of the angular contact ball bearing are studied systematically by investigating the effects of combined external load working conditions on the contact parameters, including contact angle and contact force, and the stiffness parameters, including diagonal stiffness and off-diagonal stiffness. This research provides a theoretical basis and technical guidance for the design and manufacture of angular contact ball bearings under combined external loads.
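For readers unfamiliar with variable-step Newton-Raphson iteration, the following Python sketch shows a damped variant in which a step-size adjustment factor is halved whenever a full step would increase the residual and relaxed again on success. The adaptation rule is an illustrative assumption rather than the paper's specific scheme.

```python
def damped_newton(f, df, x0, tol=1e-10, max_iter=100):
    """Illustrative variable-step (damped) Newton-Raphson iteration.

    A step-size adjustment factor alpha scales the Newton step; it is halved
    whenever the full step would increase the residual and relaxed again on
    success. The adaptation rule is an assumption for illustration; the
    bearing model in the paper uses its own adjustment strategy.
    """
    x, alpha = x0, 1.0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        step = fx / df(x)
        # shrink the step until the residual actually decreases
        while abs(f(x - alpha * step)) >= abs(fx) and alpha > 1e-6:
            alpha *= 0.5
        x -= alpha * step
        alpha = min(1.0, alpha * 2.0)   # relax the damping again
    return x


# Example: solve x**3 - 2*x - 5 = 0 (root near 2.0946).
print(damped_newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, x0=2.0))
```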
Adaptive BitRate (ABR) algorithms have become increasingly prevalent in modern streaming platforms, offering users significant improvements in the Quality of Experience (QoE). With streaming providers like YouTube and Netflix shifting to high-fidelity audio formats such as stereophonic sound and Dolby Atmos, ensuring proper audio and video adaptation has become a critical aspect of modern streaming platforms. Additionally, Variable Bitrate (VBR) encoding has gained great popularity for encoding audio and video content, given its higher quality-to-bits ratio. However, the considerable variability in network bandwidth, in combination with VBR features such as significantly fluctuating audio/video chunk sizes and diverse content complexity, makes it difficult for existing ABR schemes to make optimal bitrate selections, as they either overlook audio adaptation or are oblivious to VBR features. In this paper, we introduce a new ABR approach for VBR-based Audio-aware video StrEaming named VASE, which harnesses deep reinforcement learning (DRL) and exploits parallel computing with multiple agents to swiftly and adeptly manage fluctuations in video/audio chunk sizes, network bandwidth, and varying content complexity, all while operating without any assumptions. Besides, two variants are proposed to mitigate the download energy cost and handle audio and video content at a finer granularity. Extensive trace-driven, testbed, and subjective evaluations show that our scheme surpasses existing advanced adaptation schemes in terms of overall QoE, effectively demonstrating its superiority.
Significant pulsating power exists on the dc side of three-phase four-wire inverters under unbalanced or nonlinear loads, which degrades the power quality of the dc-side voltage. To address this issue, this paper presents an active power decoupling control scheme for a three-level buck four-leg current source inverter (3L-Buck-4L-CSI) topology. By fully utilizing the flying capacitor converter, this scheme can effectively buffer the pulsating power into the flying capacitor without modifying any hardware circuit and suppress the pulsating input current. In addition, a virtual resistor scheme is adopted to improve the stability of the converter. The experimental results validate the effectiveness of the proposed method.
As a promising approach, Clustered Federated Learning (CFL) enables personalized model aggregation for heterogeneous clients. However, facing dynamic and open edge networks, previous CFL work rarely considers the impact of dynamic client data on clustering validity or sensitively identifies low-quality parameters in highly heterogeneous client models. Moreover, the device heterogeneity in each cluster leads to unbalanced model transmission delays, thus reducing system efficiency. To tackle the above issues, this paper proposes a Robust and Efficient Clustered Federated System (REC-Fed). First, a Hierarchical Attention based Robust Aggregation (HARA) method is designed to realize layer-wise model customization for clients while keeping the clustering valid under dynamic client data distributions. In addition, the fine-grained parameter detection in HARA provides a natural advantage in detecting low-quality parameters, which improves the robustness of CFL systems. Second, to realize efficient synchronous model transmission, an Adaptive Model Transmission Optimization (AMTO) scheme is proposed to jointly optimize model compression and bandwidth allocation for heterogeneous clients. Finally, we theoretically analyze the convergence of REC-Fed and conduct experiments on several personalization tasks, which demonstrate that REC-Fed achieves significant improvements in flexibility, robustness, and efficiency.
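As a loose illustration of attention-based, layer-wise aggregation, the following NumPy sketch weights each client's per-layer parameters by a softmax over their cosine similarity to the cluster mean, so that outlying layer updates contribute less. This similarity-based attention is an assumption made for illustration and is not HARA's actual hierarchical attention design.

```python
import numpy as np


def hara_like_aggregate(client_layers, temperature=1.0):
    """Illustrative layer-wise, attention-weighted aggregation within a cluster.

    For each layer, every client's parameters are weighted by a softmax over
    their cosine similarity to the cluster mean, so outlying (low-quality)
    layer updates receive small weights. This similarity-based attention is an
    assumption for illustration; the paper's HARA uses its own design.

    client_layers: list over clients, each a list of per-layer numpy arrays.
    """
    num_layers = len(client_layers[0])
    aggregated = []
    for l in range(num_layers):
        stack = np.stack([c[l].ravel() for c in client_layers])   # clients x params
        mean = stack.mean(axis=0)
        sims = stack @ mean / (np.linalg.norm(stack, axis=1)
                               * np.linalg.norm(mean) + 1e-12)
        weights = np.exp(sims / temperature)
        weights /= weights.sum()
        agg = (weights[:, None] * stack).sum(axis=0)
        aggregated.append(agg.reshape(client_layers[0][l].shape))
    return aggregated


# Example: three clients, two layers; the third client's first layer is an outlier.
rng = np.random.default_rng(1)
base = [rng.normal(size=(4, 4)), rng.normal(size=(8,))]
clients = [[base[0] + 0.01 * rng.normal(size=(4, 4)), base[1]] for _ in range(2)]
clients.append([base[0] + 5.0 * rng.normal(size=(4, 4)), base[1]])
print(hara_like_aggregate(clients)[0].shape)  # (4, 4)
```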