Jiaqi Shi’s research while affiliated with China Agricultural University and other places

Publications (2)


Illustration of the sliding-window computation network (SWCN) architecture. The network follows an encoder–decoder structure designed for efficient processing of multimodal spatial and temporal data. The encoder extracts multi-scale features (F1, F2, F3, F4) through sliding-window attention layers, where queries and key–value pairs are denoted (Xi,q) and (Xi,kv). The decoder progressively upsamples the hierarchical features (D1, D2, D3, D4) and concatenates them, written as Cat[p1(F1), p2(F2), p3(F3), p4(F4)], before a multi-layer perceptron (MLP) produces the final mask output (a minimal sketch of this decoder head follows the figure list).
Sliding-window attention mechanism.
Time-series and space fusion module.
Violin plot of the accuracy distribution for the baseline models and the proposed method on the time-series test data.
Violin plot of the accuracy distribution for the baseline models and the proposed method on the spatial test data.
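The SWCN decoder described above projects each hierarchical feature, upsamples it to a common resolution, concatenates the results, and applies an MLP to produce the mask. The snippet below is a minimal PyTorch sketch of such a head under those assumptions; the class name MaskDecoder, the channel widths, and the bilinear upsampling are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskDecoder(nn.Module):
    """Hypothetical decoder head: Cat[p1(F1), ..., p4(F4)] -> MLP -> mask."""
    def __init__(self, in_dims=(64, 128, 256, 512), embed_dim=128, num_classes=1):
        super().__init__()
        # p1..p4: 1x1 projections mapping each scale to a shared embedding width.
        self.proj = nn.ModuleList([nn.Conv2d(c, embed_dim, 1) for c in in_dims])
        self.mlp = nn.Sequential(
            nn.Conv2d(4 * embed_dim, embed_dim, 1), nn.ReLU(inplace=True),
            nn.Conv2d(embed_dim, num_classes, 1),
        )

    def forward(self, feats):
        # feats = [F1, F2, F3, F4], ordered from highest to lowest resolution.
        target = feats[0].shape[-2:]
        ups = [F.interpolate(p(f), size=target, mode="bilinear", align_corners=False)
               for p, f in zip(self.proj, feats)]
        return self.mlp(torch.cat(ups, dim=1))

# Example with assumed feature shapes: four scales derived from a 64x64 input.
feats = [torch.randn(1, c, 64 // 2**i, 64 // 2**i) for i, c in enumerate((64, 128, 256, 512))]
print(MaskDecoder()(feats).shape)  # torch.Size([1, 1, 64, 64])
```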
A Secure and Efficient Framework for Multimodal Prediction Tasks in Cloud Computing with Sliding-Window Attention Mechanisms
  • Article
  • Full-text available

March 2025 · 9 Reads

Weiyuan Cui · Qianye Lin · Jiaqi Shi · [...] · Chunli Lv

An efficient and secure computation framework based on a sliding-window attention mechanism and a sliding loss function was proposed to address the challenges of temporal and spatial feature modeling in multimodal data processing. The framework aims to overcome the limitations of traditional methods in privacy protection, feature capture, and computational efficiency. Experimental results showed that, in time-series processing tasks, the proposed method achieved precision, recall, accuracy, and F1-score values of 0.95, 0.91, 0.93, and 0.93, respectively, significantly outperforming federated learning, secure multi-party computation, homomorphic encryption, and TEE-based approaches. In spatial data processing tasks, these metrics reached 0.93, 0.90, 0.92, and 0.91, again surpassing all comparative methods. Compared with existing secure computation frameworks, the proposed approach substantially improved computational efficiency with minimal accuracy loss while preserving data privacy. These findings provide an efficient and reliable solution for privacy protection and data security in cloud computing environments, with theoretical value and practical potential in real-world scenarios such as financial forecasting and image analysis.
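The abstract names the core mechanism but gives no implementation details. Below is a minimal PyTorch sketch of windowed self-attention in the spirit of a sliding-window scheme; the class name SlidingWindowAttention, the head count, and the window size are assumptions, and for brevity the windows are non-overlapping blocks rather than a true sliding (overlapping or shifted) pattern. The sliding loss function is not sketched here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlidingWindowAttention(nn.Module):
    """Windowed self-attention: each query attends only to keys/values in its
    local window, avoiding the quadratic cost of full attention."""
    def __init__(self, dim: int, num_heads: int = 4, window: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.h, self.w, self.d = num_heads, window, dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim); pad seq_len up to a multiple of the window.
        b, n, _ = x.shape
        x = F.pad(x, (0, 0, 0, (-n) % self.w))
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def split(t):  # -> (batch, heads, num_windows, window, head_dim)
            return t.view(b, -1, self.w, self.h, self.d).permute(0, 3, 1, 2, 4)

        q, k, v = map(split, (q, k, v))
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d ** 0.5, dim=-1)
        out = (attn @ v).permute(0, 2, 3, 1, 4).reshape(b, -1, self.h * self.d)
        return self.proj(out)[:, :n]  # drop the padding

# Example: a batch of 2 sequences, 20 tokens of width 64.
print(SlidingWindowAttention(dim=64)(torch.randn(2, 20, 64)).shape)  # [2, 20, 64]
```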


Image dataset samples.
Flowchart of the proposed method.
Architecture of the adversarial training network framework.
Architecture of the symmetric projection space extractor.
A Symmetric Projection Space and Adversarial Training Framework for Privacy-Preserving Machine Learning with Improved Computational Efficiency

March 2025 · 3 Reads

This paper proposes a data security training framework based on a symmetric projection space and adversarial training, aimed at addressing the privacy leakage and computational-efficiency problems that current privacy protection technologies encounter when processing sensitive data. By designing a new projection loss function and combining autoencoders with adversarial training, the proposed method effectively balances privacy protection and model utility. Experimental results show that, for financial time-series tasks, the model trained with the projection loss achieves a precision of 0.95, recall of 0.91, and accuracy of 0.93, significantly outperforming the traditional cross-entropy loss. In image tasks, the projection loss yields a precision of 0.93, recall of 0.90, accuracy of 0.91, and mAP@50 and mAP@75 of 0.91 and 0.90, respectively, demonstrating a clear advantage on complex tasks. Furthermore, experiments on different hardware platforms (Raspberry Pi, Jetson, and an NVIDIA 3080 GPU) show that the method performs well on low-compute devices and offers significant gains on high-performance GPUs, particularly in computational efficiency, indicating good scalability. Overall, the results validate the method's advantages in both data privacy protection and computational efficiency.
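The abstract combines an autoencoder-based projection space, a projection loss, and adversarial training. The sketch below shows one plausible single training step built from those ingredients; the Autoencoder and discriminator architectures, the prototype-based projection_loss, and the 0.1 adversarial weight are hypothetical choices for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Autoencoder(nn.Module):
    """Maps raw inputs into a low-dimensional projection space and back."""
    def __init__(self, in_dim=32, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def projection_loss(z, labels, prototypes):
    # Pull each embedding toward its class prototype in the projection space
    # (an assumed form of the paper's projection loss).
    return F.mse_loss(z, prototypes[labels])

# One illustrative training step (dimensions and hyperparameters are hypothetical).
ae = Autoencoder()
disc = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32))  # privacy probe
prototypes = nn.Parameter(torch.randn(10, 16))
opt = torch.optim.Adam(list(ae.parameters()) + [prototypes], lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
z, x_rec = ae(x)

# Adversary step: the probe tries to reconstruct the raw input from z.
opt_d.zero_grad()
F.mse_loss(disc(z.detach()), x).backward()
opt_d.step()

# Model step: projection loss plus reconstruction, while penalizing the probe's
# success so the projection space leaks less about the raw input.
opt.zero_grad()
loss = projection_loss(z, y, prototypes) + F.mse_loss(x_rec, x) \
       - 0.1 * F.mse_loss(disc(z), x)
loss.backward()
opt.step()
```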