Xiulang Jin’s research while affiliated with Huawei Technologies and other places

Publications (3)


SIREN+: Robust Federated Learning with Proactive Alarming and Differential Privacy
  • Article

September 2024 · 11 Reads · 2 Citations · IEEE Transactions on Dependable and Secure Computing

Hao Wang · [...] · Haibing Guan

Federated learning (FL), an emerging machine learning paradigm that trains a global model across distributed clients without violating data privacy, has recently attracted significant attention. However, FL's distributed nature and iterative training greatly expand the attack surface for Byzantine and inference attacks. Existing FL defense methods can hardly protect FL from both Byzantine and inference attacks because of their fundamental conflict: the noise injected to defend against inference attacks perturbs model weights and training data, obscuring the model analysis that Byzantine-robust methods rely on to detect attacks. Moreover, the practicality of existing Byzantine-robust methods is limited precisely because they depend so heavily on model analysis. In this paper, we present SIREN+, a new robust FL system that defends against a wide spectrum of Byzantine and inference attacks by jointly utilizing a proactive alarming mechanism and local differential privacy (LDP). The proactive alarming mechanism orchestrates clients and the FL server to collaboratively detect attacks using distributed alarms, and is therefore unaffected by the noise injected for LDP. Compared with state-of-the-art defense methods, SIREN+ can protect FL against Byzantine and inference attacks mounted by a higher proportion of malicious clients while keeping the global model performing normally. Extensive experiments with diverse settings and attacks on real-world datasets show that SIREN+ outperforms existing defense methods under both Byzantine and inference attacks.
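The tension the abstract describes comes from LDP noise being injected on the client side before an update ever leaves the device, which is exactly what corrupts the signals that server-side model analysis depends on. Below is a minimal sketch of what such client-side sanitization typically looks like, assuming a generic clip-then-noise Gaussian mechanism; the function name, clipping bound, and noise scale are illustrative assumptions, not SIREN+'s actual implementation.

    # Sketch of client-side LDP: clip the model update's L2 norm, then
    # add Gaussian noise before upload. Hypothetical helper, not SIREN+'s
    # published code; clip_norm and sigma are assumed values.
    import numpy as np

    def ldp_sanitize(update: np.ndarray, clip_norm: float = 1.0,
                     sigma: float = 0.5) -> np.ndarray:
        norm = np.linalg.norm(update)
        clipped = update * min(1.0, clip_norm / (norm + 1e-12))  # bound sensitivity
        noise = np.random.normal(0.0, sigma * clip_norm, size=update.shape)
        return clipped + noise

Because the server only ever sees the noised update, any defense that inspects individual weights must tolerate this perturbation, which is why the paper moves detection to a client-driven alarm channel instead.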



Siren: Byzantine-robust Federated Learning via Proactive Alarming
  • Conference Paper

November 2021 · 81 Reads · 49 Citations

With the popularity of machine learning in many applications, data privacy has become a severe issue when machine learning is applied in the real world. Federated learning (FL), an emerging paradigm in machine learning, trains a centralized model while keeping training data distributed across a large number of clients in order to avoid leaking private data, and has attracted great attention recently. However, the distributed training scheme in FL is susceptible to many kinds of attacks. Existing defense systems mainly rely on model weight analysis to identify malicious clients, which has many limitations; for example, some defense systems must know the exact number of malicious clients beforehand, which can be easily bypassed by well-designed attack methods and is impractical for real-world scenarios. This paper presents Siren, a Byzantine-robust federated learning system built on a proactive alarming mechanism. Compared with current Byzantine-robust aggregation rules, Siren can defend against attacks from a higher proportion of malicious clients in the system while keeping the global model performing normally. Extensive experiments against different attack methods are conducted under diverse settings on both independent and identically distributed (IID) and non-IID data. The experimental results illustrate the effectiveness of Siren compared with several state-of-the-art defense methods.
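To make the alarm-driven control flow concrete, here is a toy rendering of the idea under a simplified protocol: each client checks the tentative global model against its own recent local accuracy and alarms on a sharp drop, and the server falls back to inspecting individual updates only when alarms fire. All names and thresholds are assumptions for illustration, not Siren's published algorithm.

    # Toy sketch of proactive alarming (assumed protocol, not Siren's
    # exact algorithm): clients alarm when the new global model
    # underperforms locally; alarms trigger per-update inspection.
    import numpy as np

    ALARM_DROP = 0.05  # assumed tolerated accuracy drop before alarming

    def client_alarm(local_eval_fn, new_global_weights, prev_local_acc):
        """Client side: alarm if the broadcast model hurts local accuracy."""
        return local_eval_fn(new_global_weights) < prev_local_acc - ALARM_DROP

    def server_aggregate(updates, alarms, passes_check):
        """Server side: plain FedAvg when no alarms fire; otherwise keep
        only updates that pass an individual check (a stand-in for the
        server's alarm-triggered inspection)."""
        if not any(alarms):
            return np.mean(updates, axis=0)
        kept = [u for u in updates if passes_check(u)]
        return np.mean(kept, axis=0) if kept else None

The design point this illustrates is that detection signals come from clients evaluating on their own data, so the server needs no prior knowledge of how many clients are malicious.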

Citations (3)


... Statistical parameter aggregation [26], [27], [28], [29] modifies the standard aggregation process of FL, granting different weights to different clients based on the statistical features of their model updates. Client-dominant detection [30], [31], [32], [33], [34], a rising category, leverages clients to maintain the integrity of FL training, while other approaches [35], [36], [37], [38], [39] use more advanced features or pipelines to perform the detection. ...

Reference: Poisoning with A Pill: Circumventing Detection in Federated Learning

SIREN+: Robust Federated Learning with Proactive Alarming and Differential Privacy
  • Citing Article
  • September 2024

IEEE Transactions on Dependable and Secure Computing
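For context, the "statistical parameter aggregation" family the excerpt contrasts with client-dominant detection replaces the plain mean with a statistics-based rule. A textbook example is coordinate-wise median aggregation, sketched below as a generic illustration, not the method of any specific cited work.

    # Coordinate-wise median: a classic statistical robust-aggregation
    # rule that bounds the pull any single malicious client can exert
    # on each individual parameter.
    import numpy as np

    def coordinate_median(client_updates):
        return np.median(np.stack(client_updates), axis=0)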

... Some of them are general-purpose, while others focus on specific classes of algorithms, scenarios, topologies, or domains of use. For instance, TensorFlow Federated [5] and PySyft [6] focus on deep learning; Felicitas [7] and FairFed [8] are frameworks developed with a focus on the cross-device scenario, while Substra [9] implements the decentralized topology. Among general-purpose frameworks we can mention Flower [10], FedML [11], and FATE [12], while FeatureCloud [13] and Sfkit [14] are examples of frameworks developed specifically for the biomedical sector that come with user-friendly web interfaces, allowing for practical creation and management of FL consortia. ...

Felicitas: Federated Learning in Distributed Cross Device Collaborative Frameworks
  • Citing Conference Paper
  • August 2022

... Furthermore, gathering users' personal information is difficult because of privacy concerns. The federated learning framework enables distributed devices to collaborate on training cross-domain recommendation models using raw data stored locally on those devices [19][20][21]. Federated cross-domain recommendation systems do not require transferring data from distributed devices to a central server; training is performed on the devices, and only the updates to the cross-domain recommendation model are exchanged over the network. Federated learning can therefore achieve more secure model training. ...

Siren: Byzantine-robust Federated Learning via Proactive Alarming
  • Citing Conference Paper
  • November 2021