January 2025
Federated learning (FL) has recently gained tremendous attention in edge computing and the Internet of Things, due to its capability of enabling model training at the network edge on end devices (i.e., clients). However, these end devices are usually resource-constrained and unable to train large-scale models. To accelerate the training of large-scale models on such devices, we incorporate Split Learning (SL) into Federated Learning (FL) and propose a novel FL framework, termed PairingFL. Specifically, we split a full model into a bottom model and a top model, and arrange participating clients into pairs, each of which collaboratively trains the two partial models as one client does in typical FL. Driven by the advantages of SL and FL, PairingFL is able to reduce the computation burden on clients and protect model privacy. However, given the system and statistical heterogeneity of edge networks, it is challenging to pair clients effectively: the strategies for client partitioning and matching must be carefully designed to enable efficient model training. To this end, we first theoretically analyze the convergence property of PairingFL and obtain a convergence upper bound. Guided by this analysis, we then design an efficient greedy algorithm that makes the joint decision of client partitioning and matching, so as to balance the trade-off between convergence rate and model accuracy. The performance of PairingFL is evaluated through extensive simulation experiments. The experimental results demonstrate that PairingFL can speed up the training process compared to the baselines when reaching the same convergence accuracy.
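To make the pairing mechanism concrete, the following is a minimal PyTorch sketch of the split training step that one client pair would perform: client A holds the bottom model, client B holds the top model, and a forward/backward pass crosses the cut layer between them. The network architecture, cut-layer index, and optimizer settings here are illustrative assumptions, not the paper's actual configuration, and the activation exchange that would occur over the network is simulated in-process.

```python
import torch
import torch.nn as nn

# Hypothetical full model, split at an assumed cut layer into bottom and top parts.
full_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 256), nn.ReLU(),   # bottom model (held by client A)
    nn.Linear(256, 128), nn.ReLU(),   # top model (held by client B)
    nn.Linear(128, 10),
)
cut = 3  # assumed cut-layer index; the paper's partitioning strategy would choose this
bottom, top = full_model[:cut], full_model[cut:]

opt_bottom = torch.optim.SGD(bottom.parameters(), lr=0.01)
opt_top = torch.optim.SGD(top.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def paired_step(x, y):
    """One local training step of a client pair acting as a single FL client."""
    opt_bottom.zero_grad()
    opt_top.zero_grad()
    # Client A: forward through the bottom model, send activations to B.
    smashed = bottom(x)
    sent = smashed.detach().requires_grad_(True)  # crosses the network in practice
    # Client B: forward through the top model, compute loss, backpropagate.
    loss = loss_fn(top(sent), y)
    loss.backward()
    opt_top.step()
    # Client B returns the activation gradient; client A finishes backpropagation.
    smashed.backward(sent.grad)
    opt_bottom.step()
    return loss.item()

# Example: one step on a random mini-batch.
print(paired_step(torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))))
```

In a full round, each pair would perform several such local steps before the server aggregates the bottom and top models across pairs, as in typical FL.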