SLPerf: A Research Library and Benchmark Framework for Split Learning

Published: 01 Jan 2025 · Last Modified: 14 Nov 2025 · ICDEW 2025 · CC BY-SA 4.0
Abstract: Data privacy concerns have rendered centralized training of deep learning models infeasible, as training data is scattered across silos. This motivates cross-silo collaborative learning frameworks such as Federated Learning (FL). Split Learning (SL) is a variant of FL that partitions a deep neural network into several parts trained collaboratively, designed specifically for scenarios in which client devices are resource-constrained. Although well-established FL libraries and benchmark frameworks exist, a comprehensive research library for SL is still lacking. Because SL paradigms differ in label sharing, model aggregation, and cut layer choice, the absence of such a library makes them difficult to compare. We therefore propose SLPerf, an open-source research library and benchmark framework for SL. We implement several mainstream SL paradigms with the SLPerf interface and evaluate them using the SLPerf benchmark. An empirical comparison of SL paradigms provides insight into their practical performance. Our code is publicly available at https://github.com/Rainysponge/SLPerf.
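To make the split described in the abstract concrete, the following is a minimal NumPy sketch of the core idea: a network is cut at a chosen layer, the client computes activations up to the cut, and only that "smashed data" (not the raw input) is sent to the server, which finishes the forward pass. All names and shapes here are illustrative assumptions, not SLPerf's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Client-side model part: the layers before the (hypothetical) cut layer.
W_client = rng.standard_normal((4, 8)) * 0.1
# Server-side model part: the layers after the cut layer.
W_server = rng.standard_normal((8, 2)) * 0.1

def client_forward(x):
    # Activations at the cut layer ("smashed data") -- the only
    # data the client shares with the server.
    return np.maximum(x @ W_client, 0.0)  # linear layer + ReLU

def server_forward(smashed):
    # The server completes the forward pass on the smashed data.
    return smashed @ W_server

x = rng.standard_normal((3, 4))   # private client data, never transmitted
smashed = client_forward(x)       # sent to the server
logits = server_forward(smashed)  # server-side output, shape (3, 2)
print(logits.shape)
```

In training, the server would backpropagate to the cut layer and return the gradient of the smashed data, letting the client update its part without ever exposing raw inputs; paradigms then differ in whether labels are also kept on the client and how client-side parts are aggregated.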