Efficient Data Selection for Split Neural Networks

Published: 22 Sept 2025, Last Modified: 22 Sept 2025 · WiML @ NeurIPS 2025 · CC BY 4.0
Keywords: Split Learning, Distributed Machine Learning
Abstract: Split Neural Networks (SplitNNs) offer great potential for the distributed training of deep learning models across resource-constrained devices. However, their heavy computation and communication requirements restrict their practicality in scenarios with many participating clients and large local datasets. While standard subset-selection techniques, e.g., active learning and core-set selection, could in principle address these constraints, such approaches are impractical for SplitNNs, since computing predictions on local data requires communication with the server-side model. In this paper, we propose a new framework that makes existing subset-selection techniques viable for SplitNNs. The framework attaches auxiliary networks to the client-side models to generate pseudo-predictions on the local dataset, so that informativeness measures for subset selection can be computed locally. Extensive experimental results show the effectiveness of the proposed framework, which substantially reduces computation and communication requirements while preserving generalization performance.
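To make the mechanism concrete, below is a minimal PyTorch sketch of the idea the abstract describes: an auxiliary head attached to the client-side model produces pseudo-predictions locally, and an informativeness score computed from them selects a subset of samples to forward through the split. All names here (ClientModel, select_informative_subset, the layer sizes, and the entropy criterion) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch, not the paper's code: client-side split model with an
# auxiliary head for local pseudo-predictions and entropy-based subset selection.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClientModel(nn.Module):
    def __init__(self, in_dim=784, hidden_dim=128, num_classes=10):
        super().__init__()
        # Client-side portion of the split network (up to the cut layer).
        self.client_layers = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU()
        )
        # Lightweight auxiliary head: maps cut-layer activations to
        # pseudo-predictions without contacting the server.
        self.aux_head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        smashed = self.client_layers(x)          # activations sent to server
        pseudo_logits = self.aux_head(smashed)   # local pseudo-predictions
        return smashed, pseudo_logits

@torch.no_grad()
def select_informative_subset(model, data, budget):
    """Score samples by the predictive entropy of the auxiliary head
    (one common informativeness measure) and keep the `budget` most
    uncertain ones."""
    model.eval()
    _, logits = model(data)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    top = entropy.topk(budget).indices
    return data[top], top

# Usage: only the selected subset's smashed activations would be forwarded
# to the server-side model, cutting both computation and communication.
model = ClientModel()
local_data = torch.randn(256, 784)
subset, idx = select_informative_subset(model, local_data, budget=32)
print(subset.shape)  # torch.Size([32, 784])
```

Other informativeness measures (e.g., margin sampling or core-set distances) could replace the entropy score without changing the overall structure, since the auxiliary head supplies the pseudo-predictions they all require.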
Submission Number: 392