Split Learning: A Resource Efficient Model and Data Parallel Approach for Distributed Deep Learning

Published: 01 Jan 2022 · Last Modified: 14 May 2024 · Federated Learning 2022 · CC BY-SA 4.0
Abstract: Resource constraints, workload overheads, lack of trust, and competition hinder the sharing of raw data across multiple institutions. This leads to a shortage of data for training state-of-the-art deep learning models. Split Learning is a model- and data-parallel approach to distributed machine learning that offers a highly resource-efficient solution to these problems. Split Learning partitions conventional deep learning architectures so that some of the layers in the network are private to the client and the rest are centrally shared at the server. This allows distributed machine learning models to be trained without any sharing of raw data, while reducing the computation and communication required of any single client. Split learning comes in several variants depending on the specific problem at hand. In this chapter we share theoretical, empirical, and practical aspects of performing split learning and describe some of its variants that can be chosen depending on the application of interest.
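As a concrete illustration of the cut-layer mechanics the abstract describes, below is a minimal PyTorch sketch of one training step in the basic (label-sharing) variant of split learning. The layer sizes, the cut point, and the dummy batch are illustrative assumptions, not taken from the chapter; in a real deployment the client and server would run on separate machines and exchange the cut-layer activations and gradients over the network.

```python
# Minimal sketch of one split-learning step, assuming a small feed-forward
# model cut after the first layer. All sizes and data are illustrative.
import torch
import torch.nn as nn

# Client holds the first few layers privately; raw data never leaves it.
client_model = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
# Server holds the remaining, centrally shared layers.
server_model = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 10))

client_opt = torch.optim.SGD(client_model.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 784)          # private client data (dummy batch)
y = torch.randint(0, 10, (32,))   # labels, shared with the server here

# --- client side: forward pass up to the cut layer ---
smashed = client_model(x)                     # cut-layer activations
sent = smashed.detach().requires_grad_(True)  # "transmitted" to the server

# --- server side: finish the forward pass, backprop to the cut layer ---
loss = loss_fn(server_model(sent), y)
server_opt.zero_grad()
loss.backward()                               # gradient lands in sent.grad
server_opt.step()

# --- client side: resume backprop with the gradient returned by the server ---
client_opt.zero_grad()
smashed.backward(sent.grad)
client_opt.step()
```

The `detach()` at the cut point is what enforces the split: only the activations (and, on the way back, their gradient) cross the boundary, so neither party needs the other's layers or the raw inputs.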