HSFL: An Efficient Split Federated Learning Framework via Hierarchical Organization

Published: 01 Jan 2022, Last Modified: 10 May 2023, CNSM 2022
Abstract: Federated learning (FL) has emerged as a popular paradigm for distributed machine learning among vast numbers of clients. Unfortunately, resource-constrained clients often cannot participate in FL because their limited memory or bandwidth cannot accommodate the demands of model training. Split federated learning (SFL) is a novel FL framework in which clients offload intermediate results of model training to a cloud server for client-server collaborative training, making resource-constrained clients eligible for FL as well. However, existing SFL frameworks mostly require frequent communication with the cloud server to exchange intermediate results and model parameters, which incurs significant communication overhead and prolongs training time. This is further exacerbated by imbalanced data distributions across clients. To tackle this issue, we propose HSFL, a hierarchical split federated learning framework that efficiently trains the SFL model through a hierarchical organization of participants. Under the HSFL framework, we formulate a Cloud Aggregation Time Minimization (CATM) problem to minimize the global training time and design a lightweight client assignment algorithm based on dynamic programming to solve it. Moreover, we develop a self-adaptation approach to cope with the dynamic computational resources of clients. Finally, we implement and evaluate HSFL on various real-world training tasks, demonstrating its effectiveness and its superiority over baselines in terms of efficiency and accuracy.
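The abstract does not include code, but the client-server exchange it describes follows the standard SFL pattern: a client runs the front portion of the model up to a cut layer, ships the intermediate ("smashed") activations to the server, and receives the activations' gradients back to finish its local backward pass. The PyTorch-style sketch below illustrates one such training step; the model split, layer sizes, and function names are illustrative assumptions, not the authors' implementation, and the network transfer is simulated with a tensor detach.

```python
import torch
import torch.nn as nn

# Assumed split of a small model at a hypothetical cut layer:
# the client holds the front layers, the server holds the rest.
client_model = nn.Sequential(nn.Linear(784, 256), nn.ReLU())  # client side
server_model = nn.Sequential(nn.Linear(256, 10))              # server side

client_opt = torch.optim.SGD(client_model.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def split_training_step(x, y):
    """One SFL step: only the smashed activations and their gradients
    cross the client-server boundary, never raw data or full models."""
    # Client-side forward pass up to the cut layer.
    smashed = client_model(x)
    # Detaching mimics sending the activations over the network.
    smashed_remote = smashed.detach().requires_grad_(True)

    # Server-side forward and backward over the remaining layers.
    out = server_model(smashed_remote)
    loss = loss_fn(out, y)
    server_opt.zero_grad()
    loss.backward()
    server_opt.step()

    # The gradient of the smashed data is returned to the client,
    # which completes the backward pass through its front layers.
    client_opt.zero_grad()
    smashed.backward(smashed_remote.grad)
    client_opt.step()
    return loss.item()

# Example: one step on a random mini-batch.
loss = split_training_step(torch.randn(32, 784), torch.randint(0, 10, (32,)))
```

Each such step costs one activation upload and one gradient download per batch; HSFL's hierarchical organization of participants aims precisely at reducing how often these exchanges must reach the cloud server.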