FedSRC: Federated Learning with Self-Regulating Clients

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Supplementary Material: zip
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Federated learning, efficiency, computation and communication savings, client-side control
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: Our goal is to minimize the computation and communication costs of federated training for participating clients while improving performance.
Abstract: Federated Learning (FL) has emerged as a prominent privacy-preserving decentralized paradigm for collaborative machine learning across many devices. However, FL suffers from performance degradation in the global model due to heterogeneity in clients' locally generated data. Some prior studies address this issue by limiting or even discarding certain clients' contributions to the global model, resulting in unnecessary computation and communication for the discarded clients. Alternatively, selectively choosing clients to participate in FL may avoid such resource waste. However, such active client selection requires client-level profiling that violates privacy. In this paper, we present a novel FL approach, called FedSRC: Federated Learning with Self-Regulating Clients, that can save clients' resources while preserving their anonymity. In FedSRC, clients can determine whether their local training is favorable to the global model, and hence whether they should participate in an FL round, using a lightweight checkpoint based on their test loss on the global model. Through comprehensive evaluations using four datasets, we show that FedSRC can improve global model performance while reducing communication costs by up to 30\% and computation costs by up to 55\%.
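To make the self-regulation idea concrete, below is a minimal sketch (not the paper's implementation) of what such a client-side checkpoint could look like in PyTorch: the client evaluates the received global model on its local held-out data and gates its own participation on that loss. The function name `should_participate`, the fixed `loss_threshold`, and the direction of the comparison are illustrative assumptions; the actual FedSRC criterion is defined in the paper.

```python
import torch
import torch.nn.functional as F

def should_participate(global_model, local_test_loader, loss_threshold, device="cpu"):
    """Hypothetical client-side checkpoint: evaluate the received global model
    on the client's local test data and decide whether to join this FL round.

    The decision rule (how the threshold is set, and whether a low or high loss
    favors participation) is an assumption made for illustration only.
    """
    global_model.eval()
    total_loss, total_samples = 0.0, 0
    with torch.no_grad():
        for inputs, targets in local_test_loader:
            inputs, targets = inputs.to(device), targets.to(device)
            logits = global_model(inputs)
            # Sum losses so the average is weighted correctly across batches.
            total_loss += F.cross_entropy(logits, targets, reduction="sum").item()
            total_samples += targets.size(0)
    avg_loss = total_loss / max(total_samples, 1)
    # Skip local training and upload entirely when the checkpoint says the
    # client's contribution is unlikely to help the global model.
    return avg_loss <= loss_threshold, avg_loss
```

Because the check only requires a forward pass over local test data, it is far cheaper than a full round of local training and upload, which is where the claimed communication and computation savings would come from.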
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4567