Buffered Asynchronous Federated Learning with Local Differential Privacy

21 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Asynchronous Federated Learning, Differential Privacy
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Federated Learning (FL) allows multiple parties to collaboratively train a machine learning (ML) model without having to disclose their training data. Clients train their own models locally and share only model updates with an aggregation server. The first FL deployments have been in synchronous settings, with all clients performing training and sharing model updates simultaneously. More recently, {\em Asynchronous FL} (Async-FL) has emerged as a new approach that allows clients to train at their own pace and send/receive updates whenever they are ready. While FL is inherently less privacy-invasive than alternative centralized ML approaches, (aggregate) model updates can still leak sensitive information about clients' data. Therefore, FL algorithms need to satisfy Differential Privacy (DP) to provably limit leakage. Alas, previous work on Async-FL has only considered Central DP, which requires trust in the server and thus may not always be viable. In this paper, we present the first technique that satisfies {\em Local DP} (LDP) in the context of the state-of-the-art aggregation algorithm for Async-FL, namely, FedBuff. We experimentally demonstrate on three benchmark FL datasets that our LDP technique performs on par with, and in some cases better than, FedBuff with Central DP. Finally, we study how the {\em staleness} of the model updates sent by asynchronous FL clients can be used to improve utility while preserving privacy under different attack setups.
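To make the setting described in the abstract concrete, below is a minimal sketch (not the authors' implementation) of how local DP can be combined with FedBuff-style buffered asynchronous aggregation: each client clips and perturbs its own update before sending it, and the server applies the buffered, staleness-weighted updates once the buffer is full. All names and parameters (CLIP_NORM, NOISE_MULTIPLIER, BUFFER_SIZE, the staleness weighting) are illustrative assumptions, not values or choices taken from the paper.

```python
# Illustrative sketch of LDP client updates + buffered asynchronous aggregation.
# Parameters below are assumed for demonstration, not taken from the paper.
import numpy as np

CLIP_NORM = 1.0          # assumed per-update L2 clipping bound
NOISE_MULTIPLIER = 1.1   # assumed Gaussian noise multiplier for local DP
BUFFER_SIZE = 8          # assumed FedBuff buffer size K


def ldp_client_update(local_delta: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Clip the local model delta and add Gaussian noise on the client side,
    so the server never observes the raw update (local DP)."""
    norm = np.linalg.norm(local_delta)
    clipped = local_delta * min(1.0, CLIP_NORM / (norm + 1e-12))
    noise = rng.normal(0.0, NOISE_MULTIPLIER * CLIP_NORM, size=clipped.shape)
    return clipped + noise


class BufferedAsyncServer:
    """FedBuff-style server: accumulate noisy client updates as they arrive
    asynchronously, then apply them to the global model once K are buffered."""

    def __init__(self, model: np.ndarray, server_lr: float = 1.0):
        self.model = model
        self.server_lr = server_lr
        self.buffer = []

    def receive(self, noisy_delta: np.ndarray, staleness: int) -> None:
        # Down-weight stale updates; 1/sqrt(1 + staleness) is one common choice.
        weight = 1.0 / np.sqrt(1.0 + staleness)
        self.buffer.append(weight * noisy_delta)
        if len(self.buffer) >= BUFFER_SIZE:
            self._flush()

    def _flush(self) -> None:
        # Average the buffered (already-noisy) updates and take a server step.
        avg = np.mean(self.buffer, axis=0)
        self.model = self.model + self.server_lr * avg
        self.buffer.clear()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    server = BufferedAsyncServer(model=np.zeros(10))
    for step in range(BUFFER_SIZE):
        fake_delta = rng.normal(size=10)          # stand-in for a local training delta
        server.receive(ldp_client_update(fake_delta, rng), staleness=step % 3)
    print(server.model)
```

Because noise is added on the client before transmission, the privacy guarantee holds even against an honest-but-curious server, which is the key difference from the Central DP setup considered in prior Async-FL work.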
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3887