Noise-Aware Algorithm for Heterogeneous Differentially Private Federated Learning

19 Sept 2023 (modified: 27 Jan 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Federated Learning, Differential Privacy, Differentially Private Federated Learning
TL;DR: We propose a noise-aware aggregation strategy for heterogeneous DPFL systems with untrusted servers to improve utility and convergence speed.
Abstract: Federated Learning (FL) is a useful paradigm for learning models from data distributed among clients. High utility and rigorous data privacy guarantees are among the main goals of an FL system. Previous works have tried to achieve the latter by ensuring differential privacy (DP) during federated training. In real systems, there is often heterogeneity in the privacy requirements of different clients, and existing DPFL works either assume uniform requirements or propose methods that rely on a trusted server. Furthermore, real FL systems also exhibit heterogeneity in the memory/computing power of clients' devices, which has not been addressed by existing DPFL algorithms. Given these two sources of heterogeneity, straightforward solutions, such as meeting the privacy requirements of the most privacy-sensitive client or removing the clients with low memory budgets, lead to lower utility and fairness problems due to high DP noise and/or data loss. In this work, we propose Robust-HDP to achieve high utility in the presence of an untrusted server while addressing both privacy and memory heterogeneity across clients. Our main idea is to efficiently estimate the noise in each client's model update and assign aggregation weights accordingly. The noise-aware aggregation of Robust-HDP, which does not require sharing clients' privacy preferences with the server, improves utility, privacy, and convergence speed while meeting the heterogeneous privacy/memory requirements of all clients. Extensive experimental results on multiple benchmark datasets and our convergence analysis confirm the effectiveness of Robust-HDP in improving system utility and convergence speed.
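The noise-aware aggregation idea described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' actual Robust-HDP estimator: here the empirical variance of each client's update serves as a hypothetical proxy for its DP noise level, and clients are weighted by the inverse of that estimate so that noisier (more privacy-sensitive) updates contribute less.

```python
import numpy as np

def noise_aware_aggregate(client_updates):
    """Aggregate client model updates with weights inversely
    proportional to each update's estimated noise level.

    client_updates: list of 1-D np.ndarray parameter-update vectors.
    """
    # Hypothetical noise estimator: use the empirical variance of each
    # update as a proxy for the DP noise it contains. (Robust-HDP's
    # actual server-side estimator is more sophisticated.)
    noise_est = np.array([np.var(u) for u in client_updates])

    # Inverse-noise weights, normalized to sum to 1. The small epsilon
    # guards against division by zero for a noiseless update.
    inv = 1.0 / (noise_est + 1e-12)
    weights = inv / inv.sum()

    # Weighted average of the updates.
    return sum(w * u for w, u in zip(weights, client_updates))
```

Because the server computes weights from the updates themselves, no client needs to reveal its privacy budget, which matches the untrusted-server setting described above.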
Supplementary Material: zip
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1866