On Using Secure Aggregation in Differentially Private Federated Learning with Multiple Local Steps

Published: 19 Mar 2025, Last Modified: 19 Mar 2025. Accepted by TMLR. License: CC BY 4.0
Abstract: Federated learning is a distributed learning setting whose main aim is to train machine learning models without sharing raw data, exchanging only what is required for learning. To guarantee training data privacy and high-utility models, differential privacy and secure aggregation techniques are often combined with federated learning. However, with fine-grained protection granularities, e.g., the common sample-level protection, existing techniques generally require the parties to communicate for each local optimization step if they want to fully benefit from secure aggregation in terms of the resulting formal privacy guarantees. In this paper, we show how a simple new analysis allows the parties to perform multiple local optimization steps while still benefiting from secure aggregation. We show that our analysis enables higher-utility models with guaranteed privacy protection under a limited number of communication rounds.
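The setting the abstract describes can be sketched in code. The following is a minimal illustrative sketch, not the paper's actual algorithm: each client runs several local SGD steps, clips its model update, and adds its share of Gaussian noise before secure aggregation reveals only the noisy sum. All function names, the toy quadratic loss, and the noise-splitting convention (per-client std sigma*c/sqrt(n), so the summed noise has std sigma*c) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(theta, data, steps=5, lr=0.1):
    """Run several local SGD steps on a toy quadratic loss
    0.5*||w - data||^2 (illustrative stand-in for client training)."""
    w = theta.copy()
    for _ in range(steps):
        grad = w - data
        w -= lr * grad
    return w - theta  # model delta after multiple local steps

def clip(delta, c=1.0):
    """Clip the update to L2 norm at most c (bounds per-client sensitivity)."""
    norm = np.linalg.norm(delta)
    return delta * min(1.0, c / max(norm, 1e-12))

def dp_secagg_round(theta, client_data, sigma=1.0, c=1.0, steps=5):
    """One communication round: each client trains locally for several
    steps, clips, and adds its share of the Gaussian noise; secure
    aggregation would reveal only the (noisy) sum of the updates."""
    n = len(client_data)
    noisy_deltas = []
    for data in client_data:
        delta = clip(local_update(theta, data, steps=steps), c)
        # Each client adds noise with std sigma*c/sqrt(n); the summed
        # noise then has std sigma*c, matching a central-DP Gaussian
        # mechanism on the aggregate.
        delta += rng.normal(0.0, sigma * c / np.sqrt(n), size=delta.shape)
        noisy_deltas.append(delta)
    agg = np.sum(noisy_deltas, axis=0)  # only this sum is revealed
    return theta + agg / n

theta = np.zeros(3)
clients = [rng.normal(size=3) for _ in range(8)]
for _ in range(10):
    theta = dp_secagg_round(theta, clients)
```

The point of the paper's analysis is that the multiple local steps inside `local_update` do not force extra communication: privacy is accounted for at the level of the clipped, noised round update that secure aggregation sums.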
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Main changes introduced to address reviewer comments:
* clarify contributions (Section 1)
* add missing related work (Section 2)
* add some omitted standard algorithms (FL with sample- and user-level DP, Appendix A)
* some clarifications to Background (Section 3)
* add an explicit result on how noise scales with the number of clients (Thm 4.8 in Section 4)
* some minor fixes

Additional changes to address the minor revision decision:
* removed references to joint noise scaling (also in the paper title) to avoid potential confusion
* added a definition for infinite divisibility (current Appendix A.1); removed the previous Lemma on infinitely divisible distributions (thanks to the AE for asking about this!); instead changed Def. 4.2 to include the notion of proper sum-domination and rewrote the related discussion (no changes to the main proofs, as they do not assume infinite divisibility)
* clarified notation in the Appendix (privacy unit notation in Alg. 2)
* some minor fixes for clarity, notational consistency, typos, etc.
* added a link to external code on GitHub (to be uploaded after some cleaning)
Code: https://github.com/mixheikk/DPFL-with-SecAgg-paper
Assigned Action Editor: ~Aurélien_Bellet1
Submission Number: 3680
