Self Check-in: Tight Privacy Amplification for Practical Distributed Learning

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: differential privacy, federated learning, privacy amplification
TL;DR: A more practical/realistic protocol for differentially private federated learning, with emphasis on the privacy analysis
Abstract: Recent studies of distributed computation with formal privacy guarantees, such as differentially private (DP) federated learning, leverage random sampling of clients in each round (privacy amplification by subsampling) to achieve satisfactory levels of privacy. Achieving this, however, requires precise and uniform subsampling of clients as well as a highly trusted orchestrating server, strong assumptions that may not hold in practice. In this paper, we explore a more practical protocol, self check-in, to resolve the aforementioned issues. The protocol relies on each client making an independent, random decision to participate in the computation, removing the need for server-initiated subsampling and enabling robust modelling of client dropouts. Our protocol applies immediately to intermediate trust models, i.e., the shuffle and distributed DP models, for realizing distributed learning in practice. To this end, we present a novel analysis based on Rényi differential privacy (RDP) that improves on the privacy guarantees obtained from approximate DP's strong composition across various parameter regimes of self check-in. We also provide a numerical approach to track the privacy of a generic shuffling mechanism, including distributed learning with the Gaussian mechanism, which may be of independent interest since, to our knowledge, it is the first evaluation of a generic mechanism within the local/shuffle model in the distributed setting. Empirical studies demonstrate the learning efficacy of the protocol as well.
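
To make the protocol concrete, here is a minimal simulation sketch of one self check-in round, assuming a simple mean-aggregation setup. All names and values (NUM_CLIENTS, CHECKIN_PROB, NOISE_STD, CLIP_NORM) are illustrative assumptions, not taken from the paper; the client-side Bernoulli coin flip follows the abstract's description of independent, random participation, and gaussian_rdp is only the standard RDP bound for the unamplified Gaussian mechanism, not the tighter self check-in analysis the paper develops.

```python
import numpy as np

# Illustrative parameters (assumptions for this sketch, not from the paper).
NUM_CLIENTS = 1000    # total client population
CHECKIN_PROB = 0.05   # q: each client's independent check-in probability
NOISE_STD = 4.0       # sigma: noise multiplier relative to the clip norm
CLIP_NORM = 1.0       # L2 bound enforced on each client's update

rng = np.random.default_rng(0)

def client_update(local_gradient: np.ndarray) -> np.ndarray:
    """Clip the local update and add Gaussian noise before release
    (local randomization, as in the distributed/shuffle DP models)."""
    norm = np.linalg.norm(local_gradient)
    clipped = local_gradient * min(1.0, CLIP_NORM / max(norm, 1e-12))
    return clipped + rng.normal(0.0, NOISE_STD * CLIP_NORM,
                                size=local_gradient.shape)

def self_checkin_round(local_gradients):
    """One round of self check-in: each client flips its own coin to decide
    whether to participate; the server performs no subsampling itself."""
    contributions = [
        client_update(g)
        for g in local_gradients
        if rng.random() < CHECKIN_PROB  # independent, client-side decision
    ]
    if not contributions:
        return None  # the protocol must tolerate rounds with no check-ins
    return np.mean(contributions, axis=0)

def gaussian_rdp(alpha: float, sigma: float) -> float:
    """Standard RDP of the (unamplified) Gaussian mechanism with unit
    sensitivity: epsilon(alpha) = alpha / (2 * sigma^2). The paper's tighter
    accountant for self check-in would replace this baseline."""
    return alpha / (2.0 * sigma ** 2)

# Example: simulate one round over random per-client gradients.
grads = [rng.normal(size=10) for _ in range(NUM_CLIENTS)]
aggregate = self_checkin_round(grads)
print("RDP at alpha=8 (no amplification):", gaussian_rdp(8, NOISE_STD))
```

Because participation is decided client-side, the server cannot enforce a fixed cohort size; a round may even have no check-ins at all, which is one reason the privacy analysis must account for a random (Binomially distributed) number of participants rather than an exact subsample.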
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (e.g., AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)
Supplementary Material: zip