Sensitivity-Aware Differentially Private Decentralized Learning with Adaptive Noise

24 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: decentralized learning, differential privacy, adaptive noise, time-varying topology
TL;DR: This paper proposes a novel differentially private decentralized learning method with adaptive DP Gaussian noise and achieves a utility bound matching that of the server-client distributed counterpart, without relying on the bounded gradient assumption.
Abstract: Most existing decentralized learning methods with differential privacy (DP) inject Gaussian noise at a fixed level throughout training, regardless of gradient convergence, which compromises model accuracy without providing additional privacy benefits. In this paper, we propose a novel $\underline{\text{D}}$ifferentially $\underline{\text{P}}$rivate $\underline{\text{D}}$ecentralized learning approach, termed AdaD$^2$P, which employs $\underline{\text{Ada}}$ptive noise calibrated to a real-time, gradient-norm-based estimate of the sensitivity of local updates, and which supports time-varying communication topologies. Compared with existing solutions, the integration of adaptive noise enables us to enhance model accuracy while preserving the $(\epsilon,\delta)$-privacy budget. We prove that AdaD$^2$P achieves a utility bound of $\mathcal{O}\left( \sqrt{d\log \left( \frac{1}{\delta} \right)}/(\sqrt{n}J\epsilon) \right)$, where $J$ and $n$ are the number of local samples and nodes, respectively, and $d$ is the dimension of the decision variable; this bound matches that of distributed counterparts with server-client structures, without relying on the stringent bounded gradient assumption commonly used in previous works. Our theoretical analysis reveals the inherent advantage of adaptive noise over constant noise in AdaD$^2$P. Extensive experiments on two benchmark datasets demonstrate the superiority of AdaD$^2$P over its counterparts, especially under a strong level of privacy guarantee.
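The sketch below is not the authors' implementation; it is a minimal illustration of the idea the abstract describes: each node scales its Gaussian noise to a gradient-norm-based sensitivity estimate of the local update (rather than a fixed level), then mixes models over a time-varying topology. All names and constants (sigma_mult, clip_cap, the mixing matrix construction) are illustrative assumptions.

```python
# Illustrative sketch of adaptive-noise DP decentralized learning (assumptions labeled).
import numpy as np

def private_local_step(x, grad, lr, sigma_mult, clip_cap, rng):
    """One differentially private local update on a single node.

    The sensitivity of the update x - lr * grad is taken to be proportional to
    lr * ||grad|| (capped by clip_cap), so the Gaussian noise standard deviation
    shrinks as gradients converge instead of staying constant.
    """
    grad_norm = np.linalg.norm(grad)
    sensitivity = lr * min(grad_norm, clip_cap)      # real-time sensitivity estimate
    noise = rng.normal(0.0, sigma_mult * sensitivity, size=x.shape)
    return x - lr * grad + noise

def gossip_average(local_models, weights):
    """Mix noisy local models with a (possibly time-varying) doubly stochastic matrix."""
    stacked = np.stack(local_models)                 # shape: (n_nodes, d)
    return list(weights @ stacked)                   # row i becomes node i's new model

# Toy usage: n nodes minimizing simple quadratics f_i(x) = 0.5 * ||x - c_i||^2.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 4, 10
    centers = rng.normal(size=(n, d))
    models = [np.zeros(d) for _ in range(n)]

    for t in range(200):
        # Time-varying topology: a fresh doubly stochastic mixing matrix each round
        # (here, a random convex combination of the identity and uniform averaging).
        alpha = rng.uniform(0.1, 0.9)
        W = alpha * np.eye(n) + (1 - alpha) * np.full((n, n), 1.0 / n)

        models = [
            private_local_step(models[i], models[i] - centers[i],
                               lr=0.1, sigma_mult=0.5, clip_cap=5.0, rng=rng)
            for i in range(n)
        ]
        models = gossip_average(models, W)
```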
Supplementary Material: pdf
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9180