Keywords: differential privacy, empirical risk minimization, stochastic convex optimization, Langevin diffusion
TL;DR: We show that Langevin diffusions are universal algorithms for private optimization: they provide optimal results for DP-ERM and DP-SCO, under pure and approximate DP, for both convex and strongly convex losses.
Abstract: In this paper, we revisit the problems of differentially private empirical risk minimization (DP-ERM) and differentially private stochastic convex optimization (DP-SCO). We show that a well-studied continuous-time algorithm from statistical physics, called Langevin diffusion (LD), simultaneously provides optimal privacy/utility trade-offs for both DP-ERM and DP-SCO, under both $\epsilon$-DP and $(\epsilon,\delta)$-DP, for both convex and strongly convex loss functions. We establish new time- and dimension-independent uniform stability properties of LD, which yield the corresponding optimal excess population risk guarantees for $\epsilon$-DP. An important attribute of our DP-SCO guarantees for $\epsilon$-DP is that they match the optimal non-private bounds as $\epsilon\to\infty$.
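The paper analyzes the continuous-time Langevin diffusion; as an illustration only, here is a minimal sketch of its standard Euler–Maruyama discretization (noisy gradient descent) on a toy strongly convex quadratic loss. The step size `eta`, noise scale `sigma`, and the quadratic loss are illustrative choices, not the paper's calibrated parameters.

```python
import numpy as np

def langevin_step(w, grad_fn, eta, sigma, rng):
    """One Euler-Maruyama step of a Langevin diffusion:
    dW_t = -grad L(W_t) dt + noise, discretized with step size eta."""
    noise = rng.normal(0.0, 1.0, size=w.shape)
    return w - eta * grad_fn(w) + np.sqrt(2.0 * eta) * sigma * noise

# Illustrative run on L(w) = ||w||^2 / 2, whose gradient is w.
rng = np.random.default_rng(0)
w = np.ones(5)
for _ in range(1000):
    w = langevin_step(w, lambda v: v, eta=0.01, sigma=0.1, rng=rng)
# The iterate concentrates near the minimizer w* = 0,
# up to noise of order sigma.
```

In the DP setting, the injected Gaussian noise is what provides privacy, with `sigma` calibrated to the gradient sensitivity and the privacy budget; the paper's contribution is the analysis of the continuous-time limit of this process, not the discrete sketch above.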
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Optimization (eg, convex and non-convex optimization)