Abstract: We study lower bounds for differentially private empirical risk minimization (DP-ERM) of convex functions in both the constrained and unconstrained settings, with respect to the general $\ell_p$ norm, beyond the $\ell_2$ norm considered in most previous work.
We provide a simple black-box reduction that lifts lower bounds from the constrained case to the unconstrained case.
Moreover, for $(\epsilon,\delta)$-DP, we establish the optimal $\Omega(\frac{\sqrt{d \log(1/\delta)}}{\epsilon n})$ lower bound in both the constrained and unconstrained cases, for every $\ell_p$ geometry with $p\geq 1$, by considering the $\ell_1$ loss over the $\ell_{\infty}$ ball.
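For concreteness, the DP-ERM setting and the stated lower bound can be sketched as follows; the notation here is our own gloss on the abstract, not verbatim from the paper:

```latex
% DP-ERM: given a dataset D = (z_1, ..., z_n), minimize the empirical risk
% over w in a convex set C (constrained case) or over all of R^d
% (unconstrained case), subject to (eps, delta)-differential privacy.
\[
  \mathcal{L}(w; D) \;=\; \frac{1}{n}\sum_{i=1}^{n} \ell(w, z_i),
  \qquad w \in \mathcal{C} \subseteq \mathbb{R}^d .
\]
% Stated lower bound on the excess empirical risk of any (eps, delta)-DP
% algorithm outputting w_priv, holding for every l_p geometry with p >= 1
% (witnessed by the l_1 loss over the l_infty ball):
\[
  \mathbb{E}\big[\mathcal{L}(w^{\mathrm{priv}}; D)\big]
  \;-\; \min_{w \in \mathcal{C}} \mathcal{L}(w; D)
  \;=\; \Omega\!\left(\frac{\sqrt{d\,\log(1/\delta)}}{\epsilon n}\right).
\]
```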
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (eg, AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)