Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: optimization, differential privacy, non-convex, non-smooth
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: We introduce a new zeroth-order algorithm for private stochastic optimization on nonconvex and nonsmooth objectives.
Given a dataset of size $M$, our algorithm ensures $(\alpha,\alpha\rho^2/2)$-Rényi differential privacy and finds a $(\delta,\epsilon)$-stationary point provided that $M=\tilde\Omega\!\left(\frac{d}{\delta\epsilon^3} + \frac{d^{3/2}}{\rho\delta\epsilon^2}\right)$.
This matches the optimal complexity of its non-private zeroth-order analogue.
Notably, even though the objective is nonsmooth, we obtain privacy ``for free'' whenever $\rho \ge \sqrt{d}\epsilon$.
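As a brief illustration (a one-line check using only the two terms stated in the bound above), the ``for free'' regime is exactly where the non-private term dominates:
$$\frac{d}{\delta\epsilon^{3}} \;\ge\; \frac{d^{3/2}}{\rho\,\delta\,\epsilon^{2}} \quad\Longleftrightarrow\quad \rho \;\ge\; \sqrt{d}\,\epsilon.$$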
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: pdf
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Primary Area: optimization
Submission Number: 6503