Non-asymptotic Analysis of Stochastic Gradient Descent under Local Differential Privacy Guarantee

23 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Differential Privacy; Stochastic Gradient Descent; Non-asymptotic Analysis
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We provide a comprehensive non-asymptotic analysis of the convergence of the DP-SGD algorithm, where individual users retain the autonomy to specify their differential privacy budgets.
Abstract: Differentially Private Stochastic Gradient Descent (DP-SGD) plays a central role in private machine learning. Nevertheless, few studies have provided a theoretical analysis of DP-SGD, particularly in the more challenging setting where individual users retain the autonomy to specify their own differential privacy budgets. In this work, we conduct a comprehensive non-asymptotic analysis of the convergence of DP-SGD and its variants, allowing individual users to assign different privacy guarantees to models trained and released with DP-SGD. Most importantly, we provide practical guidelines on how various hyperparameters, such as the step size, the parameter dimension, and the privacy budgets, affect the convergence rate. The problem we consider covers the loss functions most commonly used in standard machine learning. For strongly convex losses, we establish an upper bound on the expected distance between the estimator and the global optimum. For non-strongly convex losses, we bound the gap between the loss incurred by the estimator and the optimal loss. The proposed estimators are validated both theoretically, through rigorous mathematical derivations, and empirically, through extensive numerical experiments.
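
To make the personalized-budget setting concrete, below is a minimal sketch (not the paper's exact estimator or analysis) of locally private SGD for logistic regression: each user clips their per-example gradient and adds Gaussian noise calibrated to their own budget eps_i before the server aggregates. The function name ldp_sgd_logreg, the clipping norm, the learning rate, and the toy data are illustrative assumptions.

```python
import numpy as np

def ldp_sgd_logreg(X, y, eps, delta=1e-5, clip=1.0, lr=0.1, epochs=50, seed=0):
    """Sketch of locally private SGD for logistic regression with per-user budgets.

    Each user i clips their per-example gradient to norm `clip` and adds Gaussian
    noise calibrated to their own budget eps[i] (Gaussian mechanism) before the
    server averages. Names and parameters are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    # Per-user noise scale from the Gaussian mechanism:
    # sigma_i = clip * sqrt(2 ln(1.25/delta)) / eps_i
    sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / np.asarray(eps)
    for _ in range(epochs):
        grads = np.zeros(d)
        for i in range(n):
            margin = y[i] * (X[i] @ w)
            g = -y[i] * X[i] / (1.0 + np.exp(margin))          # per-example logistic-loss gradient
            g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # clip gradient to norm `clip`
            g += rng.normal(0.0, sigma[i], size=d)             # user-side Gaussian noise
            grads += g
        w -= lr * grads / n                                     # server-side averaged update
    return w

# Toy usage: 200 users, labels in {-1, +1}, heterogeneous budgets drawn from [0.5, 2.0].
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = np.sign(X @ w_true)
eps = rng.uniform(0.5, 2.0, size=200)
w_hat = ldp_sgd_logreg(X, y, eps)
```

Under this kind of scheme, users with smaller eps_i inject more noise, so the heterogeneous budgets directly shape the convergence behavior that the paper's non-asymptotic bounds characterize.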
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6671