Keywords: large language models, differential privacy, zeroth-order optimization
Abstract: Differential privacy is a framework for mitigating privacy risks by enforcing algorithmic stability. DP-SGD enables privacy-preserving model training, but introduces performance loss and significant engineering challenges. We introduce DP-ZO, a new method for fine-tuning large language models that preserves the privacy of training data by privatizing zeroth-order optimization. The key insight behind our design is that the gradient direction in the zeroth-order optimization we use is random; the only information derived from the training data is the step size, a scalar. We therefore only need to privatize this scalar step size, which is memory-efficient. DP-ZO, which can be instantiated with either Laplace or Gaussian noise, provides a strong privacy-utility trade-off across different tasks and model sizes under conservative privacy budgets.
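To make the abstract's key insight concrete, here is a minimal sketch of what one DP-ZO update might look like, assuming an SPSA-style two-point finite-difference estimator and the Gaussian-noise instantiation. All names (dp_zo_step, loss_fn, clip, sigma) and the specific clipping scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def dp_zo_step(theta, loss_fn, batch, lr=1e-3, eps=1e-3,
               clip=1.0, sigma=1.0, rng=None):
    """One illustrative DP-ZO update (sketch, not the paper's code).

    The random direction z is data-independent; only the per-example
    scalar finite-difference estimate touches the training data, so
    that scalar alone is clipped and noised (Gaussian mechanism here).
    """
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(theta.shape)  # random direction, carries no private info

    # Per-example projected gradient estimates: each is a single scalar.
    scalars = np.array([
        (loss_fn(theta + eps * z, x) - loss_fn(theta - eps * z, x)) / (2 * eps)
        for x in batch
    ])

    # Clip each scalar to bound sensitivity, then add calibrated noise.
    clipped = np.clip(scalars, -clip, clip)
    noisy_sum = clipped.sum() + sigma * clip * rng.standard_normal()
    step = noisy_sum / len(batch)

    # Reuse the same public direction z, scaled by the privatized scalar.
    return theta - lr * step * z

# Toy usage: pull theta toward a cloud of points under a squared loss.
loss = lambda th, x: float(np.sum((th - x) ** 2))
data = np.random.default_rng(0).standard_normal((32, 4))
theta = np.zeros(4)
for _ in range(200):
    theta = dp_zo_step(theta, loss, data)
```

Because the update stores only the direction z and a scalar, no per-parameter gradient tensor is ever materialized, which is the source of the memory efficiency the abstract claims.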
Submission Number: 17