Adaptive Differential Privacy for Language Model Training

Published: 27 Mar 2022, Last Modified: 05 May 2023, FL4NLP@ACL2022
Keywords: differential privacy, language model
TL;DR: A new differential privacy approach to training large language models.
Abstract: Although differential privacy (DP) can protect language models from leaking private information, its indiscriminate protection of all data points reduces its practical utility. Previous works improve DP training by discriminating between private and non-private data, but they rely on datasets annotated with prior privacy information, which is not available in real-world scenarios. In this paper, we propose an Adaptive Differential Privacy (ADP) framework for language modeling that does not require prior privacy information. We estimate the probability that a linguistic item contains private information using a language model. We further propose a new Adam algorithm that adjusts the amount of differential-privacy noise injected into the language model according to the estimated privacy probabilities. Experiments demonstrate that our ADP improves differentially private language modeling, achieving effective protection against canary attacks.
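
The abstract outlines two mechanisms: a language-model-based estimate of how likely an item is to contain private information, and an Adam variant whose injected DP noise scales with those estimates. Below is a minimal NumPy sketch of that idea; the function names (`estimate_privacy_prob`, `adaptive_dp_adam_step`), the surprise-based privacy heuristic, and the mean-probability noise schedule are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def estimate_privacy_prob(token_logprobs):
    """Hypothetical heuristic: items a language model finds surprising
    (low average log-likelihood) are treated as more likely private."""
    avg_lp = float(np.mean(token_logprobs))  # log-probs are <= 0
    return 1.0 - np.exp(avg_lp)              # value in [0, 1)

def adaptive_dp_adam_step(params, per_example_grads, privacy_probs,
                          m, v, t, lr=1e-3, clip_norm=1.0, base_sigma=1.0,
                          beta1=0.9, beta2=0.999, eps=1e-8):
    """One sketched Adaptive-DP Adam step: per-example gradients are
    clipped as in DP-SGD, and the Gaussian noise scale is modulated by
    the batch's estimated privacy probabilities."""
    # Clip each per-example gradient to bound its sensitivity.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    grad_sum = np.sum(clipped, axis=0)

    # Assumed schedule: scale the noise by the batch's mean privacy
    # probability (an illustrative choice, not from the paper).
    sigma = base_sigma * float(np.mean(privacy_probs))
    noisy_grad = (grad_sum + np.random.normal(
        0.0, sigma * clip_norm, size=grad_sum.shape)) / len(clipped)

    # Standard Adam moment updates on the noisy gradient.
    m = beta1 * m + (1 - beta1) * noisy_grad
    v = beta2 * v + (1 - beta2) * noisy_grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, m, v

# Toy usage with random data (dimensions are illustrative).
rng = np.random.default_rng(0)
params = rng.normal(size=10)
grads = [rng.normal(size=10) for _ in range(4)]
probs = [estimate_privacy_prob(rng.uniform(-5, 0, size=20)) for _ in range(4)]
m, v = np.zeros(10), np.zeros(10)
params, m, v = adaptive_dp_adam_step(params, grads, probs, m, v, t=1)
```

Under this sketch, batches estimated as more likely private receive more noise, while mostly non-private batches retain a cleaner gradient signal, which is the utility/privacy trade-off the abstract motivates.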