Differentially Private Language Models Benefit from Public Pre-training

14 May 2023, OpenReview Archive Direct Upload
Abstract: Language modeling is a keystone task in natural language processing. When training a language model on sensitive information, differential privacy (DP) allows us to quantify the degree to which our private data is protected. However, training algorithms which enforce differential privacy often lead to degradation in model quality. We study the feasibility of learning a language model which is simultaneously high-quality and privacy preserving by tuning a public base model on a private corpus. We find that DP fine-tuning boosts the performance of language models in the private domain, making the training of such models possible.
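For readers who want to experiment with the general recipe the abstract describes, DP fine-tuning of a publicly pre-trained model on a private corpus, here is a minimal sketch using PyTorch and the Opacus implementation of DP-SGD. The toy model, the synthetic "private" data, and all hyperparameters are illustrative assumptions, not the paper's actual setup; in the paper's setting the model would be a real pre-trained language model.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

VOCAB, SEQ_LEN, EMB = 1000, 16, 64

# Stand-in for a publicly pre-trained language model (hypothetical toy
# next-token predictor; a real experiment would load pre-trained weights).
model = nn.Sequential(
    nn.Embedding(VOCAB, EMB),
    nn.Flatten(),
    nn.Linear(SEQ_LEN * EMB, VOCAB),
)

# Placeholder "private corpus": random token sequences with next-token targets.
inputs = torch.randint(0, VOCAB, (256, SEQ_LEN))
targets = torch.randint(0, VOCAB, (256,))
loader = DataLoader(TensorDataset(inputs, targets), batch_size=32)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# Opacus wraps the model, optimizer, and loader so training runs DP-SGD:
# per-example gradient clipping plus calibrated Gaussian noise.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.1,   # illustrative value
    max_grad_norm=1.0,      # per-example gradient clipping bound
)

# DP fine-tuning loop on the private data.
for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# Privacy accounting: the epsilon spent so far at a fixed delta.
print("epsilon =", privacy_engine.get_epsilon(delta=1e-5))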