SeqPATE: Differentially Private Text Generation via Knowledge Distillation

Published: 28 Jan 2022, Last Modified: 13 Feb 2023 · ICLR 2022 Submitted · Readers: Everyone
Keywords: Natural Language Generation, Text Generation, Privacy Protection
Abstract: Protecting the privacy of user data is crucial when training neural text generation models, which may leak sensitive user information during generation. Differentially private (DP) learning algorithms provide guarantees against identifying the existence of a training sample from model outputs. PATE is a DP learning algorithm that works well with large models such as GPT. In this paper, we propose SeqPATE, which adapts PATE to text generation while satisfying DP. There are two key challenges in adapting PATE to text generation: (i) obtaining sequence-level supervision for text generation, and (ii) reducing the noise required to protect privacy given the large output space (i.e., the vocabulary size). For (i), we generate pseudo inputs and reduce the sequence generation problem to next-word prediction. For (ii), we reduce the output space with top-$k$ and top-$p$ selection strategies that dynamically filter the candidate words, and we refine the teacher aggregation mechanism of PATE to avoid the low agreement rates caused by voting over the large output space. To limit the privacy loss, we design an efficient knowledge distillation scheme that reduces the time spent distilling from the private data. We apply SeqPATE to a simple text generation task (sentence completion) and achieve 39\% and 28\% gains in BLEU-4 on two datasets.
One-sentence Summary: SeqPATE: Differentially Private Text Generation via Knowledge Distillation
Supplementary Material: zip
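
To make the abstract's aggregation idea concrete, the sketch below averages teacher next-word distributions, shrinks the candidate set with top-$k$ and top-$p$ filtering, and perturbs the result with Gaussian noise before it is used as soft supervision for a student. This is a minimal illustration under assumptions: the function names, the averaging scheme, and the noise calibration are not the authors' implementation and, as written, carry no formal privacy guarantee.

```python
# Illustrative sketch only (not SeqPATE's actual code): PATE-style noisy
# aggregation of teacher next-word distributions with top-k / top-p filtering.
import numpy as np


def filter_top_k_top_p(avg_probs: np.ndarray, k: int = 50, p: float = 0.9) -> np.ndarray:
    """Return the vocabulary indices kept after top-k and top-p (nucleus) filtering."""
    order = np.argsort(avg_probs)[::-1]           # words sorted by averaged probability
    top_k = order[:k]                             # keep at most k candidates
    cumulative = np.cumsum(avg_probs[top_k])
    cutoff = np.searchsorted(cumulative, p) + 1   # smallest prefix covering mass p
    return top_k[:cutoff]


def aggregate_teachers(teacher_probs: np.ndarray, k: int = 50, p: float = 0.9,
                       noise_scale: float = 0.1, rng=None):
    """Aggregate per-teacher next-word distributions into a noisy supervision signal.

    teacher_probs: shape (num_teachers, vocab_size), one distribution per teacher.
    Returns (candidate word indices, noisy probabilities over those candidates).
    """
    rng = rng or np.random.default_rng()
    avg = teacher_probs.mean(axis=0)              # average the teachers' soft votes
    candidates = filter_top_k_top_p(avg, k, p)    # shrink the output space before adding noise
    noisy = avg[candidates] + rng.normal(0.0, noise_scale, size=candidates.shape)
    noisy = np.clip(noisy, 0.0, None)
    noisy /= noisy.sum()                          # renormalize into a distribution
    return candidates, noisy


# Toy usage: 4 teachers over a 1,000-word vocabulary.
rng = np.random.default_rng(0)
teacher_probs = rng.dirichlet(np.ones(1000), size=4)
cands, probs = aggregate_teachers(teacher_probs, k=20, p=0.9, noise_scale=0.05, rng=rng)
print(cands[:5], probs[:5])  # candidate word ids and their noisy aggregated probabilities
```

A student model would then be trained against these noisy candidate distributions rather than the raw private data; filtering the vocabulary first keeps the noise from overwhelming the teachers' agreement on plausible next words.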
