Local Differential Privacy for Privacy-Preserving NLP Tasks

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: In this paper, we propose a Local Differentially Private Natural Language Processing (LDP-NLP) model that protects the privacy of user input sentences during both training and inference while requiring no trust in the server. Compared to existing methods, the proposed approach significantly reduces the calibrated noise power, and thus improves model accuracy, by incorporating (a) an LDP layer, (b) sub-sampling and up-sampling privacy-amplification algorithms for training and inference, and (c) DP composition algorithms for noise calibration. To the best of our knowledge, LDP-NLP is the first solution that guarantees privacy over the entire training/inference dataset, whereas existing methods can only guarantee privacy for a single training or inference step. Furthermore, the total privacy cost is, for the first time, reduced to a practical range, i.e., a privacy budget of less than 10, with an accuracy loss of only 2-5% relative to the accuracy upper bound of the original model without privacy guarantees.
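
The abstract does not detail how the LDP layer perturbs its inputs. Below is a minimal sketch of what noise injection at such a layer might look like, assuming the Laplace mechanism applied to L1-clipped token embeddings; the function name `ldp_layer`, the clipping bound, and all parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def ldp_layer(embeddings: np.ndarray, epsilon: float, clip_norm: float = 1.0) -> np.ndarray:
    """Perturb each token embedding with Laplace noise for epsilon-LDP.

    Clipping each row to L1 norm <= clip_norm bounds the L1 distance between
    any two possible inputs by 2 * clip_norm, which is the sensitivity used
    to scale the noise. All choices here are illustrative, not the paper's.
    """
    # Clip each embedding row so its L1 norm is at most clip_norm.
    norms = np.abs(embeddings).sum(axis=-1, keepdims=True)
    clipped = embeddings * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Laplace mechanism: per-coordinate scale = sensitivity / epsilon.
    scale = 2.0 * clip_norm / epsilon
    return clipped + np.random.laplace(loc=0.0, scale=scale, size=clipped.shape)

# Example: perturb a batch of 8 token embeddings of dimension 768.
noisy = ldp_layer(np.random.randn(8, 768), epsilon=2.0)
```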
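
Components (b) and (c) amount to privacy accounting. A sketch of how the standard sub-sampling amplification bound and the advanced composition theorem could combine to bound the total privacy cost over many training steps; the formulas are standard results, but the specific numbers are illustrative, not results from the paper:

```python
import math

def amplified_epsilon(eps: float, q: float) -> float:
    """Amplification by sub-sampling: an eps-DP mechanism run on a
    Poisson-subsampled batch with sampling rate q satisfies eps'-DP
    with eps' = ln(1 + q * (exp(eps) - 1))."""
    return math.log(1.0 + q * (math.exp(eps) - 1.0))

def advanced_composition(eps_step: float, steps: int, delta_prime: float) -> float:
    """Advanced composition (Dwork-Rothblum-Vadhan): total epsilon over
    `steps` adaptive eps_step-DP releases, at the cost of an additional
    delta_prime in the delta term."""
    return (math.sqrt(2.0 * steps * math.log(1.0 / delta_prime)) * eps_step
            + steps * eps_step * (math.exp(eps_step) - 1.0))

# Illustrative numbers only: per-step epsilon after sub-sampling,
# then the total cost over 1000 training steps.
eps_step = amplified_epsilon(eps=1.0, q=0.01)
eps_total = advanced_composition(eps_step, steps=1000, delta_prime=1e-5)
print(f"per-step eps = {eps_step:.4f}, total eps = {eps_total:.4f}")
```

Under these assumed settings the total budget lands well under 10, which is the kind of calibration the abstract's third component refers to.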