Differential Privacy for Transformer Embeddings with Nonparametric Variational Information Bottleneck
Keywords: Nonparametric Variational Information Bottleneck, Rényi Differential Privacy, Bayesian Differential Privacy, Transformers, Differential Privacy
TL;DR: We show that nonparametric variational information bottleneck is effective at calibrating noise for sharing transformer embeddings with differential privacy.
Abstract: We propose a privacy-preserving method for sharing text data by releasing noisy versions of its transformer embeddings.
It has been shown that hidden representations learned by deep models can encode sensitive information from the input, making it possible for adversaries to recover the input data with considerable accuracy. This problem is exacerbated in transformer embeddings because they consist of multiple vectors, one per token. To mitigate this risk, we propose Nonparametric Variational Differential Privacy (NVDP), which combines useful data sharing with strong privacy protection. We take a differential privacy approach: we integrate a Nonparametric Variational Information Bottleneck (NVIB) layer into the transformer architecture to inject noise into its multi-vector embeddings, thereby hiding information, and we measure privacy protection with Rényi divergence and its corresponding Bayesian Differential Privacy (BDP) guarantee. Training the NVIB layer calibrates the noise level according to utility. We evaluate NVDP on the GLUE benchmark and show that varying the noise level yields a useful tradeoff between privacy and accuracy: at lower noise levels, our model maintains high accuracy while still offering strong privacy guarantees.
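A minimal sketch of the underlying idea, assuming fixed isotropic Gaussian noise rather than the learned NVIB calibration described in the abstract: the snippet below adds Gaussian noise to per-token transformer embeddings and reports the standard order-alpha Rényi-DP guarantee of the Gaussian mechanism, eps(alpha) = alpha * Delta^2 / (2 * sigma^2). The function names (`noisy_embeddings`, `gaussian_rdp_epsilon`) and the clipping step are hypothetical illustrations, not the paper's method.

```python
import numpy as np

def noisy_embeddings(embeddings: np.ndarray, sigma: float, rng=None) -> np.ndarray:
    """Add isotropic Gaussian noise to a (num_tokens, dim) embedding matrix.

    Stand-in for the learned noise injection: sigma is fixed here, whereas
    the NVIB layer in the paper calibrates the noise level during training.
    """
    rng = np.random.default_rng() if rng is None else rng
    return embeddings + rng.normal(scale=sigma, size=embeddings.shape)

def gaussian_rdp_epsilon(alpha: float, sensitivity: float, sigma: float) -> float:
    """Rényi-DP epsilon of order alpha for the Gaussian mechanism:
    eps(alpha) = alpha * Delta^2 / (2 * sigma^2)."""
    return alpha * sensitivity**2 / (2.0 * sigma**2)

# Example: 12 token embeddings of dimension 768, clipped to L2 norm <= 1
# so the mechanism's sensitivity is bounded by 1.
emb = np.random.default_rng(0).standard_normal((12, 768))
emb = emb / np.maximum(np.linalg.norm(emb, axis=1, keepdims=True), 1.0)
released = noisy_embeddings(emb, sigma=1.0)
print(gaussian_rdp_epsilon(alpha=2.0, sensitivity=1.0, sigma=1.0))  # -> 1.0
```

Raising sigma tightens the Rényi-DP guarantee (smaller epsilon) at the cost of utility, which is the privacy-accuracy tradeoff the abstract describes.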
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 16879