Mitigating Statistical Bias within Differentially Private Synthetic Data

Published: 20 May 2022 (Last Modified: 20 Oct 2024). UAI 2022 Oral.
Keywords: Differential Privacy, Generative Adversarial Networks, Importance Sampling
TL;DR: We investigate how importance weighting can be used to improve the performance of downstream predictors estimated on differentially private data.
Abstract: Increasing interest in privacy-preserving machine learning has led to new and evolved approaches for generating private synthetic data from undisclosed real data. However, mechanisms of privacy preservation can significantly reduce the utility of synthetic data, which in turn impacts downstream tasks such as learning predictive models or inference. We propose several re-weighting strategies using privatised likelihood ratios that not only mitigate statistical bias of downstream estimators but also have general applicability to differentially private generative models. Through large-scale empirical evaluation, we show that private importance weighting provides simple and effective privacy-compliant augmentation for general applications of synthetic data.
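The abstract describes re-weighting synthetic samples by (privatised) likelihood ratios so that downstream estimators are less biased. A common way to obtain such ratios is density-ratio estimation via probabilistic classification: a discriminator trained to separate real from synthetic data yields importance weights w(x) ≈ p_real(x)/p_synth(x). The sketch below illustrates only this generic re-weighting idea on toy data; it does not reproduce the paper's privatisation mechanism, and the data, classifier choice, and variable names are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical toy data: "real" samples from N(0, 1), "synthetic"
# samples from a biased generator N(0.5, 1).
real = rng.normal(0.0, 1.0, size=(2000, 1))
synth = rng.normal(0.5, 1.0, size=(2000, 1))

# Density-ratio estimation via classification: train a discriminator
# to separate real (label 1) from synthetic (label 0); its odds
# p(real|x) / p(synth|x) approximate p_real(x) / p_synth(x).
X = np.vstack([real, synth])
y = np.concatenate([np.ones(len(real)), np.zeros(len(synth))])
clf = LogisticRegression().fit(X, y)
p = clf.predict_proba(synth)[:, 1]
weights = p / (1.0 - p)

# Importance-weighted estimate of E[x] under the real distribution,
# computed from synthetic samples only. The unweighted (naive) mean
# inherits the generator's bias; the weighted mean corrects it.
naive = synth[:, 0].mean()
weighted = np.average(synth[:, 0], weights=weights)
print(f"naive={naive:.3f}  weighted={weighted:.3f}")
```

In the paper's setting the weights themselves would additionally be computed under differential privacy; this sketch omits that step and shows only the bias-correction effect of importance weighting on a downstream estimator.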
Supplementary Material: zip
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/arxiv:2108.10934/code)