UAE-PUPET: An Uncertainty-Autoencoder-Based Privacy and Utility Preserving End-to-End Transformation

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: privacy, privacy utility tradeoff, autoencoders, deep learning, machine learning, adversary, privacy utility metric, gan, game theory, min-max, variational autoencoder
Abstract: We propose a new framework that addresses the privacy-utility tradeoff problem under two centralized settings: a dynamic setting and a constant setting. The dynamic setting corresponds to a min-max two-player game, whereas the constant setting corresponds to a generator that tries to outperform an adversary already trained on ground-truth data. In both settings, we use the same architecture, consisting of a generator and a discriminator, where the generator is an encoder-decoder pair and the discriminator comprises an adversary and a utility provider. Unlike previous research on this kind of architecture, which leverages variational autoencoders (VAEs) and forces the learned latent representation toward a Gaussian prior, our proposed technique removes the Gaussian restriction on the latent variables and focuses only on the end-to-end stochastic mapping of the input to privatized data. We also show that testing a privacy mechanism against a single adversary is usually not sufficient to capture the leakage of private information, as better adversaries can always be created by training under different conditions. Therefore, we test our proposed mechanism under five different types of adversary models. To compare privacy mechanisms under a fair framework, we propose a new metric called the Utility-Privacy Tradeoff (UPT) curve, obtained as the upper convex hull of the utility-privacy tradeoff operating points achievable under the most powerful of the five adversary models. Finally, we test our framework on four different datasets: MNIST, Fashion MNIST, UCI Adult, and US Census Demographic Data, providing a wide range of possible private and utility attributes. Through comparative analysis, our results show better privacy and utility guarantees, under our more rigorous adversary model, than existing works, even when the latter are evaluated under their original restrictive single-adversary models.
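The min-max dynamic setting described in the abstract can be sketched as a GAN-style combined objective: the generator wants the utility provider to remain accurate while driving up the adversary's cross-entropy on the private attribute. The sketch below is a minimal NumPy illustration under assumed binary private and utility attributes; the function names (`bce`, `generator_loss`) and the trade-off weight `lam` are hypothetical and not taken from the paper.

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy, averaged over the batch."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def generator_loss(utility_true, utility_pred, private_true, adversary_pred, lam=1.0):
    """Generator objective for the min-max game (illustrative form only):
    keep the utility attribute predictable (low utility cross-entropy)
    while confusing the adversary on the private attribute
    (high adversary cross-entropy, hence the negative sign)."""
    return bce(utility_true, utility_pred) - lam * bce(private_true, adversary_pred)
```

As a sanity check on the sign convention: a privatization that keeps utility predictions accurate while leaving the adversary at chance should score lower (better) than one where the adversary recovers the private attribute.

```python
y_util = np.array([1.0, 0.0, 1.0])
y_priv = np.array([1.0, 0.0, 1.0])
good_util = np.array([0.9, 0.1, 0.9])

confused_adv = np.array([0.5, 0.5, 0.5])   # adversary at chance
accurate_adv = np.array([0.9, 0.1, 0.9])   # adversary recovers the secret

loss_private = generator_loss(y_util, good_util, y_priv, confused_adv)
loss_leaky = generator_loss(y_util, good_util, y_priv, accurate_adv)
assert loss_private < loss_leaky
```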
One-sentence Summary: A new framework that addresses the privacy-utility tradeoff problem, guards against privacy leakage under rigorous adversaries, and proposes a new metric called the Utility-Privacy Tradeoff (UPT) curve.
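The UPT curve is defined as the upper convex hull of achievable utility-privacy operating points. A minimal sketch of that computation, assuming each operating point is a `(privacy, utility)` pair and using Andrew's monotone-chain upper hull (the representation and function names are illustrative, not from the paper):

```python
def _cross(o, a, b):
    """2D cross product of OA x OB; > 0 means a left (counter-clockwise) turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def upt_curve(points):
    """Upper convex hull of (privacy, utility) operating points:
    the points not dominated by any convex combination of others."""
    pts = sorted(set(points))
    hull = []
    for p in pts:
        # pop any point that makes a non-right turn: it lies on or
        # below the segment joining its neighbors, so it is dominated
        while len(hull) >= 2 and _cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    return hull
```

For example, an operating point such as `(0.3, 0.6)` that offers both less privacy and less utility than a mix of its neighbors is dropped from the curve:

```python
ops = [(0.0, 0.5), (0.2, 0.9), (0.3, 0.6), (0.5, 0.95), (0.8, 0.97), (1.0, 0.98)]
curve = upt_curve(ops)
assert (0.3, 0.6) not in curve
```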