Bounding Membership Inference

Published: 28 Jan 2022, Last Modified: 13 Feb 2023 (ICLR 2022 Submission)
Keywords: differential privacy, membership inference
Abstract: Differential Privacy (DP) is the de facto standard for reasoning about the privacy guarantees of a training algorithm. Despite the empirical observation that DP reduces the vulnerability of models to existing membership inference (MI) attacks, a theoretical underpinning for why this is the case is largely missing from the literature. In practice, this means that models must be trained with conservative differential privacy guarantees that greatly decrease their accuracy. In this paper, we provide a tighter bound on the accuracy of any membership inference adversary when the training algorithm provides $\epsilon$-DP. Our bound informs the design of a novel privacy amplification scheme: an effective training set is sub-sampled from a larger set prior to the beginning of training, which greatly reduces the bound on MI accuracy. As a result, our scheme enables practitioners to train their models with looser differential privacy guarantees while still limiting the success of any MI adversary; this, in turn, ensures that the model's accuracy is less impacted by the privacy guarantee. Finally, we discuss the implications of our MI bound for machine unlearning.
One-sentence Summary: We bound the accuracy of any membership inference adversary, obtaining a rigorous metric that can be used in conjunction with differential privacy guarantees to understand the privacy of a training algorithm.