Privacy Auditing of Machine Learning using Membership Inference Attacks

Published: 28 Jan 2022, Last Modified: 13 Feb 2023 · ICLR 2022 Submission · Readers: Everyone
Abstract: Membership inference attacks determine whether a given data point was used to train a target model. Such attacks can therefore serve as an auditing tool to quantify the private information that a model leaks about the individual data points in its training set. Over the last five years, a variety of membership inference attacks against machine learning models have been proposed, each exploiting a slightly different signal. These attacks are also designed under different implicit assumptions about the uncertainties the attacker must resolve. Attack success rates therefore do not precisely capture how much information models leak about their data, as they also reflect other uncertainties inherent to the attack algorithm (for example, about the data distribution or the characteristics of the target model). In this paper, we present a framework that makes explicit the implicit assumptions and simplifications made in prior work. We also derive new attack algorithms from our framework that achieve a high AUC score while highlighting the different factors that affect their performance. Our algorithms can thus be used to perform an accurate and informed estimation of the privacy risk of machine learning models. We provide a thorough empirical evaluation of our attack strategies on models trained for various machine learning tasks on benchmark datasets.
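
To make the premise concrete, here is a minimal sketch of the simplest family of membership inference attacks: a loss-threshold attack that scores a point as a member when the target model's loss on it is low. This is only an illustration of the general idea, not the framework or the attack algorithms proposed in the paper; the loss distributions below are synthetic, and all names are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def loss_threshold_attack_auc(member_losses, nonmember_losses):
    """AUC of a loss-threshold membership inference attack.

    Each point is scored by its negative loss under the target model:
    a lower loss (higher score) suggests the point was in the training set.
    Sweeping the threshold over these scores traces out the ROC curve.
    """
    scores = -np.concatenate([member_losses, nonmember_losses])
    labels = np.concatenate([np.ones_like(member_losses),
                             np.zeros_like(nonmember_losses)])
    return roc_auc_score(labels, scores)

# Hypothetical per-example losses: members of the training set tend to
# have lower loss than non-members, since the model has fit them.
rng = np.random.default_rng(0)
member_losses = rng.gamma(shape=2.0, scale=0.1, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.3, size=1000)

print(f"Attack AUC: {loss_threshold_attack_auc(member_losses, nonmember_losses):.3f}")
```

An AUC of 0.5 means the attack cannot distinguish members from non-members, while values close to 1.0 indicate severe leakage; the abstract's point is that such scores also fold in the attacker's own uncertainty, so they are not a pure measure of the model's leakage.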