Provable Membership Inference Privacy

Published: 21 Nov 2022, Last Modified: 05 May 2023 · TSRML 2022
Keywords: privacy, membership inference, re-identification
TL;DR: We propose a novel privacy notion with easily interpretable guarantees and a practical algorithm for achieving it on many machine learning tasks. We precisely characterize the relationship with differential privacy and simulations confirm our theory.
Abstract: In applications involving sensitive data, such as finance and healthcare, the necessity of preserving data privacy can be a significant barrier to machine learning model development. Differential privacy (DP) has emerged as one canonical standard for provable privacy. However, DP's strong theoretical guarantees often come at the cost of a large drop in utility for machine learning, and DP guarantees themselves can be difficult to interpret. In this work, we propose a novel privacy notion, membership inference privacy (MIP), to address these challenges. We give a precise characterization of the relationship between MIP and DP, and show that MIP can be achieved using less randomness than is required for guaranteeing DP, leading to a smaller drop in utility. MIP guarantees are also easily interpretable in terms of the success rate of membership inference attacks. Our theoretical results give rise to a simple algorithm for guaranteeing MIP, which can be used as a wrapper around any algorithm with a continuous output, including parametric model training.
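The abstract does not spell out the wrapper algorithm, but a noise-addition wrapper around a continuous-output training procedure might look like the following minimal sketch. The function names (`mip_wrapper`, `ols`), the Gaussian noise mechanism, and the noise scale `sigma` are all illustrative assumptions, not the paper's actual construction; the paper presumably derives the noise scale needed for an MIP guarantee.

```python
import numpy as np

def mip_wrapper(train_fn, sigma=0.1, seed=None):
    """Hypothetical sketch: wrap a training algorithm by adding Gaussian
    noise to its continuous output. sigma is an illustrative placeholder;
    an actual MIP guarantee would dictate the required noise scale."""
    rng = np.random.default_rng(seed)

    def noisy_train(data):
        params = np.asarray(train_fn(data), dtype=float)
        # Randomize the released parameters to limit what a membership
        # inference attacker can learn from them.
        return params + rng.normal(scale=sigma, size=params.shape)

    return noisy_train

# Usage: wrap an ordinary least-squares fit (a continuous-output algorithm).
def ols(data):
    X, y = data
    return np.linalg.lstsq(X, y, rcond=None)[0]

private_ols = mip_wrapper(ols, sigma=0.05, seed=0)
```

Because the wrapper only touches the output, it applies to any training routine that returns real-valued parameters, which is the "wrapper around any algorithm with a continuous output" property the abstract describes.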