Unsupervised Membership Inference Attacks Against Machine Learning Models

Published: 04 Nov 2021, Last Modified: 02 Nov 2023, PRIML 2021 Poster
Keywords: Data Privacy, Membership Inference, Machine Learning, Temperature Scaling
TL;DR: The paper presents a novel membership inference attack that matches the performance of the state-of-the-art approach while requiring fewer adversarial assumptions and less computation time.
Abstract: As a form of privacy leakage in machine learning (ML), membership inference (MI) attacks aim to infer whether given data samples were used to train a target ML model. Existing state-of-the-art MI attacks in black-box settings adopt a so-called shadow model to perform transfer attacks. Such attacks achieve high inference accuracy but rest on strong adversarial assumptions, such as access to a dataset drawn from the same distribution as the target model’s training data and knowledge of the target model’s structure. We propose a novel MI attack, called UMIA, which probes the target model in an unsupervised way without any shadow model. We relax all the adversarial assumptions above, demonstrating that MI attacks are applicable without any knowledge of the target model or its training set. We empirically show that, with far fewer adversarial assumptions and computational resources, UMIA performs on par with the state-of-the-art supervised MI attack.
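The abstract does not spell out UMIA's mechanism, but the general idea it builds on, that a model behaves differently on training members than on non-members, can be illustrated with a minimal, hypothetical confidence-thresholding probe. This sketch is not the paper's algorithm; the threshold value and the `infer_membership` helper are assumptions for illustration only.

```python
import numpy as np

def infer_membership(confidences: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Guess membership per sample from the target model's softmax outputs.

    Illustrative sketch only (not the paper's UMIA method): samples on
    which the model is highly confident are guessed to be training
    members, since models tend to be more confident on data they were
    trained on.

    confidences: (n_samples, n_classes) softmax probabilities.
    """
    max_conf = confidences.max(axis=1)  # top predicted-class probability
    return max_conf >= threshold        # True -> guessed training member

# Toy probe outputs: two confident predictions, one uncertain one.
probs = np.array([
    [0.97, 0.02, 0.01],  # high confidence -> guessed member
    [0.40, 0.35, 0.25],  # low confidence  -> guessed non-member
    [0.05, 0.93, 0.02],  # high confidence -> guessed member
])
print(infer_membership(probs))  # [ True False  True]
```

No shadow model, auxiliary dataset, or knowledge of the target architecture is needed for this kind of probe, which is the class of relaxed assumptions the abstract describes; the paper's keywords suggest temperature scaling plays a role in calibrating such confidence scores.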
Paper Under Submission: The paper is NOT under submission at NeurIPS