Membership Inference Attack in Face of Data Transformations

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: Membership inference attack, Data transformation, Data privacy
Abstract: Membership inference attacks (MIAs) on machine learning models, which try to infer whether an example was in the target model's training dataset, have been widely studied in recent years as data privacy attracts increasing attention. One notable problem with the traditional MIA threat model is that it assumes the attacker can obtain exactly the same example as the one in the training dataset. In reality, however, the attacker is more likely to collect only a transformed version of the original example. For instance, the attacker may download a down-scaled image from a website, where the smaller image has the same content as the original image used for model training. Generally, after transformations that do not affect its semantics, a transformed training member should be treated the same as the original one with regard to privacy leakage. In this paper, we propose extending the concept of MIAs to more realistic scenarios by considering data transformations, and we derive two MIAs for transformed examples: one follows the efficient loss-thresholding idea, and the other tries to approximately reverse the transformations. We demonstrate the effectiveness of our attacks through extensive evaluations on multiple common data transformations and comparisons with other state-of-the-art attacks. Moreover, we study the coverage difference between our two attacks to show their respective limitations and advantages.
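To make the two attack ideas from the abstract concrete, below is a minimal sketch assuming a PyTorch image classifier. The names `target_model`, `x_small`, `y`, `tau`, and `orig_size` are illustrative assumptions, and bilinear up-sampling is used here only as one plausible approximate inverse of down-scaling; none of this is the paper's released implementation.

```python
# Hypothetical sketch of the two attacks described in the abstract.
import torch
import torch.nn.functional as F

def loss_threshold_mia(target_model, x, y, tau):
    """Attack 1 (loss thresholding): predict 'member' if the
    model's loss on the (possibly transformed) example is below tau."""
    target_model.eval()
    with torch.no_grad():
        logits = target_model(x.unsqueeze(0))           # add batch dim
        loss = F.cross_entropy(logits, torch.tensor([y]))
    return loss.item() < tau

def reverse_then_mia(target_model, x_small, y, tau, orig_size):
    """Attack 2 (approximate reversal): first approximately invert
    the transformation, then apply the loss threshold.

    For down-scaling, one approximate inverse is up-sampling the
    image back to the original training resolution."""
    x_restored = F.interpolate(
        x_small.unsqueeze(0), size=orig_size,
        mode="bilinear", align_corners=False,
    ).squeeze(0)
    return loss_threshold_mia(target_model, x_restored, y, tau)
```

In practice, the threshold `tau` would need to be calibrated, for example on losses from shadow models or held-out data; the abstract does not specify the paper's exact calibration procedure.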
One-sentence Summary: We broaden the scope of membership inference attacks by considering the privacy risks of transformed data.
Supplementary Material: zip