Membership Inference Attacks against MemGuard: IEEE CNS 20 Poster

Published: 01 Jan 2020, Last Modified: 15 May 2023, CNS 2020
Abstract: A membership inference attack allows an adversary to infer whether a given sample was part of a target classifier's training dataset, which poses severe privacy threats to users. Most existing defenses apply differential privacy or regularization when training the target classifier, which causes accuracy drops. Recently, MemGuard was proposed as a defense that, instead of tampering with the training process, adds noise to each confidence score vector predicted by the target classifier. The noise turns the score vector into an adversarial example that misleads the attacker's classifier. In this poster, we propose two novel attacks that foil the protection of MemGuard. Our first attack applies knowledge distillation, which improves the attack model's resilience to adversarial examples. Our second attack resizes the confidence score vector to denoise the adversarial example. Experimental results show the effectiveness of both proposed attacks.
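
The two sketches below are minimal illustrations of the attack ideas summarized in the abstract; the function names, hyperparameters, and exact procedures are assumptions for illustration, not the poster's implementation. The first shows a standard knowledge distillation loss (softened teacher outputs combined with hard-label cross-entropy) with which a student attack model could be trained.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Standard knowledge distillation loss: KL divergence to the teacher's
    temperature-softened outputs plus cross-entropy on hard labels.
    T and alpha are hypothetical defaults, not values from the poster."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    soft_student = F.log_softmax(student_logits / T, dim=1)
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```

The second is one plausible reading of the resizing attack: interpolate the noisy confidence score vector down to a shorter length and back up, smoothing out the small adversarial perturbations added by MemGuard, then renormalize. The shrink length and the use of linear interpolation are assumptions.

```python
import numpy as np

def resize_denoise(scores, small_len=4):
    """Hypothetical resizing-based denoiser: shrink the confidence score
    vector via linear interpolation, expand it back to its original length,
    and renormalize to a valid probability vector."""
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    # Downsample to small_len points, then upsample back to n points.
    small = np.interp(np.linspace(0, n - 1, small_len), np.arange(n), scores)
    restored = np.interp(np.linspace(0, small_len - 1, n),
                         np.arange(small_len), small)
    restored = np.clip(restored, 1e-12, None)
    return restored / restored.sum()

# Example: a noisy 10-class confidence vector before it is fed to the attack model.
noisy = np.array([0.62, 0.05, 0.04, 0.06, 0.03, 0.05, 0.04, 0.04, 0.03, 0.04])
print(resize_denoise(noisy))
```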