Label-Only Membership Inference Attacks and Defenses in Semantic Segmentation Models

Published: 01 Jan 2023, Last Modified: 12 May 2023
Venue: IEEE Trans. Dependable Secur. Comput. 2023
Abstract: Recent research has discovered that deep learning models are vulnerable to membership inference attacks, which can reveal whether a sample is in the victim model's training dataset. Most membership inference attacks rely on the confidence scores output by the victim model. However, a few studies indicate that the victim model's prediction labels alone are sufficient for launching successful attacks. Beyond the well-studied classification models, segmentation models are also vulnerable to this type of attack. In this article, we propose, for the first time, label-only membership inference attacks against semantic segmentation models. With a well-designed attack framework, we achieve a considerably higher attack success rate than previous work. In addition, we discuss several possible defense mechanisms to counter this threat.
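To make the label-only setting concrete, the sketch below shows a minimal baseline attack one could mount with only hard-label segmentation outputs: score each sample by the per-pixel agreement between the predicted mask and the ground-truth mask, then threshold that score to guess membership. This is an illustrative assumption-laden baseline, not the framework proposed in the paper; the function names and the threshold value are invented for the example.

```python
import numpy as np


def label_only_membership_score(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Per-pixel agreement between the hard-label prediction and the ground truth.

    Higher agreement is taken as weak evidence that the sample was seen during
    training, since models tend to fit training masks more closely.
    """
    assert pred_mask.shape == gt_mask.shape
    return float((pred_mask == gt_mask).mean())


def infer_membership(pred_mask: np.ndarray, gt_mask: np.ndarray,
                     threshold: float = 0.95) -> bool:
    """Simple threshold attack: predict 'member' when agreement exceeds a
    threshold. In practice the threshold would be calibrated, e.g. on shadow
    models; the value here is purely illustrative."""
    return label_only_membership_score(pred_mask, gt_mask) > threshold


# Toy example with synthetic masks (2 classes, 64x64 image).
rng = np.random.default_rng(0)
gt = rng.integers(0, 2, size=(64, 64))
pred = gt.copy()
flip = rng.random((64, 64)) < 0.02            # simulate a near-perfect fit on a training sample
pred[flip] = 1 - pred[flip]
print(label_only_membership_score(pred, gt))  # ~0.98
print(infer_membership(pred, gt))             # True -> guessed as a training member
```

The paper's actual attack framework and defenses are more involved; this sketch only illustrates why hard labels alone can leak membership information in segmentation, where every pixel's label contributes to the signal.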
