Adversarial Sample Detection via Channel Pruning

Jun 18, 2021 (edited Jun 22, 2021) · ICML 2021 Workshop AML Poster
  • Keywords: adversarial sample detection, neural networks, model pruning
  • TL;DR: Use pruned models for adversarial sample detection.
  • Abstract: Adversarial attacks are a major security issue for deep neural networks, and detecting adversarial samples is an effective mechanism for defending against them. Previous adversarial sample detection methods achieve high accuracy but consume too much memory and computing resources. In this paper, we propose an adversarial sample detection method based on pruned models. We find that channel-pruned neural networks are sensitive to adversarial samples: the pruned models tend to output labels that differ from those of the original model when given adversarial inputs. Moreover, a channel-pruned model has an extremely small size and low computational cost. Experiments on CIFAR10 and SVHN show that the FLOPs and size of our generated model are only 24.46% and 4.86% of the original model. It outperforms the SOTA multi-model based detection method (87.47% and 63.00%) by 5.29% and 30.92% on CIFAR10 and SVHN, respectively, while using significantly fewer models.
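The detection rule described in the abstract can be sketched as a disagreement check: an input is flagged as adversarial if any pruned model predicts a label different from the original model's. The sketch below is a minimal illustration of that idea; the linear "models", their weights, and the function names are hypothetical stand-ins, not the paper's actual networks or API.

```python
import numpy as np

def detect_adversarial(x, original_model, pruned_models):
    """Flag x as adversarial if any pruned model's predicted label
    disagrees with the original model's label (disagreement-based
    rule sketched from the paper's description)."""
    orig_label = int(np.argmax(original_model(x)))
    for pruned in pruned_models:
        if int(np.argmax(pruned(x))) != orig_label:
            return True
    return False

# Toy stand-in "models": two-class linear scorers over a 2-D input.
# The pruned model's weights are slightly perturbed, mimicking how
# pruning shifts decision boundaries near fragile (adversarial) inputs.
W_orig = np.array([[1.0, 0.0], [0.0, 1.0]])
W_pruned = np.array([[1.0, 0.0], [0.0, 0.9]])
original = lambda x: W_orig @ x
pruned = lambda x: W_pruned @ x

clean = np.array([0.2, 0.9])        # both models agree -> not flagged
borderline = np.array([0.95, 1.0])  # models disagree  -> flagged
print(detect_adversarial(clean, original, [pruned]))       # False
print(detect_adversarial(borderline, original, [pruned]))  # True
```

In practice the pruned models are obtained by channel pruning the original network, which is why the detector's extra FLOPs and memory footprint stay small relative to multi-model ensembles.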
