Black-box Attacks on Deep Neural Networks via Gradient Estimation

12 Feb 2018, 16:45 (modified: 04 Jun 2018, 15:01) · ICLR 2018 Workshop Submission
Keywords: adversarial examples, black-box, real-world attacks
TL;DR: Efficient query-based black-box attacks on neural networks
Abstract: In this paper, we propose novel Gradient Estimation black-box attacks that generate adversarial examples using only query access to the target model's class probabilities, without relying on transferability. We also propose strategies to decouple the number of queries required per adversarial example from the dimensionality of the input. An iterative variant of our attack achieves close to 100% attack success rates for both targeted and untargeted attacks on DNNs. The proposed Gradient Estimation attacks outperform all other black-box attacks we tested on both the MNIST and CIFAR-10 datasets, achieving attack success rates comparable to well-known state-of-the-art white-box attacks. We also successfully apply the Gradient Estimation attacks against a real-world content-moderation classifier hosted by Clarifai.
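To make the core idea concrete, below is a minimal sketch of query-based gradient estimation via two-sided finite differences, followed by a single FGSM-style perturbation step using the estimated gradient. This is an illustrative reconstruction under stated assumptions, not the authors' exact method: `query_probs`, the coordinate-wise loop, and the toy softmax model in the usage example are all hypothetical stand-ins, and the paper's query-reduction strategies are not shown.

```python
import numpy as np

def estimate_gradient(query_probs, x, cls, delta=1e-4):
    """Two-sided finite-difference estimate of d log p(cls) / dx,
    using only query access to the model's class probabilities.
    Costs 2 queries per input dimension (the dependence the paper's
    query-reduction strategies aim to break)."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = delta
        p_plus = query_probs(x + e)[cls]
        p_minus = query_probs(x - e)[cls]
        grad.flat[i] = (np.log(p_plus) - np.log(p_minus)) / (2.0 * delta)
    return grad

def fgs_black_box(query_probs, x, true_class, eps=0.3):
    """Untargeted FGSM-style step with the *estimated* gradient:
    move x in the direction that decreases the true class's
    log-probability, then clip back to the valid input range [0, 1]."""
    g = estimate_gradient(query_probs, x, true_class)
    return np.clip(x - eps * np.sign(g), 0.0, 1.0)

# Usage with a toy softmax model standing in for the black box
# (the attacker only ever calls query_probs, never sees W):
W = np.array([[1.0, -1.0], [-1.0, 1.0]])

def query_probs(x):
    z = W @ x
    e = np.exp(z - z.max())  # numerically stable softmax
    return e / e.sum()

x = np.array([0.8, 0.2])
x_adv = fgs_black_box(query_probs, x, true_class=0)
```

An iterative variant, as in the paper's strongest attack, would simply repeat a smaller step and re-estimate the gradient each round.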