SAGE: Steering the Adversarial Generation of Examples With Accelerations

Published: 01 Jan 2023, Last Modified: 10 May 2023, IEEE Trans. Inf. Forensics Secur. 2023
Abstract: To generate image adversarial examples, state-of-the-art black-box attacks usually require thousands of queries, and massive queries introduce additional costs and exposure risks in the real world. To improve attack efficiency, we carefully design an acceleration framework, SAGE, for existing black-box methods, composed of sLocator (initial-point optimization) and sRudder (search-process optimization). The core ideas of SAGE are that 1) a saliency map can guide perturbations toward the most adversarial direction, and 2) a bounding box (bbox) can capture those salient pixels during a black-box attack. We also provide a series of observations and experiments demonstrating that the bbox exhibits model invariance and process invariance. We extensively evaluate SAGE on four state-of-the-art black-box attacks across three popular datasets (MNIST, CIFAR10, and ImageNet). The results show that SAGE delivers fundamental improvements even against robust models trained with adversarial training: it reduces queries by more than 20% and raises attack success rates to 95%~100%. Compared with the other acceleration framework, SAGE achieves a more significant effect in a flexible, stable, and low-overhead manner. Moreover, our practical evaluation on the Google Cloud Vision API shows that SAGE can be applied to real-world scenarios.