GCSA: A New Adversarial Example-Generating Scheme Toward Black-Box Adversarial Attacks

Published: 01 Jan 2024 · Last Modified: 03 Oct 2025 · IEEE Trans. Consumer Electron. 2024 · CC BY-SA 4.0
Abstract: This paper addresses the transferability of adversarial examples in black-box attack scenarios, where model information such as the neural network architecture is unavailable. To tackle this problem, we propose a new adversarial example-generating scheme that bridges a data-modal conversion regime to spawn transferable adversarial examples without relying on a substitute model. Our contributions are threefold: i) we devise an integrated framework that produces transferable adversarial examples via three components, namely image-to-graph conversion, perturbation of the converted graph, and graph-to-image inversion; ii) after converting an image to a graph, we pinpoint critical graph characteristics and perturb them using gradient-oriented and optimization-oriented adversarial attacks, then invert the graph perturbation into the corresponding pixel disturbance; iii) multi-faceted experiments verify the reasonableness and effectiveness of the scheme in comparison with three baseline methods. Our work has two novelties: first, since it does not rely on a substitute model, the proposed scheme requires no prior information about the victim model; second, we explore the possibility of inferring the adversarial features of image data by drawing support from network/graph science. In addition, we present three key issues that merit deeper discussion; along with these open issues, our work invites further study.
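The three-stage pipeline named in the abstract (image-to-graph conversion, perturbation on the converted graph, graph-to-image inversion) can be sketched in miniature as follows. This is an illustrative assumption, not the paper's code: the grid-graph construction, the toy surrogate loss over edge differences, and the FGSM-style sign step all stand in for the graph characteristics and attacks the paper actually uses.

```python
# Hypothetical sketch of the abstract's three-stage pipeline:
# (1) image -> graph, (2) perturb graph attributes, (3) graph -> image.
# The surrogate loss and step rule below are illustrative assumptions.

def image_to_graph(img):
    """Map a grayscale image (2D list) to a grid graph:
    nodes carry pixel intensity, edges link 4-neighbours."""
    h, w = len(img), len(img[0])
    nodes = {(r, c): float(img[r][c]) for r in range(h) for c in range(w)}
    edges = []
    for r in range(h):
        for c in range(w):
            if r + 1 < h:
                edges.append(((r, c), (r + 1, c)))
            if c + 1 < w:
                edges.append(((r, c), (r, c + 1)))
    return nodes, edges

def perturb_graph(nodes, edges, eps=2.0):
    """Gradient-oriented sign perturbation on node attributes, using a
    toy surrogate loss (sum of squared edge differences) as a stand-in
    for the critical graph characteristics the paper perturbs."""
    grad = {n: 0.0 for n in nodes}
    for u, v in edges:
        d = nodes[u] - nodes[v]
        grad[u] += 2.0 * d
        grad[v] -= 2.0 * d
    sign = lambda x: (x > 0) - (x < 0)
    return {n: nodes[n] + eps * sign(grad[n]) for n in nodes}

def graph_to_image(nodes, shape):
    """Invert node attributes back to pixel values, clipped to [0, 255]."""
    h, w = shape
    return [[min(255.0, max(0.0, nodes[(r, c)])) for c in range(w)]
            for r in range(h)]

# Tiny 2x2 example: the perturbation lands directly on the pixels.
img = [[10.0, 200.0], [60.0, 90.0]]
nodes, edges = image_to_graph(img)
adv = graph_to_image(perturb_graph(nodes, edges), (2, 2))
```

Because the perturbation is computed purely from the graph representation, no victim-model queries appear anywhere in the loop, mirroring the substitute-model-free property the abstract claims.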