DBA: An Efficient Approach to Boost Transfer-Based Adversarial Attack Performance Through Information Deletion
Abstract: In practice, deep learning models are easily fooled by input images with subtle perturbations, known as adversarial examples. Adversarial examples crafted on one model can often fool other models that perform the same task but have different architectures, a property referred to as adversarial transferability. Because information about the target model is rarely available in practice, transfer-based adversarial attacks have developed rapidly, and various techniques have since been proposed to improve transferability. Unlike existing input transformation attacks based on spatial transformation, our approach is built on information deletion: by deleting square regions of the input images channel by channel, we mitigate the adversarial examples' overfitting to the surrogate model and thereby enhance their transferability. Extensive evaluations on ImageNet demonstrate that our method outperforms existing input transformation attacks on a range of models, including both undefended and defended ones.
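As a rough illustration of the information-deletion idea described above, the sketch below zeroes one randomly placed square per channel of an input image. The function name, square size, and per-channel sampling are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def delete_squares_by_channel(image, square_size=16, rng=None):
    """Zero out one randomly placed square in each channel of an HxWxC image.

    Hypothetical sketch of channel-wise square deletion; parameters and
    behavior are illustrative assumptions, not the paper's exact method.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = image.copy()
    h, w, c = out.shape
    for ch in range(c):
        # Sample an independent square location for every channel, so the
        # deleted regions differ across channels and add input diversity.
        y = rng.integers(0, h - square_size + 1)
        x = rng.integers(0, w - square_size + 1)
        out[y:y + square_size, x:x + square_size, ch] = 0
    return out
```

Such a transformation would typically be applied to the adversarial input at each attack iteration, so the gradient is computed on a randomly degraded view of the image rather than the image itself.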