The transferability of adversarial attacks on deep neural networks (DNNs) refers to the possibility that adversarial examples crafted for a known model can also mislead other unseen models in the black-box setting. Existing approaches to improving adversarial transferability often focus on spreading the adversarial perturbation over the whole image, which can be counter-productive because the extended perturbation can hardly track the attention regions across different models: the perturbation is spread throughout the entire image without considering the mutual influence of different perturbation regions. In this paper, we propose a simple yet effective perturbation-dropping scheme that enhances the transferability of adversarial examples by incorporating a dropout mechanism into their optimization process. Specifically, we leverage the class activation map (CAM) to locate the midpoint of the dropped regions, so that an effective perturbation can be generated for the target models while maintaining the attack success rate on the source model even when some blocks of the perturbation are dropped. Extensive experiments on the ImageNet dataset demonstrate that the proposed method outperforms state-of-the-art methods, achieving both high attack efficiency and transferability.
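
To make the idea concrete, the following is a minimal sketch of CAM-guided perturbation dropping inside an iterative FGSM-style attack, not the authors' implementation. The helper names (`saliency_center`, `drop_block`, `drop_attack`), the input-gradient saliency used as a stand-in for a true class activation map, and the block size and step-size values are all assumptions made for illustration.

```python
# Illustrative sketch (not the paper's code): at every attack iteration, a block
# of the perturbation centred at a saliency-derived location is dropped before
# the gradient step, so the surviving perturbation must remain effective on its own.
import torch
import torch.nn.functional as F

def saliency_center(model, x, y):
    """Coarse stand-in for a CAM: take the spatial location with the largest
    input-gradient magnitude as the midpoint of the region to drop."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]          # (B, C, H, W)
    heat = grad.abs().sum(dim=1)                    # (B, H, W)
    flat_idx = heat.flatten(1).argmax(dim=1)        # per-image argmax
    H, W = heat.shape[-2:]
    return flat_idx // W, flat_idx % W              # (row, col) per image

def drop_block(delta, rows, cols, size=40):
    """Zero out a size x size block of the perturbation centred at (row, col)."""
    dropped = delta.clone()
    H, W = delta.shape[-2:]
    for b, (r, c) in enumerate(zip(rows.tolist(), cols.tolist())):
        r0, c0 = max(r - size // 2, 0), max(c - size // 2, 0)
        dropped[b, :, r0:min(r0 + size, H), c0:min(c0 + size, W)] = 0
    return dropped

def drop_attack(model, x, y, eps=16 / 255, alpha=2 / 255, steps=10):
    """Optimise a perturbation that still fools the source model even when a
    CAM-centred block of it is dropped at each iteration."""
    delta = torch.zeros_like(x)
    for _ in range(steps):
        rows, cols = saliency_center(model, x + delta, y)
        masked = drop_block(delta, rows, cols).requires_grad_(True)
        loss = F.cross_entropy(model(x + masked), y)
        grad = torch.autograd.grad(loss, masked)[0]
        # The gradient is computed on the dropped perturbation but applied to
        # the full one, encouraging robustness to missing perturbation regions.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
        delta = (x + delta).clamp(0, 1) - x         # keep the image valid
    return (x + delta).clamp(0, 1)
```

In this sketch the key design choice is that the loss gradient is taken with respect to the masked perturbation, so each update step has to work even when the dropped block is absent, which mimics the dropout behaviour described above.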