CLIP-Guided Generative Networks for Transferable Targeted Adversarial Attacks

Published: 01 Jan 2024 · Last Modified: 15 May 2025 · ECCV 2024 · License: CC BY-SA 4.0
Abstract: Transferable targeted adversarial attacks aim to mislead models into outputting adversary-specified predictions in black-box scenarios. Recent studies have introduced single-target attacks that train a separate generator for each target class to produce highly transferable perturbations, incurring substantial computational overhead when many classes must be handled. Multi-target attacks address this by training a single class-conditional generator covering all classes. However, such generators simply use class labels as conditions, failing to exploit the rich semantic information of the target class. To this end, we design a CLIP-guided Generative Network with Cross-attention modules (CGNC) that enhances multi-target attacks by incorporating CLIP's textual knowledge into the generator. Extensive experiments demonstrate that CGNC yields significant improvements over previous multi-target attacks, e.g., a 21.46% gain in success rate when transferring from Res-152 to DenseNet-121. Moreover, we propose a masked fine-tuning scheme that further strengthens our method when attacking a single class, surpassing existing single-target methods.
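The conditioning mechanism the abstract describes, injecting CLIP text embeddings of the target class into the generator through cross-attention, can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's actual implementation: all names, dimensions, and weight matrices here are hypothetical, and a real CGNC generator would use learned multi-head attention layers inside a convolutional generator.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(img_feats, text_emb, Wq, Wk, Wv):
    """Fuse CLIP text-token embeddings into generator image features.

    img_feats: (N, d)  flattened spatial features of the generator (queries)
    text_emb:  (T, dt) CLIP text-token embeddings of the target class (keys/values)
    Wq, Wk, Wv: illustrative projection matrices (would be learned in practice)
    """
    Q = img_feats @ Wq                       # (N, dk) query projection
    K = text_emb @ Wk                        # (T, dk) key projection
    V = text_emb @ Wv                        # (T, d)  value projection
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1)  # (N, T)
    # Residual fusion: each spatial feature is updated with a
    # text-conditioned summary of the target-class description.
    return img_feats + attn @ V              # (N, d)

# Toy usage with random stand-ins for generator features and CLIP embeddings.
rng = np.random.default_rng(0)
N, T, d, dt, dk = 16, 8, 32, 24, 32
img = rng.standard_normal((N, d))
txt = rng.standard_normal((T, dt))
Wq = rng.standard_normal((d, dk))
Wk = rng.standard_normal((dt, dk))
Wv = rng.standard_normal((dt, d))
fused = cross_attention(img, txt, Wq, Wk, Wv)
print(fused.shape)  # (16, 32): same shape as the input features
```

The key design point is that, unlike a one-hot class label, the text embedding carries per-token semantics of the target class, and cross-attention lets different spatial locations of the perturbation attend to different parts of that description.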
