Reproducibility Study of Boosting Adversarial Transferability via Gradient Relevance Attack

TMLR Paper4328 Authors

22 Feb 2025 (modified: 09 May 2025) · Rejected by TMLR · CC BY 4.0
Abstract: This paper presents a reproducibility study of Boosting Adversarial Transferability via Gradient Relevance Attack by Zhu et al., which introduces the Gradient Relevance Attack (GRA) method. GRA enhances the transferability of adversarial examples across different machine learning models, improving black-box adversarial attacks. The key experiments were successfully replicated, focusing on the gradient relevance framework and the decay indicator. The methodology involved reimplementing the GRA algorithm and evaluating it on the same set of models used in the original paper. The results show that the achieved attack success rates were within a 1% margin of those reported in the original study, confirming the effectiveness of the GRA method. Additionally, this work extends the original study by introducing a dynamic learning rate (α) that adjusts the step size based on the cosine similarity between the current momentum and the average gradient. The findings suggest that this adaptive step size mechanism can lead to faster convergence and potentially improved attack performance in certain scenarios.
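A minimal sketch of the adaptive step size idea mentioned in the abstract, not the authors' implementation: the step size α is rescaled by the cosine similarity between the current momentum and an averaged gradient. The function name, tensor arguments, and the specific mapping from similarity to a multiplier are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def adaptive_step_size(base_alpha: float,
                       momentum: torch.Tensor,
                       avg_grad: torch.Tensor,
                       floor: float = 0.1) -> torch.Tensor:
    """Hypothetical scaling rule: shrink the step size when the momentum
    disagrees with the averaged gradient direction, keep it near base_alpha
    when they align."""
    # Cosine similarity between the flattened momentum and averaged gradient.
    cos = F.cosine_similarity(momentum.flatten(), avg_grad.flatten(), dim=0)
    # Map similarity in [-1, 1] to a multiplier in [floor, 1] (assumed mapping).
    scale = torch.clamp((cos + 1.0) / 2.0, min=floor)
    return base_alpha * scale
```

Under this sketch, a step of the iterative attack would use `adaptive_step_size(alpha, momentum, avg_grad)` in place of a fixed α when updating the adversarial example.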
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Yunwen_Lei1
Submission Number: 4328