Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Adversarial Attacks, Heavy-ball Momentum, Clipped Normalized-gradient, Convergence
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: A new clipped heavy-ball momentum method is proposed that, both theoretically and empirically, circumvents the drawbacks of the sign-like regime in gradient-based adversarial attacks.
Abstract: Gradient-based adversarial attacks are dominated by the sign-like regime. In particular, the sign-momentum method MI-FGSM, a variant of Polyak's heavy-ball that normalizes each gradient by its $L_1$-norm, has achieved remarkable empirical success. However, the sign operation inevitably discards information about both the magnitude and the direction of the gradient or momentum, leading to non-convergence even in simple convex cases. Gradient clipping is an effective rescaling technique in optimization, and its potential for accelerating and stabilizing the training of deep models has recently been demonstrated. In this paper, to circumvent the drawbacks of sign-like gradient-based attacks, we present a clipped momentum method in which the normalized-gradient heavy-ball momentum (NGM) is clipped to obtain the update direction. Using a new radius-varying clipping rule, the clipped NGM is proved to attain optimal averaging convergence for general constrained convex problems. Experiments demonstrate that it markedly improves the performance of sign-like methods and verify that the clipping technique can serve as an alternative to the sign operation in adversarial attacks.
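To make the described update concrete, here is a minimal sketch of one clipped-NGM attack step. The abstract does not give the radius-varying clipping rule or any hyperparameters, so the fixed `radius`, the decay factor `mu`, the step size `alpha`, and the helper name `clipped_ngm_step` are illustrative assumptions, not the paper's exact algorithm. The point of the sketch is the structural difference from MI-FGSM: the momentum is rescaled when it is too large rather than binarized by `sign`, so its magnitude and direction information survive.

```python
import torch

def clipped_ngm_step(x, grad, momentum, mu=1.0, alpha=2/255, radius=1.0):
    """One illustrative clipped normalized-gradient-momentum (NGM) step.

    All hyperparameter values are placeholders; in particular, a fixed
    `radius` stands in for the paper's radius-varying clipping rule,
    which is not specified in the abstract.
    """
    # Heavy-ball momentum on the L1-normalized gradient (as in MI-FGSM,
    # but the momentum itself is kept instead of being reduced to its sign).
    momentum = mu * momentum + grad / grad.abs().sum().clamp_min(1e-12)

    # Clip the momentum to a ball of the given radius: scale it down only
    # when its norm exceeds the radius, preserving its direction.
    scale = torch.clamp(radius / momentum.norm().clamp_min(1e-12), max=1.0)

    # Step along the clipped momentum; no sign operation is applied.
    x_adv = x + alpha * scale * momentum
    return x_adv, momentum
```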
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 234