Stochastic Frank Wolfe for Constrained Nonconvex Optimization

TMLR Paper4196 Authors

13 Feb 2025 (modified: 14 Apr 2025) · Rejected by TMLR · CC BY 4.0
Abstract: We provide a practical convergence analysis of Stochastic Frank Wolfe (SFW) and SFW with momentum, with constant and decaying learning rates, for constrained nonconvex optimization problems. We show that a convergence measure called the Frank Wolfe gap converges to zero only when we decrease the learning rate and increase the batch size. We apply SFW algorithms to adversarial attacks and propose a new adversarial attack method, Auto-SFW. Finally, we compare existing methods with the SFW algorithms in attacks against the latest robust models.
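To illustrate the setting the abstract describes, the following is a minimal sketch of Stochastic Frank Wolfe with a decaying learning rate and a growing batch size, tracking the Frank Wolfe gap. The objective (least squares), the l1-ball constraint, and all schedule constants are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 10))        # data for an illustrative least-squares loss
b = A @ rng.normal(size=10)
radius = 1.0                          # l1-ball constraint: ||x||_1 <= radius

def stochastic_grad(x, batch_size):
    # Mini-batch gradient of f(x) = (1/2n) * ||Ax - b||^2
    idx = rng.integers(0, A.shape[0], size=batch_size)
    Ab, bb = A[idx], b[idx]
    return Ab.T @ (Ab @ x - bb) / batch_size

def lmo(g):
    # Linear minimization oracle over the l1 ball: put all mass on the
    # coordinate with the largest |g_i|, with the opposite sign.
    s = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    s[i] = -radius * np.sign(g[i])
    return s

x = np.zeros(10)                      # feasible start
for t in range(500):
    g = stochastic_grad(x, batch_size=32 + t)   # increasing batch size
    s = lmo(g)
    gap = g @ (x - s)                 # stochastic Frank Wolfe gap estimate
    eta = 2.0 / (t + 2)               # decaying learning rate
    x = x + eta * (s - x)             # convex combination keeps x feasible
```

Because each update is a convex combination of feasible points, the iterate never leaves the constraint set, and the gap estimate is nonnegative whenever the current iterate is feasible; the abstract's result is that this gap vanishes only under the decaying-step, growing-batch regime sketched here.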
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Eduard_Gorbunov1
Submission Number: 4196