Optimization for Robustness Evaluation beyond ℓp Metrics

Published: 23 Nov 2022, Last Modified: 25 Nov 2024
OPT 2022 Poster
Keywords: deep neural networks, adversarial robustness, adversarial attack, adversarial evaluation, constrained optimization
TL;DR: 6 pages of main text + 3 pages of references
Abstract: Empirical evaluations of neural network models against adversarial attacks entail solving nontrivial constrained optimization problems. Popular algorithms for solving these constrained problems rely on projected gradient descent (PGD) and require careful tuning of multiple hyperparameters. Moreover, PGD can only handle $\ell_1$, $\ell_2$, and $\ell_\infty$ attacks due to its use of analytical projectors. In this paper, we introduce an alternative algorithmic framework that blends a general-purpose constrained-optimization solver, PyGRANSO, **W**ith **C**onstraint-**F**olding (PWCF) to add reliability and generality to existing adversarial evaluations. PWCF 1) finds good-quality solutions without delicate tuning of multiple hyperparameters, and 2) can handle general attack models that are inaccessible to existing algorithms, e.g., $\ell_{p>0}$ and perceptual attacks.
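
For context, the following is a minimal sketch (not the paper's code) of the constrained problem the abstract refers to and the PGD-style $\ell_\infty$ attack that solves it: maximize the loss subject to $\|x' - x\|_\infty \le \epsilon$, where the projection onto the constraint set is simple clamping. The names (`model`, `loss_fn`, `eps`, inputs assumed in $[0, 1]$) are illustrative assumptions; for general $\ell_{p>0}$ or perceptual distances such an analytical projection is unavailable, which is the limitation PWCF addresses.

```python
# Illustrative sketch of a PGD l_inf attack; assumed names: model, loss_fn, eps.
import torch


def pgd_linf_attack(model, loss_fn, x, y, eps=8 / 255, step=2 / 255, iters=10):
    """Projected gradient ascent on the loss inside the l_inf ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Ascent step, then the analytical l_inf projection (clamping back
            # into the eps-ball around x) and the valid image range [0, 1].
            x_adv = x_adv + step * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            x_adv = x_adv.clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```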
Community Implementations: [5 code implementations (CatalyzeX)](https://www.catalyzex.com/paper/optimization-for-robustness-evaluation-beyond/code)