Maximizing the robust margin provably overfits on noiseless data

Published: 21 Jun 2021, Last Modified: 05 May 2023
ICML 2021 Workshop AML Poster
Keywords: robust overfitting, adversarial robustness, adversarial training, theory
TL;DR: We show that the robust max-margin solution overfits even on noiseless data, yielding worse performance than ridge-regularized and early-stopped robust logistic regression.
Abstract: Numerous recent works show that overparameterization implicitly reduces variance, suggesting vanishing benefits for explicit regularization in high dimensions. However, this narrative has been challenged by empirical observations indicating that adversarially trained deep neural networks suffer from robust overfitting. While existing explanations attribute this phenomenon to noise or problematic samples in the training data set, we prove that even on entirely noiseless data, achieving a vanishing adversarial logistic training loss is suboptimal compared to regularized counterparts.
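To make the comparison concrete, here is a minimal sketch (not the authors' code) of the setting the abstract describes: robust logistic regression with a linear model, where for l2-bounded perturbations the adversarial loss has a closed form, trained either toward the robust max-margin solution or with ridge regularization and early stopping. The data generation and all hyperparameters (eps, lam, lr, the step count) are illustrative assumptions.

```python
# Minimal sketch, not the paper's implementation: robust logistic regression
# on a linear model f(x) = <w, x> with l2-bounded adversarial perturbations.
# For linear models the inner maximization has a closed form:
#   max_{||d||_2 <= eps} log(1 + exp(-y <w, x + d>))
#     = log(1 + exp(-y <w, x> + eps * ||w||_2)).
# All data and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 200                          # overparameterized regime: d >> n
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)                 # noiseless labels from a ground truth

eps, lam, lr = 0.1, 1e-2, 0.1           # assumed perturbation radius, ridge
                                        # strength, and step size

def robust_logistic_loss(w, ridge):
    """Adversarial logistic loss plus an optional ridge penalty."""
    margins = y * (X @ w) - eps * np.linalg.norm(w)   # worst-case margins
    return np.mean(np.logaddexp(0.0, -margins)) + 0.5 * ridge * w @ w

def grad(w, ridge):
    """Gradient of the robust logistic loss (plus ridge term)."""
    margins = y * (X @ w) - eps * np.linalg.norm(w)
    s = -1.0 / (1.0 + np.exp(margins))                # d loss / d margin
    dm_dw = y[:, None] * X - eps * w / max(np.linalg.norm(w), 1e-12)
    return (s[:, None] * dm_dw).mean(axis=0) + ridge * w

w = 0.01 * rng.normal(size=d)
for t in range(2000):                   # early stopping = a small step budget
    w -= lr * grad(w, lam)
```

Running the loop with lam = 0 and a large step budget drives the adversarial training loss toward zero and approximates the robust max-margin solution; the abstract's claim is that on noiseless data this limit generalizes worse than the ridge-regularized or early-stopped iterates.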