The Price of Implicit Bias in Adversarially Robust Generalization

Published: 25 Sept 2024, Last Modified: 06 Nov 2024 · NeurIPS 2024 poster · CC BY 4.0
Keywords: Adversarial Robustness, Robust Generalization Gap, Implicit Bias, Optimization, Generalization
TL;DR: We study the importance of optimization in robust ERM and its connection to the (adversarially robust) generalization of the model.
Abstract: We study the implicit bias of optimization in robust empirical risk minimization (robust ERM) and its connection with robust generalization. In classification settings with linear models under adversarial perturbations, we study what type of regularization should ideally be applied for a given perturbation set to improve (robust) generalization. We then show that the implicit bias of optimization in robust ERM can significantly affect the robustness of the model, and we identify two ways this can happen: through the optimization algorithm or through the architecture. We verify our predictions in simulations with synthetic data and experimentally study the importance of implicit bias in robust ERM with deep neural networks.
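To make the linear-model setting concrete, the sketch below illustrates robust ERM for a linear classifier under ℓ∞ perturbations of radius ε. This is an illustrative assumption, not the paper's method: it relies only on the standard fact that for linear models the inner maximization has a closed form, max over ||δ||∞ ≤ ε of loss(y·w·(x+δ)) = loss(y·w·x − ε·||w||₁), which is one way robust training for a given perturbation set couples to a particular (here ℓ₁) regularization of the weights.

```python
import numpy as np

def robust_logistic_loss(w, X, y, eps):
    """Worst-case logistic loss of a linear model under l_inf attacks.

    Uses the closed form: the adversarial margin of example (x, y) is
    y * w.x - eps * ||w||_1 (the attacker shifts x by -eps * y * sign(w)).
    """
    margins = y * (X @ w) - eps * np.abs(w).sum()
    return np.mean(np.log1p(np.exp(-margins)))

def robust_erm(X, y, eps=0.1, lr=0.1, steps=500):
    """Subgradient descent directly on the closed-form robust objective."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        margins = y * (X @ w) - eps * np.abs(w).sum()
        s = 1.0 / (1.0 + np.exp(margins))  # sigmoid(-margin), per example
        # d(margin)/dw = y * x - eps * sign(w)  (subgradient of the l1 term)
        grad = -(s[:, None] * (y[:, None] * X - eps * np.sign(w))).mean(axis=0)
        w -= lr * grad
    return w

# Synthetic data (hypothetical, for illustration only)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([2.0, -1.0, 0.0, 0.0, 0.0])
y = np.sign(X @ w_true)

w = robust_erm(X, y, eps=0.1)
print(robust_logistic_loss(w, X, y, eps=0.1))  # lower than the log(2) at w = 0
```

Because the robust objective is an exact reformulation here, no explicit attack loop is needed; for non-linear models one would instead approximate the inner maximization (e.g. with PGD).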
Primary Area: Learning theory
Submission Number: 18249