Catastrophic overfitting can be induced with discriminative non-robust features

Published: 22 Jul 2023, Last Modified: 22 Jul 2023. Accepted by TMLR.
Authors that are also TMLR Expert Reviewers: ~Guillermo_Ortiz-Jimenez1
Abstract: Adversarial training (AT) is the de facto method for building robust neural networks, but it can be computationally expensive. To mitigate this, fast single-step attacks can be used, but this may lead to catastrophic overfitting (CO). This phenomenon appears when networks gain non-trivial robustness during the first stages of AT, but then reach a breaking point where they become vulnerable in just a few iterations. The mechanisms that lead to this failure mode are still poorly understood. In this work, we study the onset of CO in single-step AT methods through controlled modifications of typical datasets of natural images. In particular, we show that CO can be induced at much smaller $\epsilon$ values than previously reported, simply by injecting images with seemingly innocuous features. These features aid non-robust classification but are not enough to achieve robustness on their own. Through extensive experiments, we analyze this novel phenomenon and discover that the presence of these easy features induces a learning shortcut that leads to CO. Our findings provide new insights into the mechanisms of CO and improve our understanding of the dynamics of AT.
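To make the dataset manipulation described in the abstract concrete, below is a minimal sketch in PyTorch of injecting a simple, class-specific additive feature into images. The random unit directions, the scale `beta`, and the helper name `inject_discriminative_features` are illustrative assumptions, not the paper's exact construction; see the linked code repository for the actual experiments.

```python
# Minimal sketch (assumptions, not the authors' exact method): add a small,
# class-specific perturbation beta * v_y to every image so that the injected
# feature is discriminative but easy to remove with small adversarial noise.
import torch


def inject_discriminative_features(images, labels, num_classes, beta=0.1, seed=0):
    """Add a class-specific direction v_y, scaled by beta, to each image.

    images: (N, C, H, W) tensor with values in [0, 1]; labels: (N,) integer tensor.
    """
    g = torch.Generator().manual_seed(seed)
    n, c, h, w = images.shape
    # One random, unit-norm direction per class (an assumption for illustration;
    # the paper studies specific choices of injected features).
    v = torch.randn(num_classes, c, h, w, generator=g)
    v = v / v.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
    injected = images + beta * v[labels]
    return injected.clamp(0.0, 1.0)


if __name__ == "__main__":
    # Demo on random data standing in for natural images.
    x = torch.rand(8, 3, 32, 32)
    y = torch.randint(0, 10, (8,))
    x_tilde = inject_discriminative_features(x, y, num_classes=10, beta=0.1)
    print(x_tilde.shape, (x_tilde - x).abs().max().item())
```

In a study like the one described, such a modified dataset would then be fed to single-step AT (e.g., FGSM-based training) to observe whether and when catastrophic overfitting appears.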
Certifications: Expert Certification
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: N/A
Code: https://github.com/gortizji/co_features
Assigned Action Editor: ~Jakub_Mikolaj_Tomczak1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1115