On the Clean Generalization and Robust Overfitting in Adversarial Training from Two Theoretical Views: Representation Complexity and Training Dynamics
TL;DR: We provide a theoretical understanding of why clean generalization and robust overfitting both occur in adversarial training.
Abstract: Similar to the surprising performance of standard deep learning, deep networks trained by adversarial training also generalize well to unseen clean data (natural data). However, although adversarial training can achieve low robust training error, a significant robust generalization gap remains. We call this phenomenon Clean Generalization and Robust Overfitting (CGRO). In this work, we study the CGRO phenomenon in adversarial training from two views: representation complexity and training dynamics. Specifically, we consider a binary classification setting with $N$ separated training data points. First, assuming there exists a $\operatorname{poly}(D)$-size clean classifier (where $D$ is the data dimension), we prove that a ReLU net with only $O(ND)$ extra parameters is able to leverage robust memorization to achieve CGRO, while a robust classifier still requires exponential representation complexity in the worst case. Next, we focus on a structured-data case to analyze the training dynamics, where we train a two-layer convolutional network of $O(ND)$ width against adversarial perturbations. We then show that a three-stage phase transition occurs during the learning process and that the network provably converges to the robust memorization regime, which thereby results in CGRO. Finally, we empirically verify our theoretical analysis with experiments on real-world image recognition datasets.
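To make the training setup the abstract refers to concrete, below is a minimal sketch of PGD-style adversarial training of a two-layer convolutional network on separated binary data. The width, perturbation budget, step sizes, and synthetic data model are illustrative assumptions for this sketch, not the paper's exact construction.

# Minimal sketch (illustrative, not the paper's construction): l_inf PGD
# adversarial training of a two-layer conv net on synthetic binary data.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed hyperparameters for illustration only.
D, N, WIDTH, EPS, PGD_STEPS, PGD_LR = 100, 64, 256, 0.1, 10, 0.02

class TwoLayerConvNet(nn.Module):
    """Two-layer network: one 1-D convolution followed by a linear readout."""
    def __init__(self, width=WIDTH):
        super().__init__()
        self.conv = nn.Conv1d(1, width, kernel_size=5, padding=2)
        self.fc = nn.Linear(width, 1)

    def forward(self, x):                      # x: (batch, D)
        h = F.relu(self.conv(x.unsqueeze(1)))  # (batch, width, D)
        return self.fc(h.mean(dim=2)).squeeze(1)

def pgd_attack(model, x, y, eps=EPS, steps=PGD_STEPS, lr=PGD_LR):
    """Inner maximization: ascend the loss within the l_inf eps-ball around x."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.binary_cross_entropy_with_logits(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()    # signed gradient ascent step
            delta.clamp_(-eps, eps)            # project back onto the eps-ball
        delta.grad.zero_()
    return (x + delta).detach()

# Synthetic separated binary data (placeholder for the structured-data model).
x = torch.randn(N, D)
y = (x[:, 0] > 0).float()

model = TwoLayerConvNet()
opt = torch.optim.SGD(model.parameters(), lr=0.05)
for epoch in range(50):
    x_adv = pgd_attack(model, x, y)            # inner maximization
    loss = F.binary_cross_entropy_with_logits(model(x_adv), y)
    opt.zero_grad()
    loss.backward()                            # outer minimization on perturbed data
    opt.step()

The inner loop approximates the worst-case perturbation within the $\ell_\infty$ ball, and the outer loop minimizes the loss on those perturbed inputs; this is the min-max objective that adversarial training optimizes.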
Lay Summary: Adversarial training, like standard deep learning, enables deep networks to generalize well to unseen clean data. However, even though adversarial training can reduce the robust training error, a significant gap in robust generalization remains. We call this the Clean Generalization and Robust Overfitting (CGRO) phenomenon. In this study, we explore CGRO from two perspectives: model complexity and training dynamics. We show that a modestly sized neural network can achieve CGRO through robust memorization, while a fully robust classifier requires much more complex representations. We also analyze the training process of a convolutional network and identify a three-stage phase transition during learning, which leads to robust memorization and explains the CGRO effect. Our theoretical analysis is supported by experiments on real-world image recognition datasets.
Primary Area: Deep Learning->Theory
Keywords: deep learning theory, adversarial training, clean generalization and robust overfitting, representation complexity, training dynamics, feature learning theory
Submission Number: 7158