Understanding The Role of Adversarial Regularization in Supervised Learning

Anonymous

23 Oct 2020 (modified: 05 May 2023) · Submitted to NeurIPS 2020 Deep Inverse Workshop
TL;DR: We provide theoretical justification for why supervised learning with adversarial regularization performs better than sole supervision.
Keywords: adversarial regularization, deep learning theory, optimization
Abstract: Despite numerous attempts to provide empirical evidence that adversarial regularization outperforms sole supervision in various inverse problems, a theoretical understanding of this phenomenon remains elusive. In this study, we aim to resolve, at a fundamental level, whether adversarial regularization indeed performs better than sole supervision. To this end, we study the vanishing gradient issue, asymptotic iteration complexity, gradient flow, and provable convergence guarantees in the context of sole supervision and adversarial regularization. The key ingredient is a theoretical justification, supported by empirical evidence, of adversarial acceleration in gradient descent. In addition, motivated by a recently introduced unit-wise capacity based generalization bound, we analyze the generalization error in the adversarial framework. Guided by our observations, we cast doubt on the ability of this measure to explain generalization. We therefore leave it as an open question to explore new measures that can explain generalization behavior in adversarial learning.
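For concreteness, the comparison in the abstract can be phrased as follows; this is an illustrative formulation, not necessarily the paper's exact objective. Sole supervision minimizes a reconstruction loss alone, whereas adversarial regularization adds a discriminator-based penalty with weight $\lambda$:

$$\min_\theta \; \mathbb{E}_{(x,y)}\big[\ell(f_\theta(y), x)\big] \qquad \text{vs.} \qquad \min_\theta \max_\phi \; \mathbb{E}_{(x,y)}\big[\ell(f_\theta(y), x)\big] + \lambda\, \mathcal{L}_{\mathrm{adv}}(f_\theta, D_\phi),$$

where $f_\theta$ is the reconstruction network, $D_\phi$ the discriminator, and $\lambda > 0$ the regularization weight; setting $\lambda = 0$ recovers sole supervision.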
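Below is a minimal PyTorch sketch of one training step under such adversarial regularization, assuming a GAN-style discriminator. The toy networks, losses, and the weight lambda_adv are hypothetical placeholders, not the paper's implementation; dropping the adversarial term recovers sole supervision.

```python
import torch
import torch.nn as nn

# Hypothetical toy networks; the paper does not specify architectures.
generator = nn.Linear(16, 16)       # reconstruction network f_theta: measurements -> signal
discriminator = nn.Linear(16, 1)    # discriminator D_phi: signal -> realism logit

g_opt = torch.optim.SGD(generator.parameters(), lr=1e-2)
d_opt = torch.optim.SGD(discriminator.parameters(), lr=1e-2)

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()
lambda_adv = 0.1  # illustrative weight on the adversarial term

def train_step(y, x):
    """One step of adversarially regularized supervision.

    y: measurements, x: ground-truth signals.
    Sole supervision corresponds to lambda_adv = 0 (only the MSE term).
    """
    real = torch.ones(x.size(0), 1)
    fake = torch.zeros(x.size(0), 1)

    # Discriminator update: distinguish ground truth from reconstructions.
    x_hat = generator(y).detach()
    d_loss = bce(discriminator(x), real) + bce(discriminator(x_hat), fake)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: supervised loss plus adversarial regularizer.
    x_hat = generator(y)
    g_loss = mse(x_hat, x) + lambda_adv * bce(discriminator(x_hat), real)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return g_loss.item()

# Usage with random stand-in data.
y, x = torch.randn(8, 16), torch.randn(8, 16)
print(train_step(y, x))
```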