Weak Adversarial Boosting

ICLR 2017 workshop submission (modified: 17 Feb 2017)
Abstract: The "adversarial training" methods have recently been emerging as a promising avenue of research. Broadly speaking these methods achieve efficient training as well as boosted performance via an adversarial choice of data, features, or models. However, since the inception of the Generative Adversarial Nets (GAN), much of the attention is focussed on adversarial "models", i.e., machines learning by pursuing competing goals. In this note we investigate the effectiveness of several (weak) sources of adversarial "data" and "features". In particular we demonstrate: (a) low precision classifiers can be used as a source of adversarial data-sample closer to the decision boundary (b) training on these adversarial data-sample can give significant boost to the precision and recall compared to the non-adversarial sample. We also document the use of these methods for improving the performance of classifiers when only limited (and sometimes no) labeled data is available.
TL;DR: Training on adversarial data generated via low-precision classifiers can boost performance when only a small amount of labeled data is available.
Keywords: Semi-Supervised Learning
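
The abstract describes the method only at a high level. Below is a minimal sketch of one plausible reading of steps (a) and (b), using scikit-learn on synthetic data: a weak classifier trained on a small labeled pool flags unlabeled points near its decision boundary, and these points (with pseudo-labels) are added back for retraining. The choice of classifier, the 0.1 boundary threshold, and the pseudo-labeling step are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of "weak adversarial boosting" (assumed reading, not the
# authors' code): a low-precision classifier supplies "adversarial" data
# samples near the decision boundary for semi-supervised retraining.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simulate label scarcity: only a small labeled pool is available.
X_lab, y_lab = X_train[:100], y_train[:100]
X_unlab = X_train[100:]

# (a) A weak, low-precision classifier trained on the small labeled pool.
weak = LogisticRegression(max_iter=200).fit(X_lab, y_lab)

# Unlabeled points whose predicted probability is near 0.5 sit close to
# the weak model's decision boundary -- treat these as adversarial data.
proba = weak.predict_proba(X_unlab)[:, 1]
near_boundary = np.abs(proba - 0.5) < 0.1   # illustrative threshold
X_adv = X_unlab[near_boundary]
y_adv = weak.predict(X_adv)                 # pseudo-labels (assumption)

# (b) Retrain on the labeled pool plus the boundary-proximal samples.
boosted = LogisticRegression(max_iter=200).fit(
    np.vstack([X_lab, X_adv]), np.concatenate([y_lab, y_adv]))

# Compare precision/recall of the weak and retrained classifiers.
for name, clf in [("weak", weak), ("boosted", boosted)]:
    pred = clf.predict(X_test)
    print(name, precision_score(y_test, pred), recall_score(y_test, pred))
```

Under this reading, the "adversarial" choice of data is simply a confidence-based selection rule; any mechanism for surfacing boundary-proximal samples could be substituted.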