Tangent-Normal Adversarial Regularization for Semi-supervised Learning

27 Sept 2018 (modified: 22 Oct 2023) · ICLR 2019 Conference Withdrawn Submission · Readers: Everyone
Abstract: The ever-increasing size of modern datasets, combined with the difficulty of obtaining label information, has made semi-supervised learning of significant practical importance in modern machine learning applications. Compared to supervised learning, the key difficulty in semi-supervised learning is how to make full use of the unlabeled data. To exploit the manifold information provided by unlabeled data, we propose a novel regularization, called tangent-normal adversarial regularization, composed of two complementary parts that jointly enforce smoothness along two directions crucial for semi-supervised learning. One part is applied along the tangent space of the data manifold, enforcing local invariance of the classifier on the manifold, while the other is applied on the normal space orthogonal to the tangent space, imposing robustness of the classifier against the noise that causes the observed data to deviate from the underlying data manifold. Both regularizers are realized through virtual adversarial training. Our method achieves state-of-the-art performance on semi-supervised learning tasks on both artificial and practical datasets.
Keywords: semi-supervised learning, manifold regularization, adversarial training
TL;DR: We propose a novel manifold regularization strategy based on adversarial training, which can significantly improve the performance of semi-supervised learning.
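
The abstract describes the method only at a high level, but its core idea, a virtual-adversarial-training penalty computed separately within an estimated tangent space and within its orthogonal complement, can be illustrated with a minimal PyTorch sketch. Everything below is an assumption rather than the authors' implementation: the classifier `clf`, the per-example orthonormal tangent basis `tangent_basis` (which in practice would have to be estimated, e.g. from a generative model), and the radii `eps_t`, `eps_n` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F


def kl_div(p_logits, q_logits):
    # KL(p || q) between the softmax distributions defined by two logit tensors.
    p = F.softmax(p_logits, dim=1)
    return (p * (F.log_softmax(p_logits, dim=1)
                 - F.log_softmax(q_logits, dim=1))).sum(dim=1).mean()


def vat_loss_in_subspace(clf, x, project, eps=1.0, xi=1e-6, n_power=1):
    # Virtual-adversarial loss with the perturbation constrained to a subspace.
    # `project` maps a perturbation onto the chosen subspace (tangent or normal);
    # the adversarial direction is found with the usual VAT power-iteration step.
    with torch.no_grad():
        p_logits = clf(x)
    d = project(torch.randn_like(x))
    for _ in range(n_power):
        d = xi * F.normalize(d.flatten(1), dim=1).view_as(x)
        d.requires_grad_(True)
        loss = kl_div(p_logits, clf(x + d))
        d = project(torch.autograd.grad(loss, d)[0]).detach()
    r_adv = eps * F.normalize(d.flatten(1), dim=1).view_as(x)
    return kl_div(p_logits, clf(x + r_adv))


def tangent_normal_regularizer(clf, x, tangent_basis, eps_t=1.0, eps_n=1.0):
    # tangent_basis: (batch, D, k) with orthonormal columns spanning the
    # estimated tangent space of the data manifold at each flattened input.
    def proj_tangent(d):
        coeff = torch.einsum('bdk,bd->bk', tangent_basis, d.flatten(1))
        return torch.einsum('bdk,bk->bd', tangent_basis, coeff).view_as(d)

    def proj_normal(d):
        return d - proj_tangent(d)

    # Tangent part: smoothness along the manifold. Normal part: robustness
    # to off-manifold noise. The total penalty is added to the supervised loss.
    return (vat_loss_in_subspace(clf, x, proj_tangent, eps=eps_t)
            + vat_loss_in_subspace(clf, x, proj_normal, eps=eps_n))
```

In this sketch the two penalties would simply be added to the ordinary supervised loss on the labeled batch; how the tangent basis is actually estimated is the part of the method the abstract leaves to the paper.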
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/arxiv:1808.06088/code)