Cross-View Training for Semi-Supervised Learning
Nov 03, 2017 (modified: Nov 03, 2017) · ICLR 2018 Conference Blind Submission
Abstract: We present Cross-View Training (CVT), a simple but effective method for deep semi-supervised learning. On labeled examples, the model is trained with standard cross-entropy loss. On an unlabeled example, the model first performs inference (acting as a "teacher") to produce soft targets. The model then learns from these soft targets (acting as a "student"). We deviate from prior work by adding multiple auxiliary student softmax layers to the model. The input to each student layer is a sub-network of the full model that has a restricted view of the input (e.g., only seeing one region of an image). The students can learn from the teacher (the full model) because the teacher sees more of each example. Concurrently, the students improve the quality of the representations used by the teacher as they learn to make predictions with limited data. We propose variants of our method for CNN image classifiers and BiLSTM sequence taggers. When combined with Virtual Adversarial Training, CVT improves upon the current state-of-the-art on semi-supervised CIFAR-10 and semi-supervised SVHN. We also apply CVT to train semi-supervised sequence taggers on four natural language processing tasks using hundreds of millions of sentences of unlabeled data. The resulting models improve upon or are competitive with the current state-of-the-art on every task.
TL;DR: Self-training with different views of the input gives excellent results for semi-supervised image recognition and sequence tagging.
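The teacher/student consistency objective described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes KL divergence as the teacher-student distance on soft targets and represents each "restricted view" simply as separate student logits, omitting the sub-network details and the supervised cross-entropy term. The function name `cvt_unlabeled_loss` is ours.

```python
import math

def softmax(logits):
    """Convert a list of logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_div(p, q, eps=1e-12):
    """KL(p || q): how far a student's distribution q is from teacher targets p."""
    return sum(pi * (math.log(pi + eps) - math.log(qi + eps))
               for pi, qi in zip(p, q))

def cvt_unlabeled_loss(teacher_logits, student_logits_list):
    """Consistency loss on one unlabeled example.

    The teacher (full model, full view) produces soft targets; each
    auxiliary student softmax layer (restricted view) is trained to
    match them. In the real method no gradient flows through the
    teacher's targets; here they are just treated as fixed numbers.
    """
    soft_targets = softmax(teacher_logits)
    losses = [kl_div(soft_targets, softmax(s)) for s in student_logits_list]
    return sum(losses) / len(losses)
```

A student whose logits already agree with the teacher contributes (near-)zero loss, so the unlabeled objective only pushes on views whose predictions diverge from the full-view prediction.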