Cross-View Training for Semi-Supervised Learning

15 Feb 2018 (modified: 03 Apr 2024) · ICLR 2018 Conference Blind Submission
Abstract: We present Cross-View Training (CVT), a simple but effective method for deep semi-supervised learning. On labeled examples, the model is trained with standard cross-entropy loss. On an unlabeled example, the model first performs inference (acting as a "teacher") to produce soft targets. The model then learns from these soft targets (acting as a "student"). We deviate from prior work by adding multiple auxiliary student prediction layers to the model. The input to each student layer is a sub-network of the full model that has a restricted view of the input (e.g., only seeing one region of an image). The students can learn from the teacher (the full model) because the teacher sees more of each example. Concurrently, the students improve the quality of the representations used by the teacher as they learn to make predictions with limited data. When combined with Virtual Adversarial Training, CVT improves upon the current state-of-the-art on semi-supervised CIFAR-10 and semi-supervised SVHN. We also apply CVT to train models on five natural language processing tasks using hundreds of millions of sentences of unlabeled data. On all tasks CVT substantially outperforms supervised learning alone, resulting in models that improve upon or are competitive with the current state-of-the-art.
TL;DR: Self-training with different views of the input gives excellent results for semi-supervised image recognition, sequence tagging, and dependency parsing.
Keywords: semi-supervised learning, image recognition, sequence tagging, dependency parsing
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [CoNLL](https://paperswithcode.com/dataset/conll-1), [CoNLL-2000](https://paperswithcode.com/dataset/conll-2000-1), [Penn Treebank](https://paperswithcode.com/dataset/penn-treebank), [SVHN](https://paperswithcode.com/dataset/svhn)
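The teacher/student consistency step on unlabeled data, as described in the abstract, can be illustrated with a minimal sketch. The model sizes, the linear "classifiers", and the top/bottom-half views below are hypothetical stand-ins (the paper uses deep networks and task-specific views); the sketch only shows the core idea: the full model produces soft targets, and each restricted-view student is trained toward them with cross-entropy.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical tiny setup: linear classifiers over a flattened 8x8 "image".
# The teacher sees all 64 pixels; each student sees only one 4x8 half.
n_classes = 3
W_teacher = rng.normal(size=(64, n_classes)) * 0.1
W_students = [rng.normal(size=(32, n_classes)) * 0.1 for _ in range(2)]

def cvt_unlabeled_loss(x):
    """Cross-view consistency loss for one unlabeled 8x8 example."""
    # Teacher: inference on the full view produces soft targets
    # (treated as fixed, i.e. no gradient flows through them).
    p_teacher = softmax(x.reshape(-1) @ W_teacher)
    # Students: restricted views of the same input.
    views = [x[:4].reshape(-1), x[4:].reshape(-1)]
    loss = 0.0
    for W_s, view in zip(W_students, views):
        p_student = softmax(view @ W_s)
        # Cross-entropy of the student prediction against the teacher targets.
        loss += -(p_teacher * np.log(p_student + 1e-12)).sum()
    return loss / len(W_students)

x_unlabeled = rng.normal(size=(8, 8))
loss = cvt_unlabeled_loss(x_unlabeled)  # nonnegative scalar
```

In the actual method this unlabeled-data loss is minimized jointly with the standard supervised cross-entropy loss on labeled examples, so the students' gradients also improve the shared representation the teacher relies on.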