Keywords: semi-supervised learning, semisupervised learning, semi-supervised, semisupervised, evaluation
TL;DR: We test state-of-the-art (SoTA) semi-supervised learning (SSL) algorithms against simple baselines and across different datasets. We find that non-SSL approaches are competitive, and that adding unlabeled data can hurt performance.
Abstract: Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. Approaches based on deep neural networks have recently proven successful on standard benchmark tasks. However, we argue that these benchmarks fail to address many issues that these algorithms would face in real-world applications. After creating a unified reimplementation of various widely-used SSL techniques, we test them in a suite of experiments designed to address these issues. We find that simple baselines which do not use unlabeled data can be competitive with the state-of-the-art, that SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and that performance can degrade substantially when the unlabeled dataset contains out-of-class examples.