Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
TL;DR: We reduce the error rate from 5.12% to 4.18% on SVHN with 500 labels, and achieve a 4.35% error rate on SVHN with 250 labels.
Abstract: The recently proposed Temporal Ensembling has achieved state-of-the-art results on several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels.
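The core mechanism described above can be illustrated with a minimal sketch. The two functions below are illustrative, not the authors' implementation: `ema_update` averages model weights (the Mean Teacher idea, with `alpha` as an assumed smoothing hyperparameter), and `consistency_loss` penalizes disagreement between student and teacher predictions. Because the teacher is updated every training step rather than once per epoch, its targets stay fresh even on large datasets.

```python
import numpy as np

def ema_update(teacher_weights, student_weights, alpha=0.99):
    """Mean Teacher weight update: exponential moving average of the
    student's weights into the teacher's weights.

        teacher <- alpha * teacher + (1 - alpha) * student

    Applied after every training step, unlike Temporal Ensembling,
    which updates its prediction targets only once per epoch.
    `alpha` is an assumed hyperparameter value for illustration.
    """
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_weights, student_weights)]

def consistency_loss(student_pred, teacher_pred):
    """Consistency cost: mean squared error between the student's
    predictions and the teacher's predictions on the same inputs."""
    return float(np.mean((np.asarray(student_pred)
                          - np.asarray(teacher_pred)) ** 2))

# Illustrative usage: one update step for a toy two-layer weight list.
teacher = [np.zeros(3), np.zeros(2)]
student = [np.ones(3), np.full(2, 2.0)]
teacher = ema_update(teacher, student, alpha=0.9)
loss = consistency_loss([0.2, 0.8], [0.3, 0.7])
```

In a full training loop, the total loss would combine a supervised classification cost on labeled examples with this consistency cost on all examples, with gradients flowing only through the student.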
Keywords: Computer vision, Deep learning, Semi-Supervised Learning
Conflicts: cai.fi, aalto.fi