Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data

Published: 12 Jan 2021, Last Modified: 05 May 2023
ICLR 2021 Oral
Readers: Everyone
Keywords: deep learning theory, domain adaptation theory, unsupervised learning theory, semi-supervised learning theory
Abstract: Self-training algorithms, which train a model to fit pseudolabels predicted by another previously-learned model, have been very successful for learning with unlabeled data using neural networks. However, the current theoretical understanding of self-training only applies to linear models. This work provides a unified theoretical analysis of self-training with deep networks for semi-supervised learning, unsupervised domain adaptation, and unsupervised learning. At the core of our analysis is a simple but realistic “expansion” assumption, which states that a low-probability subset of the data must expand to a neighborhood with large probability relative to the subset. We also assume that neighborhoods of examples in different classes have minimal overlap. We prove that under these assumptions, the minimizers of population objectives based on self-training and input-consistency regularization will achieve high accuracy with respect to ground-truth labels. By using off-the-shelf generalization bounds, we immediately convert this result to sample complexity guarantees for neural nets that are polynomial in the margin and Lipschitzness. Our results help explain the empirical successes of recently proposed self-training algorithms which use input consistency regularization.
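For intuition, here is a minimal formal sketch of the expansion condition described in the abstract; the notation ($P$, $\mathcal{N}(S)$, and the parameters $a$, $c$) is chosen for illustration rather than quoted from this page. Writing $P$ for the distribution over unlabeled examples and $\mathcal{N}(S)$ for the union of neighborhoods of points in a subset $S$, expansion roughly requires
\[
P\big(\mathcal{N}(S)\big) \;\ge\; \min\{\, c \cdot P(S),\; 1 \,\}
\qquad \text{for every subset } S \text{ with } P(S) \le a, \text{ where } c > 1,
\]
together with the separation condition that neighborhoods of examples from different classes have minimal overlap. Under these two conditions, the abstract's claim is that minimizers of the population objective combining self-training with input-consistency regularization achieve high accuracy on the ground-truth labels.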
One-sentence Summary: This paper provides accuracy guarantees for self-training with deep networks using polynomially many unlabeled samples for semi-supervised learning, unsupervised domain adaptation, and unsupervised learning.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics