On Non-Random Missing Labels in Semi-Supervised Learning

29 Sept 2021 (edited 13 Mar 2022) · ICLR 2022 Poster
  • Keywords: Semi-Supervised Learning, Missing Not At Random, Image Classification
  • Abstract: Semi-Supervised Learning (SSL) is fundamentally a missing-label problem, in which the Missing Not At Random (MNAR) setting is more realistic and challenging than the widely adopted yet naive Missing Completely At Random (MCAR) assumption, where labeled and unlabeled data share the same class distribution. Unlike existing SSL solutions that overlook the role of ''class'' in causing the non-randomness, e.g., users are more likely to label popular classes, we explicitly incorporate ''class'' into SSL. Our method is three-fold: 1) We propose Class-Aware Propensity (CAP), which exploits the unlabeled data to train an improved classifier from the biased labeled data. 2) To encourage training on rare classes, for which the model is low-recall but high-precision and thus discards too many pseudo-labeled samples, we propose Class-Aware Imputation (CAI), which dynamically decreases (or increases) the pseudo-label assignment threshold for rare (or frequent) classes. 3) Overall, we integrate CAP and CAI into a Class-Aware Doubly Robust (CADR) estimator for training an unbiased SSL model. Under various MNAR settings and ablations, our method not only significantly outperforms existing baselines but also surpasses other label-bias removal SSL methods.
  • One-sentence Summary: We present a principled class-aware doubly robust solution to handle non-random missing labels in semi-supervised learning.
  • Supplementary Material: zip
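The class-aware imputation idea in the abstract (lowering the pseudo-label threshold for rare classes, raising it for frequent ones) can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact CAI rule: the inverse-frequency scaling, the function names, and the clipping bounds are all assumptions for illustration.

```python
import numpy as np

def class_aware_thresholds(class_freq, base_tau=0.95, min_tau=0.5):
    """Hypothetical per-class pseudo-label thresholds: rarer classes get
    proportionally lower thresholds so more of their pseudo-labels survive
    (an assumed scheme, not the paper's exact CAI formula)."""
    freq = np.asarray(class_freq, dtype=float)
    rel = freq / freq.max()  # 1.0 for the most frequent class
    return np.clip(base_tau * rel, min_tau, base_tau)

def impute_pseudo_labels(probs, thresholds):
    """Keep a pseudo-label only if its confidence clears the threshold of
    its predicted class; return (indices, labels) of accepted samples."""
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    keep = conf >= thresholds[preds]
    return np.nonzero(keep)[0], preds[keep]
```

With a fixed global threshold, a rare class whose classifier is high-precision but low-recall would see most of its pseudo-labels discarded; a per-class threshold like the one above admits more of them while keeping a stricter bar for frequent classes.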
