Abstract: We address the challenging problem of Long-Tailed Semi-Supervised Learning (LTSSL), where labeled data exhibit an imbalanced class distribution and unlabeled data follow an unknown distribution. Unlike in balanced SSL, the generated pseudo-labels are skewed towards head classes, intensifying the training bias. This phenomenon is further amplified when the class distributions of the labeled and unlabeled datasets are mismatched, as even more unlabeled data are then mislabeled as head classes. To solve this problem, we propose a novel method named ComPlementary Experts (CPE). Specifically,
we train multiple experts to model various class distributions, each yielding high-quality pseudo-labels under one form of class distribution. In addition, we introduce Classwise Batch Normalization for CPE to avoid the performance degradation caused by the feature distribution mismatch between head
and non-head classes. CPE achieves state-of-the-art performance on the CIFAR-10-LT, CIFAR-100-LT, and STL-10-LT benchmarks. For instance, on CIFAR-10-LT, CPE improves test accuracy by more than 2.22% compared to baseline methods.
Code is available at https://github.com/machengcheng2016/CPE-LTSSL.
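The abstract gives no implementation details, so the following is a minimal, hypothetical PyTorch sketch of the two ideas it names: several expert classification heads over shared features, and a batch-normalization layer that keeps separate statistics for head and non-head samples. All names (ClasswiseBatchNorm, ComplementaryExperts, num_experts, is_head) and the binary head/non-head split are illustrative assumptions, not the authors' code.

```python
# Hedged sketch only; not the authors' implementation of CPE.
import torch
import torch.nn as nn

class ClasswiseBatchNorm(nn.Module):
    """Keeps separate BN statistics for samples (pseudo-)labeled as head
    vs. non-head classes, so head-class features do not dominate the
    running statistics applied to tail classes (assumed reading of the
    abstract's 'Classwise Batch Normalization')."""
    def __init__(self, num_features):
        super().__init__()
        self.bn_head = nn.BatchNorm1d(num_features)
        self.bn_tail = nn.BatchNorm1d(num_features)

    def forward(self, x, is_head):
        out = torch.empty_like(x)
        if is_head.any():
            out[is_head] = self.bn_head(x[is_head])
        if (~is_head).any():
            out[~is_head] = self.bn_tail(x[~is_head])
        return out

class ComplementaryExperts(nn.Module):
    """Multiple expert heads over shared features, each intended to fit a
    different assumed class distribution (e.g. long-tailed, uniform,
    inverse) and to produce pseudo-labels under that distribution."""
    def __init__(self, feat_dim, num_classes, num_experts=3):
        super().__init__()
        self.norm = ClasswiseBatchNorm(feat_dim)
        self.experts = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_experts)]
        )

    def forward(self, features, is_head):
        feats = self.norm(features, is_head)
        return [expert(feats) for expert in self.experts]

# Toy usage: 8 feature vectors with a boolean mask marking head-class samples.
model = ComplementaryExperts(feat_dim=128, num_classes=10)
is_head = torch.tensor([True, True, True, True, False, False, False, False])
logits_per_expert = model(torch.randn(8, 128), is_head)
```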