CLESP: Collaborative Learning with Ensemble Soft Pseudo-Labels

ICLR 2026 Conference Submission 20034 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Ensemble, Domain Adaptation, Semi-supervised Learning, Pseudo-labeling, Self-training
TL;DR: We propose an unsupervised learning algorithm for self-training a set of pre-trained image classifiers on unlabeled data.
Abstract: In this work, we present Collaborative Learning with Ensemble Soft Pseudo-Labels (CLESP), a method for updating a set of pre-trained classifiers on unlabeled data at test time. CLESP improves the classification performance of ensembles by allowing member classifiers to learn from each other during inference. Specifically, for each input we identify the classifier whose soft output assigns the highest probability to the majority-voted class (a high-confidence, low-entropy softmax) and minimize the cross-entropy between that output and the outputs of all other classifiers. The majority-voted model that the others learn from may change from sample to sample, so the group dynamically learns from itself without labels. Our method is distinctive in optimizing all trainable parameters of each model, and it applies in both the single-sample and batch settings. In experiments with sets of independently pre-trained base classifiers of distinct architectures, we find that CLESP significantly reduces the generalization error of ensembles on CIFAR-10, CIFAR-100, and ImageNet and their corrupted counterparts, while also reducing the entropy of the classifier soft outputs.
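To make the update described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of one CLESP step on an unlabeled batch. It is not the authors' implementation: the helper name `clesp_step`, the use of one optimizer per model, and the tie-breaking behavior of the majority vote are assumptions, and details such as learning rates and which parameters are trainable are omitted.

```python
import torch
import torch.nn.functional as F

def clesp_step(models, optimizers, x):
    """One collaborative test-time update on an unlabeled batch x.

    models: list of pre-trained classifiers (possibly distinct architectures).
    optimizers: one optimizer per model over its trainable parameters.
    (Names and structure are illustrative, not the paper's code.)
    """
    with torch.no_grad():
        # Soft outputs of every ensemble member: (M, B, C).
        probs = torch.stack([F.softmax(m(x), dim=-1) for m in models])
        # Majority-voted class per sample: (B,).
        majority = probs.argmax(dim=-1).mode(dim=0).values
        # Probability each member assigns to the majority class: (M, B).
        p_maj = probs.gather(
            2, majority.view(1, -1, 1).expand(len(models), -1, -1)
        ).squeeze(-1)
        # Per-sample teacher: the member most confident in the majority class.
        teacher = p_maj.argmax(dim=0)                               # (B,)
        # Its softmax is the soft pseudo-label (detached via no_grad): (B, C).
        targets = probs[teacher, torch.arange(x.size(0), device=x.device)]

    # Pull every member toward the soft pseudo-label. On samples where a
    # member is itself the teacher, this cross-entropy has zero gradient at
    # the current parameters, so effectively only the other classifiers move.
    for m, opt in zip(models, optimizers):
        log_p = F.log_softmax(m(x), dim=-1)
        loss = -(targets * log_p).sum(dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Because the soft targets are detached, each classifier is only pulled toward the per-sample teacher's distribution, which is consistent with the abstract's description of minimizing cross-entropy between the most confident member's softmax and all other classifiers.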
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 20034