Do not trust what you trust: Miscalibration in Semi-supervised Learning

Published: 30 Sept 2024, Last Modified: 30 Sept 2024. Accepted by TMLR. License: CC BY 4.0
Abstract: State-of-the-art semi-supervised learning (SSL) approaches rely on highly confident predictions to serve as pseudo-labels that guide the training on unlabeled samples. An inherent drawback of this strategy stems from the quality of the uncertainty estimates, as pseudo-labels are filtered only based on their degree of uncertainty, regardless of the correctness of the underlying predictions. Thus, assessing and enhancing the uncertainty of network predictions is of paramount importance in the pseudo-labeling process. In this work, we empirically demonstrate that SSL methods based on pseudo-labels are significantly miscalibrated, and formally identify the minimization of the min-entropy, a lower bound of the Shannon entropy, as a potential cause of this miscalibration. To alleviate this issue, we integrate a simple penalty term that enforces the logit distances of the predictions on unlabeled samples to remain low, preventing the network predictions from becoming overconfident. Comprehensive experiments on a variety of SSL image classification benchmarks demonstrate that the proposed solution systematically improves the calibration performance of relevant SSL models while also enhancing their discriminative power, making it an appealing addition for tackling SSL tasks.
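The abstract describes a penalty that keeps the logit distances of predictions on unlabeled samples low, discouraging overconfident outputs. A minimal sketch of that idea is below; the function name and the exact form of the penalty (mean gap between the top logit and the remaining logits) are illustrative assumptions, not the paper's precise formulation, which is defined in the full text.

```python
import numpy as np

def logit_distance_penalty(logits: np.ndarray) -> float:
    """Hypothetical sketch: penalize the average gap between the
    largest logit and all logits in each row. Peaked (overconfident)
    logit vectors incur a larger penalty than flat ones."""
    max_logit = logits.max(axis=1, keepdims=True)  # shape (batch, 1)
    # self-gap of the max logit is 0, so this averages the distances
    # from the top logit to every class logit
    return float(np.mean(max_logit - logits))

# A peaked prediction is penalized more than a near-uniform one:
peaked = np.array([[10.0, 0.0, 0.0]])  # very confident
flat = np.array([[1.0, 0.9, 0.8]])     # nearly uniform
# logit_distance_penalty(peaked) > logit_distance_penalty(flat)
```

In training, such a term would typically be added (with a weighting coefficient) to the usual pseudo-label loss on unlabeled samples, nudging the network toward less extreme logits without changing the predicted class.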
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Changed \cite to \citep throughout. Integrated the requested changes.
Video: https://youtu.be/Gj3-NgXo9Wk
Code: https://github.com/ShambhaviCodes/miscalibration-ssl
Assigned Action Editor: ~Colin_Raffel1
Submission Number: 2735