SCAD: Super-Class-Aware Debiasing for Long-Tailed Semi-Supervised Learning

ICLR 2026 Conference Submission 10525 Authors

Published: 26 Jan 2026, Last Modified: 26 Jan 2026 · License: CC BY 4.0
Keywords: semi-supervised learning
Abstract: In long-tailed semi-supervised learning (LTSSL), pseudo-labeling often creates a vicious cycle of bias amplification, a problem that recent state-of-the-art methods attempt to mitigate using logit adjustment (LA). However, their adjustment schemes, inherited from LA, remain inherently hierarchy-agnostic, failing to account for the semantic relationships between classes. In this regard, we identify a critical yet overlooked problem of intra-super-class imbalance, in which a toxic combination of high semantic similarity and severe local imbalance within each super-class hinders effective LTSSL. This problem causes the model to reinforce its own errors, leading to representation overshadowing. To break this cycle, we propose Super-Class-Aware Debiasing (SCAD), a new framework that performs dynamic, super-class-aware logit adjustment. SCAD leverages the latent semantic structure between classes to focus its corrective power on the most confusable groups, effectively resolving the local imbalances. Our extensive experiments validate that SCAD achieves new state-of-the-art performance, demonstrating the necessity of a super-class-aware approach for robust debiasing.
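The abstract does not give SCAD's equations, so the following is only a minimal PyTorch sketch of what a super-class-conditional variant of logit adjustment could look like under one plausible reading: replacing the global class prior of standard LA with a prior computed within each super-class, so the correction concentrates on locally imbalanced, highly confusable classes. All names here (superclass_logit_adjustment, class_counts, superclass_ids, tau) are hypothetical illustrations, not the paper's actual method.

```python
import torch

def superclass_logit_adjustment(
    logits: torch.Tensor,         # (batch, num_classes) raw model outputs
    class_counts: torch.Tensor,   # (num_classes,) labeled-sample count per class
    superclass_ids: torch.Tensor, # (num_classes,) super-class index of each class
    tau: float = 1.0,             # adjustment strength (temperature)
) -> torch.Tensor:
    """Hypothetical sketch: logit adjustment with a super-class-local prior."""
    counts = class_counts.float()
    # Local prior: each class's frequency within its own super-class,
    # capturing the intra-super-class imbalance described in the abstract.
    local_prior = torch.empty_like(counts)
    for s in superclass_ids.unique():
        mask = superclass_ids == s
        local_prior[mask] = counts[mask] / counts[mask].sum()
    # Subtract the log of the local prior (cf. standard logit adjustment,
    # which subtracts the log of the *global* prior), so debiasing acts
    # most strongly within each confusable super-class group.
    return logits - tau * torch.log(local_prior + 1e-12)

# Toy usage: 4 classes grouped into 2 super-classes with severe local imbalance.
logits = torch.randn(8, 4)
class_counts = torch.tensor([900, 100, 80, 20])  # hypothetical counts
superclass_ids = torch.tensor([0, 0, 1, 1])
adjusted = superclass_logit_adjustment(logits, class_counts, superclass_ids)
```

In this reading, a tail class that is rare globally but dominant inside its own super-class would receive a weaker correction than under global LA, while a class overshadowed within its super-class would receive a stronger one.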
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 10525