LCGC: Learning from Consistency Gradient Conflicting for Class-Imbalanced Semi-Supervised Debiasing

Published: 2025 · Last Modified: 05 Nov 2025 · AAAI 2025 · CC BY-SA 4.0
Abstract: Classifiers often learn to be biased when trained on class-imbalanced datasets under the semi-supervised learning (SSL) setting. While previous work re-balances the classifier by subtracting the logits of a class-irrelevant image, we instead exploit a cheaper form of consistency gradients that is widely applicable to various class-imbalanced SSL (CISSL) models. We theoretically show that refining pseudo-labels with a baseline image (a solid-color image without any patterns) in a basic SSL algorithm implicitly performs integrated-gradient-flow training, which can improve attribution ability. Based on this analysis, we propose LCGC, a debiasing scheme that learns from consistency gradient conflict by encouraging biased class predictions during training. We intentionally update the pseudo-labels whose gradients conflict with the debiased logits, i.e., the optimization direction offered by the over-imbalanced classifier predictions. At test time, we debias the predictions by subtracting the baseline image logits. Extensive experiments demonstrate that our method significantly improves the prediction accuracy of existing CISSL models on public benchmarks.
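The test-time step of the abstract (subtracting the logits of a solid-color baseline image from each prediction) can be sketched as follows. This is a minimal PyTorch illustration, not the paper's implementation: the function name `debiased_predict`, the choice of baseline color value, and the assumption that `model` returns raw class logits are all illustrative assumptions.

```python
import torch

@torch.no_grad()
def debiased_predict(model, images, baseline_value=0.5):
    """Hypothetical sketch of the test-time debiasing described in the
    abstract: subtract the logits of a patternless, solid-color baseline
    image from each test image's logits before taking the argmax.
    `baseline_value` (the solid color) is an assumption, not from the paper.
    """
    logits = model(images)                                   # (B, C) class logits
    baseline = torch.full_like(images[:1], baseline_value)   # one solid-color image
    baseline_logits = model(baseline)                        # (1, C) class-bias estimate
    debiased = logits - baseline_logits                      # broadcast-subtract the bias
    return debiased.argmax(dim=1)                            # (B,) debiased predictions
```

The intuition, per the abstract, is that a baseline image carries no class-relevant content, so its logits expose the classifier's imbalance-induced bias, which subtraction then removes at inference.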