Class-Adaptive Rectification with Experts for Robust Long-Tailed Noisy Label Learning

ICLR 2026 Conference Submission15295 Authors

19 Sept 2025 (modified: 08 Oct 2025), CC BY 4.0
Keywords: long-tail learning, noisy label learning
TL;DR: Long-Tailed Noisy Label Learning
Abstract: Real-world datasets frequently exhibit long-tailed class distributions alongside noisy labels, posing compounded challenges for robust learning. While recent methods have made progress, they often neglect the uneven impact of label noise across classes, resulting in insufficient correction for tail classes. This imbalance further introduces erroneous over-regularization on other classes, ultimately undermining long-tailed learning. To address these challenges, we propose Class-Adaptive Rectification with Experts (CARE), a parameter-efficient framework built upon vision–language models, which performs class-aware label correction by jointly leveraging three complementary sources of supervision: noisy observed labels, text embeddings, and image features. CARE further employs a class-adaptive Top-$K$ expert consensus mechanism, which assigns smaller $K$ to tail classes in order to extract reliable candidate labels and recalibrate class frequencies. This refinement yields faithful class-frequency estimation, thereby enabling more reliable long-tailed calibration. We evaluate CARE on CIFAR-100-LTN, mini-ImageNet-LTN, and real-world datasets, including Food101N and WebVision-50. Across all benchmarks, CARE consistently surpasses recent state-of-the-art methods, achieving up to 3.0\% accuracy improvements in certain settings. The source code is temporarily available at https://anonymous.4open.science/r/CARE-9F10.
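The class-adaptive Top-$K$ expert consensus described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the tail-quantile heuristic, the fixed `k_head`/`k_tail` values, and all function and variable names are assumptions made for illustration. The idea shown is only that each expert proposes its per-sample Top-$K$ classes, with a smaller $K$ for tail classes, and the candidate-label set is the intersection (consensus) across experts.

```python
import numpy as np

def class_adaptive_topk_consensus(expert_probs, class_counts,
                                  k_head=5, k_tail=2, tail_quantile=0.3):
    """Hypothetical sketch of a class-adaptive Top-K expert consensus.

    expert_probs : (n_experts, n_samples, n_classes) softmax outputs.
    class_counts : observed (noisy) per-class frequencies.
    Returns a boolean candidate-label mask of shape (n_samples, n_classes).
    """
    n_experts, n_samples, n_classes = expert_probs.shape

    # Assumption: classes in the lowest frequency quantile are "tail"
    # and get the smaller K; the paper's actual assignment may differ.
    threshold = np.quantile(class_counts, tail_quantile)
    k_per_class = np.where(np.asarray(class_counts) <= threshold, k_tail, k_head)

    # Consensus = intersection of every expert's class-adaptive Top-K set.
    candidate = np.ones((n_samples, n_classes), dtype=bool)
    rows = np.arange(n_samples)[:, None]
    for e in range(n_experts):
        # order[i, r] = class at rank r for sample i (descending confidence)
        order = np.argsort(-expert_probs[e], axis=1)
        # rank[i, c] = rank position of class c for sample i
        rank = np.empty_like(order)
        rank[rows, order] = np.arange(n_classes)[None, :]
        # class c is a candidate for this expert iff its rank < K(c)
        candidate &= rank < k_per_class[None, :]
    return candidate
```

The mask can then be used to recalibrate class frequencies, e.g. `candidate.sum(axis=0)` gives a refined per-class count over the candidate labels, in the spirit of the frequency recalibration the abstract mentions.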
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 15295