Keywords: deep long-tailed learning, debiasing learning, energy-based learning
Abstract: In real-world applications, ensuring that model decisions are independent of the training data distribution is crucial for safely deploying models. To address the long-tailed problem, most existing approaches focus either on improving individual prediction quality or on enhancing aggregate evaluation. Although these methods improve overall performance, they often sacrifice performance on some classes, undermining the goals of long-tailed learning. We conduct a mathematical analysis of the limitations of the Empirical Risk Minimization (ERM) framework in long-tailed learning, examining both individual performance and aggregate evaluation. For individual evaluation, although the negative log-likelihood (NLL) metric is effective, it relies heavily on the softmax function, leading to poor discrimination and ambiguity when the probabilities of correct and incorrect predictions are similar. For aggregate evaluation, the naive estimator in ERM is biased and dominated by head classes. To overcome these challenges, we propose Re-Debias, a comprehensive framework combining a Residual-Energy score with a Debias estimator. The Residual-Energy score reflects prediction quality more sensitively than softmax-based scores, enhancing prediction precision and reducing ambiguity. The Debias estimator applies causal inference techniques to ensure unbiased estimates during the averaging process, correcting for the class-wise biases inherent in the naive estimator. Through extensive validation on long-tailed benchmarks, including training from scratch on iNaturalist18, ImageNet-LT, and CIFAR10/100-LT, as well as fine-tuning a Vision Transformer (ViT) on iNaturalist18, our method outperforms state-of-the-art algorithms. Our code and trained models will be made available upon publication of this paper.
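The abstract contrasts softmax-based scoring with an energy-based score, and the naive ERM average with a class-debiased average. Below is a minimal PyTorch sketch of those two general ideas only; the function names (`energy_score`, `class_balanced_risk`, etc.) and the inverse-frequency weighting are illustrative assumptions, not the paper's actual Re-Debias formulation.

```python
# Minimal sketch (assumptions, not the paper's Re-Debias method):
# 1) the standard energy score E(x) = -logsumexp(f(x)) versus max-softmax confidence;
# 2) a naive ERM loss average versus an inverse-frequency-weighted ("debiased") average.
import torch
import torch.nn.functional as F


def energy_score(logits: torch.Tensor) -> torch.Tensor:
    # Standard energy score; lower energy corresponds to higher model confidence.
    return -torch.logsumexp(logits, dim=-1)


def softmax_confidence(logits: torch.Tensor) -> torch.Tensor:
    # Max softmax probability, which saturates when competing logits are close.
    return F.softmax(logits, dim=-1).max(dim=-1).values


def naive_risk(losses: torch.Tensor) -> torch.Tensor:
    # Plain ERM average: head classes dominate because they contribute more samples.
    return losses.mean()


def class_balanced_risk(losses: torch.Tensor, labels: torch.Tensor,
                        class_counts: torch.Tensor) -> torch.Tensor:
    # Reweight each sample by the inverse of its class frequency
    # (an inverse-propensity-style correction), so each class
    # contributes equally to the aggregate estimate.
    weights = 1.0 / class_counts[labels].float()
    weights = weights / weights.sum()
    return (weights * losses).sum()


if __name__ == "__main__":
    logits = torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    losses = F.cross_entropy(logits, labels, reduction="none")
    class_counts = torch.randint(1, 100, (10,))  # toy long-tailed class counts
    print(energy_score(logits), softmax_confidence(logits))
    print(naive_risk(losses), class_balanced_risk(losses, labels, class_counts))
```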
Supplementary Material: pdf
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9782