Continual Learners are Viable Long-Tailed Recognizers

15 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Continual Learning, Long-Tailed Recognition, Imbalanced Learning
Abstract: We propose a series of theorems demonstrating that using Continual Learning (CL) to sequentially learn the majority and minority class subsets of a highly imbalanced dataset is an effective solution for Long-Tailed Recognition (LTR). First, we prove that, under the assumption of a strongly convex loss function, the weights of a learner trained on a long-tailed dataset are bounded within a neighborhood of the weights of the same learner trained strictly on the largest subset of that dataset. This yields a novel perspective: CL methods, which are designed to optimize weights so that a model performs well on multiple sets, are viable solutions for LTR. To validate this perspective, we first verify the predicted upper bound on the neighborhood radius using the MNIST-LT toy dataset. Next, we evaluate the efficacy of several CL strategies on standard LTR benchmarks (CIFAR100-LT, CIFAR10-LT, and ImageNet-LT), and show that standard CL methods achieve strong performance gains over both baseline models and approaches tailor-made for LTR. Finally, we assess the applicability of CL techniques to real-world data by exploring CL on the naturally imbalanced Caltech256 dataset and demonstrate their superiority over state-of-the-art models. Our work not only unifies LTR and CL but also paves the way for leveraging advances in CL methods to tackle the LTR challenge more effectively.
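To make the kind of bound the abstract claims concrete, here is a minimal sketch of a standard strong-convexity argument. The specific decomposition, the tail weight $\varepsilon$, and the modulus $\mu$ are our illustrative assumptions, not necessarily the paper's exact theorem statement.

```latex
% Illustrative sketch (our notation, not the paper's): assume the long-tailed
% loss is a mixture of head and tail losses, with tail fraction eps, and that
% L_LT is mu-strongly convex.
\[
\mathcal{L}_{\mathrm{LT}}(w)
  = (1-\varepsilon)\,\mathcal{L}_{\mathrm{head}}(w)
  + \varepsilon\,\mathcal{L}_{\mathrm{tail}}(w),
\qquad
w_{\mathrm{head}} = \arg\min_w \mathcal{L}_{\mathrm{head}}(w).
\]
% Since the head gradient vanishes at w_head,
% grad L_LT(w_head) = eps * grad L_tail(w_head), and mu-strong convexity of
% L_LT gives the neighborhood bound:
\[
\bigl\| w_{\mathrm{LT}}^{*} - w_{\mathrm{head}} \bigr\|
\;\le\;
\frac{\varepsilon}{\mu}\,
\bigl\| \nabla \mathcal{L}_{\mathrm{tail}}(w_{\mathrm{head}}) \bigr\|,
\]
% so the long-tailed optimum lies in an O(eps) neighborhood of the head-only
% optimum, shrinking as the tail fraction shrinks.
```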
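The recipe the abstract evaluates, sequentially learning the head subset and then the tail with a CL method, can be sketched as follows. This is our minimal toy illustration, not the authors' released code: the class-count threshold, the synthetic data, and the choice of an EWC-style quadratic penalty as the CL strategy are all assumptions made for the example.

```python
# Toy sketch of two-stage CL for LTR: train on head classes, then continue on
# tail classes with an EWC-style penalty pulling weights toward the head
# solution. All hyperparameters and the data are illustrative placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, Subset

def split_head_tail(labels, threshold):
    """Sample indices whose class frequency is >= threshold (head) or < (tail)."""
    counts = torch.bincount(labels)
    head = (counts[labels] >= threshold).nonzero(as_tuple=True)[0].tolist()
    tail = (counts[labels] < threshold).nonzero(as_tuple=True)[0].tolist()
    return head, tail

def estimate_fisher(model, loader):
    """Diagonal Fisher estimate from squared gradients on the head data."""
    ce = nn.CrossEntropyLoss()
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        model.zero_grad()
        ce(model(x), y).backward()
        for n, p in model.named_parameters():
            fisher[n] += p.grad.detach() ** 2
    return {n: f / len(loader) for n, f in fisher.items()}

def train(model, loader, opt, penalty=None):
    ce = nn.CrossEntropyLoss()
    for x, y in loader:
        opt.zero_grad()
        loss = ce(model(x), y)
        if penalty is not None:
            loss = loss + penalty(model)
        loss.backward()
        opt.step()

# Synthetic long-tailed toy data: class 0 is the head, classes 1-2 the tail.
torch.manual_seed(0)
X = torch.randn(600, 20)
y = torch.cat([torch.zeros(500), torch.ones(50), torch.full((50,), 2)]).long()
head_idx, tail_idx = split_head_tail(y, threshold=100)
data = TensorDataset(X, y)
head_loader = DataLoader(Subset(data, head_idx), batch_size=64, shuffle=True)
tail_loader = DataLoader(Subset(data, tail_idx), batch_size=64, shuffle=True)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Stage 1: learn the majority (head) subset.
train(model, head_loader, opt)

# Stage 2: learn the minority (tail) subset with a quadratic pull toward the
# head-stage weights, scaled by the Fisher estimate (EWC-style).
anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher = estimate_fisher(model, head_loader)
lam = 100.0
penalty = lambda m: lam * sum((fisher[n] * (p - anchor[n]) ** 2).sum()
                              for n, p in m.named_parameters())
train(model, tail_loader, opt, penalty=penalty)
```

The penalty is what keeps stage 2 inside the neighborhood of the head solution; swapping it for any other regularization- or replay-based CL strategy yields the other variants the abstract benchmarks.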
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 346