IKL: Boosting Long-Tail Recognition with Implicit Knowledge Learning

24 Sept 2023 (modified: 20 Feb 2024) · ICLR 2024 Conference Withdrawn Submission
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Long-tail Recognition
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: In visual long-tailed recognition, the long-tailed distribution of image representations raises two key challenges: (1) the training process exhibits great uncertainty (e.g., the same expert makes inconsistent predictions for different augmented views of the same sample), and (2) the model's predictions are markedly biased toward the head classes. To tackle these issues, we propose a novel method, termed Implicit Knowledge Learning (IKL), that extracts the knowledge hidden in the long-tail learning process to significantly improve long-tail recognition. IKL contains two core components: Implicit Uncertainty Regularization (IUR) and Implicit Correlation Labeling (ICL). The former, IUR, exploits the uncertainty of predictions over adjacent epochs and transfers the correct knowledge to reduce that uncertainty and improve recognition accuracy. The latter, ICL, reduces the bias introduced by one-hot labels by mining the implicit knowledge in the model: inter-class similarity information. Our approach is lightweight and can be plugged into existing long-tail learning methods, achieving state-of-the-art performance on popular long-tail benchmarks. The experimental results highlight the great potential of implicit knowledge learning for long-tail recognition. Our code will be open-sourced upon acceptance.
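
The abstract describes IUR only at a high level. Below is a minimal sketch of how an adjacent-epoch consistency term of this kind could look, assuming softmax outputs are cached from the previous epoch and "correct knowledge" means samples the model previously classified correctly; the function name `iur_loss` and all details are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def iur_loss(curr_logits, prev_probs, targets):
    """curr_logits: (B, C) logits at the current epoch.
    prev_probs:  (B, C) softmax outputs cached from the previous epoch.
    targets:     (B,)   ground-truth class indices."""
    # Keep only samples whose previous-epoch prediction was correct,
    # i.e., the "correct knowledge" worth transferring across epochs.
    correct = prev_probs.argmax(dim=1).eq(targets)
    if not correct.any():
        return curr_logits.new_zeros(())
    log_p = F.log_softmax(curr_logits[correct], dim=1)
    # KL(prev || curr): penalize current predictions that drift away from
    # previously-correct ones, damping epoch-to-epoch uncertainty.
    return F.kl_div(log_p, prev_probs[correct].detach(), reduction="batchmean")
```

This term would be added to the base long-tail loss with a weighting coefficient, which is consistent with the claim that IKL plugs into existing methods.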
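For ICL, one plausible reading is that inter-class similarity is read off the classifier's weight vectors and blended into the one-hot target, so related classes receive some probability mass. The sketch below follows that assumption; `icl_soft_labels` and its parameters are hypothetical, not the authors' code.

```python
import torch
import torch.nn.functional as F

def icl_soft_labels(classifier_weight, targets, alpha=0.9):
    """classifier_weight: (C, D) last-layer weight matrix, one row per class.
    targets: (B,) ground-truth class indices.
    alpha:   weight kept on the original one-hot label."""
    num_classes = classifier_weight.size(0)
    w = F.normalize(classifier_weight.detach(), dim=1)
    sim = w @ w.t()                      # (C, C) cosine similarities
    sim.fill_diagonal_(float("-inf"))    # drop self-similarity
    sim = F.softmax(sim, dim=1)          # row-normalized class correlations
    one_hot = F.one_hot(targets, num_classes).float()
    # Blend: mostly the hard label, plus a little mass on correlated classes.
    return alpha * one_hot + (1 - alpha) * sim[targets]
```

A cross-entropy against these soft targets, e.g. `loss = -(soft * F.log_softmax(logits, dim=1)).sum(dim=1).mean()`, would then replace the hard-label loss, reducing the head-class bias that one-hot labels induce.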
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9210