SWRM: Similarity Window Reweighting and Margins for Long-Tailed Recognition

22 Sept 2022 (modified: 13 Feb 2023) | ICLR 2023 Conference Withdrawn Submission | Readers: Everyone
Keywords: Long-tailed recognition, class re-balancing, reweighting, logit adjustment
Abstract: Real-world data usually obeys a long-tailed distribution. Many previous works focus only on the superficial phenomenon that tail classes lack samples in long-tailed datasets, without conducting an in-depth analysis of the datasets or of the model's predictions. In this paper, we find experimentally that, because of easily confused visual features between head and tail classes, a cross-entropy model is prone to misclassify tail samples as head classes with high appearance similarity. We propose a Similarity Window Reweighting and Margins (SWRM) algorithm to tackle this problem. Specifically, we pretrain a cross-entropy model to model category similarity, then slide a window over the modeling result to constrain the impact of similarity. With the help of the similarity window, we design weights for the different classes; we call this Similarity Window Reweighting (SWR). In addition, margins computed inside the similarity window are assigned to the different classes; we call this Similarity Window Margin (SWM). In a nutshell, SWR accounts for both category frequency differences and category similarity, so the weight coefficients it computes are more reasonable. SWM prompts the model to learn fine-grained features and improves its discriminative ability. Our methods therefore effectively alleviate the misclassification issue. To enhance the robustness and generalization of the model, we introduce a learnable similarity vector and further propose a Dynamic Similarity Window Reweighting and Margins (DySWRM) algorithm, which incurs lower computation cost than SWRM. Extensive experiments verify the effectiveness of our approaches and their superiority over state-of-the-art reweighting and logit adjustment methods.
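
The abstract gives only a high-level description of SWRM. As a rough illustration, a minimal PyTorch sketch of a similarity-window reweighting and margin scheme might look as follows; the similarity source, the window construction, and the exact weight and margin formulas here are all assumptions made for illustration, not the authors' actual formulation.

    import torch
    import torch.nn.functional as F

    def swrm_style_loss(logits, targets, class_freq, sim_matrix,
                        window=5, margin_scale=0.5):
        """Hypothetical SWRM-style loss. The window construction and the
        weight/margin formulas below are illustrative guesses, not the
        paper's actual equations.

        logits:     (B, C) model outputs
        targets:    (B,)   ground-truth labels
        class_freq: (C,)   per-class sample counts
        sim_matrix: (C, C) pairwise class similarity from a pretrained
                    cross-entropy model (e.g., cosine similarity of its
                    classifier weights -- also an assumption)
        """
        C = logits.size(1)

        # Similarity window: for each class, keep only its `window` most
        # similar other classes and mask out the rest (and itself).
        sim = sim_matrix.clone().fill_diagonal_(float("-inf"))
        top = sim.topk(window, dim=1)
        win_sim = torch.zeros_like(sim_matrix).scatter_(1, top.indices, top.values)

        # SWR (illustrative): combine inverse class frequency with the
        # head-class "pressure" inside each class's similarity window.
        pressure = win_sim @ class_freq.float()            # (C,)
        pressure = pressure / pressure.max()
        weights = (1.0 / class_freq.float()) * (1.0 + pressure)
        weights = weights / weights.sum() * C              # keep mean weight ~1

        # SWM (illustrative): classes under more similarity pressure get a
        # larger margin, subtracted from their target logit before softmax.
        margins = margin_scale * pressure
        adj = logits.clone()
        idx = torch.arange(targets.size(0), device=logits.device)
        adj[idx, targets] = adj[idx, targets] - margins[targets]

        return F.cross_entropy(adj, targets, weight=weights.to(logits.device))

In this sketch, the per-class margin and weight both grow with how much head-class mass falls inside a class's similarity window, which matches the abstract's stated goal of penalizing confusions between visually similar head and tail classes; the specific functional forms are placeholders.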
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (e.g., speech processing, computer vision, NLP)