Unleashing the Potential of Classification with Semantic Similarity for Deep Imbalanced Regression

ICLR 2025 Conference Submission 1944 Authors

19 Sept 2024 (modified: 13 Oct 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: Deep imbalanced regression
Abstract: Recent studies have empirically demonstrated the feasibility of incorporating classification regularizers into Deep Imbalanced Regression (DIR). By segmenting the dataset into distinct groups and applying classification regularization over these groups, previous works primarily focus on capturing the ordinal characteristics of DIR in the feature space. However, this direct integration leads the model to merely learn discriminative features, treating DIR as a classification task and lacking an end-to-end solution. As a result, data similarity, another aspect of data continuity, has been ignored: label similarity across data in DIR also implies feature similarity of the data. The effectiveness of these classification-based approaches is therefore significantly limited in DIR. To tackle this problem, we investigate the similarity characteristics of the data in DIR to unleash the potential of classification in helping DIR. Specifically, we first decompose the imbalance of the dataset into a global-level cross-group imbalance and an instance-level in-group imbalance. Then, to fully exploit the potential of classification under the DIR task, we propose an asymmetric soft-labeling strategy that captures global data similarity to handle the cross-group imbalance. Meanwhile, we introduce instance label distribution smoothing to address the in-group imbalance with a multi-head regressor. More importantly, we link the group classification to the learning of the multi-head regressor, which further harnesses classification to solve DIR end-to-end. Extensive experiments on real-world datasets validate the effectiveness of our proposed method.
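The abstract gives no implementation details, so the following is a minimal PyTorch sketch of how the described pieces could fit together, assuming the continuous label range is binned into groups. The class and function names, the geometric decay rates, and the soft gating of regression heads by group probabilities are all illustrative assumptions, not the authors' actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def asymmetric_soft_targets(group_idx: torch.Tensor, num_groups: int,
                            decay_left: float = 0.3,
                            decay_right: float = 0.6) -> torch.Tensor:
    """Hypothetical asymmetric soft labels for the group classifier.

    Probability mass decays geometrically with distance from the true
    group, at a different rate on each side (decay_left / decay_right
    are illustrative parameters), so nearby groups share similarity.
    """
    g = torch.arange(num_groups).unsqueeze(0)        # (1, G)
    d = g - group_idx.unsqueeze(1)                   # (B, G) signed distance
    mass = torch.where(d < 0,
                       decay_left ** d.abs().float(),
                       decay_right ** d.abs().float())
    return mass / mass.sum(dim=1, keepdim=True)      # rows sum to 1

class GroupedRegressor(nn.Module):
    """Sketch of a multi-head regressor gated by group classification,
    so the classifier guides regression end-to-end."""

    def __init__(self, feat_dim: int, num_groups: int):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_groups)          # cross-group branch
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, 1) for _ in range(num_groups)]    # one head per group
        )

    def forward(self, feats: torch.Tensor):
        logits = self.classifier(feats)                            # (B, G)
        probs = F.softmax(logits, dim=-1)
        preds = torch.cat([h(feats) for h in self.heads], dim=-1)  # (B, G)
        y_hat = (probs * preds).sum(dim=-1)                        # probability-weighted prediction
        return logits, y_hat

# Usage on random features (dimensions are arbitrary):
feats = torch.randn(8, 128)
groups = torch.randint(0, 5, (8,))
model = GroupedRegressor(feat_dim=128, num_groups=5)
logits, y_hat = model(feats)
cls_loss = F.kl_div(F.log_softmax(logits, dim=-1),
                    asymmetric_soft_targets(groups, num_groups=5),
                    reduction="batchmean")
```

For the in-group imbalance, the instance label distribution smoothing mentioned in the abstract would presumably, in the spirit of standard LDS for imbalanced regression, reweight each instance's regression loss by the inverse of a kernel-smoothed label density within its group; that reweighting is omitted above for brevity.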
Supplementary Material: pdf
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1944