Spectral Contrastive Regression

21 Sept 2023 (modified: 11 Feb 2024) Submitted to ICLR 2024
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Metric Learning, Out-of-Distribution Generalization, In-Distribution Generalization, Regression
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: While several techniques have been proposed to enhance the generalization of deep learning models for classification problems, limited research has been conducted on improving generalization for regression tasks. This is primarily due to the continuous nature of regression labels, which makes it challenging to directly apply classification-based techniques to regression. Conversely, existing regression methods overlook feature-level generalization and primarily focus on data augmentation via linear interpolation, which may not be an effective way to synthesize data for regression. In this paper, we introduce a novel generalization method for regression tasks based on the metric-learning assumption that the distance between features should be proportional to the distance between labels. Unlike previous approaches that only predict the scale of this proportion and disregard its variation among samples, we argue that the proportion is not constant and can be defined as a mapping function. We further propose minimizing the error of this function and stabilizing its fluctuating behavior by smoothing out its variations. The t-SNE visualization of the embedding space demonstrates that our proposed loss function produces a more discriminative pattern with reduced variance. To enhance Out-of-Distribution (OOD) generalization, we leverage a property of the spectral norm (i.e., by sub-multiplicativity, the Frobenius norm of the output can be bounded in terms of the spectral norm of the feature matrix) and align the maximum singular value of the feature matrices across different domains. Experimental results on the MPI3D benchmark dataset reveal that aligning the spectral norms significantly improves the unstable performance on OOD data. We conduct experiments on eight benchmark datasets for domain generalization in regression, and our method consistently outperforms state-of-the-art approaches in the majority of cases. Our code is available in an anonymous repository and will be made publicly available upon acceptance of the paper: https://github.com/workerasd/SCR
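
The abstract describes two loss components: a proportionality term between pairwise feature and label distances whose fluctuation is smoothed, and a spectral-norm alignment term that matches the largest singular value of per-domain feature matrices. The sketch below is a minimal PyTorch illustration of these two ideas, not the authors' released implementation; the function names, the variance-based smoothing term, and the mean-centered alignment penalty are assumptions made for illustration only.

```python
import torch

def proportionality_ratios(features, labels, eps=1e-8):
    """Pairwise ratio between feature distances and label distances.

    Illustrates the metric-learning assumption that feature distances should be
    proportional to label distances; the ratio is computed per sample pair so
    its variation can be penalized rather than assumed constant.
    """
    d_feat = torch.cdist(features, features)                       # (n, n) feature distances
    lab = labels.view(len(labels), -1).float()
    d_lab = torch.cdist(lab, lab)                                   # (n, n) label distances
    mask = d_lab > eps                                              # skip pairs with identical labels
    return d_feat[mask] / d_lab[mask]

def smoothing_loss(features, labels):
    """One possible smoothing term: penalize fluctuation of the ratio across pairs."""
    return proportionality_ratios(features, labels).var()

def spectral_alignment_loss(domain_features):
    """Align the largest singular value (spectral norm) of per-domain feature matrices
    by penalizing deviation from their mean (an assumed form of the alignment penalty)."""
    sigmas = torch.stack([torch.linalg.matrix_norm(f, ord=2) for f in domain_features])
    return ((sigmas - sigmas.mean()) ** 2).mean()

# Example usage with random data standing in for encoder outputs from two domains.
if __name__ == "__main__":
    feats_a, labels_a = torch.randn(32, 128), torch.randn(32)
    feats_b, labels_b = torch.randn(32, 128), torch.randn(32)
    loss = (smoothing_loss(feats_a, labels_a)
            + smoothing_loss(feats_b, labels_b)
            + spectral_alignment_loss([feats_a, feats_b]))
    print(loss.item())
```

In practice such terms would be added to the task's regression loss with weighting coefficients; the exact weighting and the form of the mapping-function error are specified in the paper, not here.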
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3117