Supervised Dimension Contrastive Learning

18 Sept 2025 (modified: 26 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Supervised Representation Learning, Dimension Contrastive Learning
Abstract: Representation learning is a fundamental task in machine learning, with the learned representations often serving as the backbone for downstream applications. While self-supervised learning has demonstrated strong generalization by maximizing representation diversity, it lacks explicit semantic structure. In contrast, supervised contrastive learning improves in-domain performance by clustering same-class representations, but it often limits diversity and thereby reduces out-of-domain generalization. To address this, we redefine supervised representation learning from a mutual information perspective, highlighting the need to balance representation diversity and class relevance. We propose Supervised Dimension Contrastive Learning (SupDCL), a comprehensive framework that optimizes this balance through three key components: (1) a decorrelation loss to enhance representation diversity, (2) an orthogonal loss to remove redundant information, and (3) a class correlation loss to strengthen class alignment. SupDCL achieves state-of-the-art generalization on ImageNet-1K and 10 downstream tasks, bridging the gap between self-supervised and supervised learning. By optimizing mutual information, it provides a principled approach to supervised representation learning, yielding representations that are both robust and transferable.
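The abstract only names the three loss terms, so the following is a minimal sketch of what such dimension-contrastive objectives could look like, assuming the decorrelation loss penalizes off-diagonal entries of the feature cross-correlation matrix (in the style of Barlow Twins-type dimension contrastive methods), the orthogonal loss decorrelates an assumed class-relevant dimension block from the remaining dimensions, and the class correlation loss aligns the class-relevant block with standardized one-hot labels. The function name, the dimension split, and all formulations are illustrative assumptions, not the paper's actual definitions.

```python
import torch
import torch.nn.functional as F

def supdcl_losses(z, labels, num_classes, class_dims):
    """Hypothetical sketch of three SupDCL-style loss terms (formulations assumed).

    z: (N, D) batch of representations.
    labels: (N,) integer class labels.
    class_dims: number of leading dimensions assumed to carry class information.
    """
    N, D = z.shape
    # Standardize each dimension across the batch.
    z_norm = (z - z.mean(dim=0)) / (z.std(dim=0) + 1e-5)

    # (1) Decorrelation loss: push off-diagonal entries of the D x D
    #     cross-correlation matrix toward zero to increase dimension diversity.
    corr = (z_norm.T @ z_norm) / N
    off_diag = corr - torch.diag(torch.diag(corr))
    decorrelation_loss = (off_diag ** 2).sum() / (D * (D - 1))

    # Split dimensions into an assumed class-relevant block and the rest.
    z_cls, z_rest = z_norm[:, :class_dims], z_norm[:, class_dims:]

    # (2) Orthogonal loss: penalize cross-correlation between the two blocks
    #     so they carry non-redundant information.
    cross = (z_cls.T @ z_rest) / N
    orthogonal_loss = (cross ** 2).mean()

    # (3) Class correlation loss: encourage class-relevant dimensions to
    #     correlate with (standardized) one-hot label targets.
    y = F.one_hot(labels, num_classes).float()
    y = (y - y.mean(dim=0)) / (y.std(dim=0) + 1e-5)
    class_corr = (z_cls.T @ y) / N  # (class_dims, num_classes)
    class_correlation_loss = -(class_corr ** 2).mean()

    return decorrelation_loss, orthogonal_loss, class_correlation_loss
```

In a training loop these terms would presumably be combined with scalar weights, e.g. `loss = w1 * decor + w2 * orth + w3 * cls_corr`; the weights and the dimension split are likewise assumptions, not values from the paper.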
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 11724