Keywords: Contrastive Learning, Representation Learning, Self-Supervised Learning
Abstract: Typical contrastive self-supervised learning methods apply inter-image contrast to post-projector embeddings, thereby indirectly encouraging the pre-projector representations' invariance to several augmentation operators.
While effective, these methods do not account for the inherent difference between semantics-altering augmentation operators (such as cropping and cutout) and semantics-preserving ones (such as resizing, flipping, and color distortion), and thus lack an explicit mechanism to encourage distinguishable representations for semantically different contents within the same image.
We show, both theoretically and empirically, that these issues can harm the generalizability of the learned representations in downstream tasks.
To address these issues, we propose Direct Intra-image Contrastive Regularization (DICR), a plug-and-play regularization method that directly applies intra-image contrast to pre-projector representations.
Empirical results show that DICR can significantly enhance the generalizability of existing methods in downstream tasks, and validate the crucial role of semantic content distinguishability in the generalizable performance of contrastive learning.
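To make the core idea concrete, the following is a minimal NumPy sketch of an intra-image InfoNCE-style regularizer applied to patch-level representations of a single image. All names and design choices here (pair layout, temperature, cosine similarity) are illustrative assumptions, not the authors' actual DICR implementation:

```python
import numpy as np

def intra_image_contrastive_loss(reps, temperature=0.1):
    """InfoNCE-style contrastive loss over patch representations of ONE image.

    reps: (2N, D) array of pre-projector patch representations, where rows
    2i and 2i+1 are two augmented views of the same patch i. Patches from
    different image regions act as negatives, encouraging distinguishable
    representations for semantically different contents.

    Hypothetical simplification for illustration; not the authors' code.
    """
    # Cosine similarity via L2-normalized representations.
    reps = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    sim = reps @ reps.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity from the softmax

    n = reps.shape[0]
    pos = np.arange(n) ^ 1  # partner index: 0<->1, 2<->3, ...
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    # Negative log-probability of picking the positive partner.
    loss = -(sim[np.arange(n), pos] - logsumexp)
    return loss.mean()
```

In this sketch the loss is small when the two views of each patch agree while different patches stay dissimilar, which is the intra-image distinguishability the abstract argues is missing from purely inter-image objectives.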
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2037