Impact of Language Guidance: A Reproducibility Study

TMLR Paper 4327 Authors

22 Feb 2025 (modified: 02 Jun 2025) · Rejected by TMLR · CC BY 4.0
Abstract: Modern deep-learning architectures need large amounts of data to produce state-of-the-art results. Annotating such huge datasets is time-consuming, expensive, and prone to human error. Recent advances in self-supervised learning allow us to train large models without explicit annotation. Contrastive learning is a popular paradigm in self-supervised learning. Recent works such as SimCLR and CLIP rely on image augmentations or on directly minimizing a cross-modal loss between images and text. Banani et al. (2023) propose to use language guidance to sample view pairs. They claim that language captures conceptual similarity better, eliminating the effects of visual variability. We reproduce their experiments to verify their claims. We find that their dataset, RedCaps, contains low-quality captions. We use an off-the-shelf image captioning model, BLIP-2, to replace these captions, which improves performance. We also devise a new metric, based on interpretability methods, to evaluate the semantic capabilities of self-supervised models.
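The abstract mentions regenerating RedCaps captions with an off-the-shelf BLIP-2 model. Below is a minimal, hedged sketch of how such caption replacement could be done with the HuggingFace `transformers` BLIP-2 interface; the specific checkpoint (`Salesforce/blip2-opt-2.7b`) and decoding settings are assumptions, as the paper's abstract does not state which configuration the authors used.

```python
# Sketch: regenerate captions for a set of images with BLIP-2.
# Assumptions: HuggingFace transformers with BLIP-2 support is installed,
# and the checkpoint/decoding parameters below stand in for whatever the
# authors actually used (not specified in the abstract).
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16 if device == "cuda" else torch.float32
).to(device)
model.eval()

def caption_image(path: str, max_new_tokens: int = 30) -> str:
    """Generate a single caption for the image at `path`."""
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt").to(device, model.dtype)
    with torch.no_grad():
        generated_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()

# Example usage: replace a low-quality RedCaps caption with a generated one.
# new_caption = caption_image("redcaps/images/example.jpg")
```

The generated captions would then substitute for the original RedCaps captions when sampling language-guided view pairs, per the reproduction described in the abstract.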
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Chinmay_Hegde1
Submission Number: 4327