Variational Adapter for Cross-modal Similarity Representation

06 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Variational Adapter, cross-modal similarity representation, cross-modal retrieval, domain generalization
Abstract: The core of vision-language models lies in measuring cross-modal similarity within a unified representation space. However, most image-text matching and multi-class image classification datasets lack fine-grained cross-modal matching annotations, compressing the continuous similarity space into binary classification boundaries. This compression induces false negatives and significantly impairs generalization on cross-modal tasks. While prior work has attempted to mitigate this by modeling intra-modal ambiguity, it often overlooks inherent annotation flaws, leading to suboptimal uncertainty allocation. To address these challenges, we propose a Variational Adapter for Cross-modal Similarity Representation (VACSR), which reformulates image-text matching under fine-grained semantic scarcity as a variational inference problem. It constructs a latent space for cross-modal similarity and applies regularization to mitigate overfitting to binary annotations. Additionally, we introduce a distributional optimization loss that eliminates erroneous gradients caused by false negative samples. We validate the effectiveness of VACSR on image-text retrieval using the COCO Caption dataset and two extended datasets, CxC and ECCV Caption. Furthermore, we conduct comprehensive out-of-distribution evaluations, including domain generalization on ImageNet and its variants as well as base-to-novel generalization across 11 datasets, demonstrating VACSR's robust generalization across a wide range of real-world scenarios.
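The submission itself does not include code, but the abstract's central idea, treating cross-modal similarity as a latent random variable inferred variationally rather than a point estimate, and regularizing it away from the hard 0/1 annotation boundary, can be sketched as below. Everything here is an illustrative assumption: the module name `VariationalAdapter`, the MLP architecture, the standard-normal prior, and the KL weight `beta` are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalAdapter(nn.Module):
    """Illustrative sketch: maps an (image, text) embedding pair to a
    Gaussian over a latent similarity score instead of a single scalar.
    Architecture and names are assumptions, not the authors' code."""

    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        # Small MLP over the concatenated pair representation.
        self.net = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU())
        self.mu_head = nn.Linear(hidden, 1)      # mean of latent similarity
        self.logvar_head = nn.Linear(hidden, 1)  # log-variance (uncertainty)

    def forward(self, img_emb, txt_emb):
        h = self.net(torch.cat([img_emb, txt_emb], dim=-1))
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        # Reparameterization trick: sample a similarity from N(mu, sigma^2).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

def variational_matching_loss(z, mu, logvar, labels, beta=1e-3):
    """Binary matching loss on the sampled similarity plus a KL term
    toward a standard-normal prior, which discourages the latent space
    from collapsing onto the binary annotation boundary."""
    bce = F.binary_cross_entropy_with_logits(z.squeeze(-1), labels.float())
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + beta * kl
```

In the full method, the plain binary cross-entropy term would presumably be replaced by the paper's distributional optimization loss, which operates on the predicted distribution so that gradients from likely false negatives are suppressed; the sketch above only illustrates the variational formulation.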
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 2572