Keywords: Explainability, Attribution, Dual Encoder, Similarity, Vision-Language, CLIP
TL;DR: We derive a method to attribute similarity predictions of dual encoders onto feature-pair interactions of their inputs and show detailed correspondences between caption parts and image regions in CLIP models.
Abstract: Dual encoder architectures like CLIP models map two types of inputs into a shared embedding space and learn similarities between them.
However, it is not well understood how such models compare their two inputs.
We first derive a method to attribute predictions of any differentiable dual encoder onto feature-pair interactions between its inputs.
Second, we apply our method to CLIP models and show that they learn fine-grained correspondences between parts of captions and regions in images. They match objects across input modalities and also account for mismatches. However, this visual-linguistic grounding ability varies heavily between object classes, depends on the training data distribution, and improves substantially with in-domain training.
Using our method, we can identify individual failure cases and knowledge gaps about specific object classes.
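The abstract does not spell out the derivation, but a minimal sketch of one way to attribute a differentiable similarity score onto feature-pair interactions is a second-order (Hessian-times-features) Taylor attribution. The helper name pairwise_interaction_attribution and the cosine-similarity scoring function below are assumptions for illustration, not the authors' exact method.

import torch

def pairwise_interaction_attribution(image_feats, text_feats, similarity_fn):
    """Attribute a scalar similarity s(x, y) onto feature pairs (i, j) via
    x_i * y_j * d^2 s / (dx_i dy_j). Illustrative sketch only; the paper's
    exact derivation may differ."""
    x = image_feats.detach().flatten().requires_grad_(True)
    y = text_feats.detach().flatten().requires_grad_(True)

    score = similarity_fn(x, y)  # scalar similarity, e.g. cosine similarity

    # Gradient w.r.t. the image features, keeping the graph so we can
    # differentiate once more w.r.t. the text features.
    grad_x = torch.autograd.grad(score, x, create_graph=True)[0]

    # Mixed second derivatives, one row per image feature: shape (len(x), len(y)).
    hessian_xy = torch.stack([
        torch.autograd.grad(g, y, retain_graph=True)[0]
        for g in grad_x
    ])

    # Scale by the feature values to obtain per-pair attributions.
    return hessian_xy * x[:, None] * y[None, :]

# Example usage with embedding vectors from any dual encoder:
# attributions = pairwise_interaction_attribution(
#     img_emb, txt_emb,
#     lambda a, b: torch.nn.functional.cosine_similarity(a, b, dim=0),
# )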
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10386