Distilling Knowledge from Text-to-Image Generative Models Improves Visio-Linguistic Reasoning in CLIP
Abstract: Image-text contrastive models like CLIP have wide applications in zero-shot classification, image-text retrieval, and transfer learning. However, they often struggle on compositional visio-linguistic tasks (e.g., attribute binding or object relationships), where their performance can be no better than random chance. To address this, we introduce SDS-CLIP, a lightweight and sample-efficient distillation method that enhances CLIP's compositional visio-linguistic reasoning. Our approach fine-tunes CLIP with a distillation objective borrowed from large text-to-image generative models such as Stable Diffusion, which are known for their strong visio-linguistic reasoning abilities. On the challenging Winoground benchmark, SDS-CLIP improves the visio-linguistic performance of various CLIP models by up to 7%, and on the ARO dataset it boosts performance by up to 3%. This work underscores the potential of well-designed distillation objectives from generative models to equip contrastive image-text models with improved visio-linguistic reasoning capabilities.
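The abstract describes the objective only at a high level: CLIP's usual contrastive loss is augmented with a score-distillation (SDS) term in which a frozen text-to-image denoiser scores a latent derived from CLIP's image embedding, and the denoising error is backpropagated into CLIP. The PyTorch sketch below illustrates what such a combined objective could look like; the `denoiser`, `latent_proj`, `sds_weight`, and the toy noise schedule are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of an SDS-style distillation term added to CLIP's
# contrastive loss. `denoiser` (a frozen epsilon-prediction network) and
# `latent_proj` (a learned map from CLIP embeddings into the denoiser's
# latent space) are hypothetical stand-ins for the paper's components.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Standard symmetric InfoNCE loss used by CLIP."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    labels = torch.arange(logits.shape[0], device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

def sds_distillation_loss(img_emb, cond_emb, denoiser, latent_proj,
                          num_timesteps=1000):
    """Score-distillation-style loss: the frozen denoiser's error on a
    noised latent derived from CLIP's image embedding is the training
    signal that flows back into CLIP (and the projection head)."""
    z = latent_proj(img_emb)  # map CLIP embedding into the latent space
    t = torch.randint(0, num_timesteps, (z.shape[0],), device=z.device)
    noise = torch.randn_like(z)
    # Toy cosine noise schedule; a real setup would reuse the diffusion
    # model's own schedule.
    alpha_bar = torch.cos(0.5 * torch.pi * t.float() / num_timesteps) ** 2
    z_t = (alpha_bar.sqrt().unsqueeze(-1) * z +
           (1.0 - alpha_bar).sqrt().unsqueeze(-1) * noise)
    pred_noise = denoiser(z_t, t, cond_emb)  # frozen text-to-image denoiser
    return F.mse_loss(pred_noise, noise)

def sds_clip_loss(img_emb, txt_emb, denoiser, latent_proj, sds_weight=1.0):
    """Combined fine-tuning objective: contrastive + weighted SDS term."""
    return (clip_contrastive_loss(img_emb, txt_emb) +
            sds_weight * sds_distillation_loss(img_emb, txt_emb,
                                               denoiser, latent_proj))
```

Under this reading, the generative model is never updated; it acts purely as a teacher whose gradient with respect to the projected latent tells CLIP how to reshape its embeddings, which is what makes the method lightweight and sample-efficient.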
Paper Type: short
Research Area: Multimodality and Language Grounding to Vision, Robotics and Beyond
Contribution Types: Model analysis & interpretability
Languages Studied: English
Preprint Status: There is a non-anonymous preprint (URL specified in the next question).
A1: yes
A1 Elaboration For Yes Or No: 7
A2: yes
A2 Elaboration For Yes Or No: 8
A3: yes
A3 Elaboration For Yes Or No: Abstract and Section 1
B: yes
B1: yes
B1 Elaboration For Yes Or No: 5
B2: n/a
B3: n/a
B4: n/a
B5: n/a
B6: yes
B6 Elaboration For Yes Or No: A
C: yes
C1: yes
C2: yes
C2 Elaboration For Yes Or No: This information is scattered throughout the paper
C3: n/a
C4: n/a
D: no
D1: n/a
D2: n/a
D3: n/a
D4: n/a
D5: n/a
E: yes
E1: yes
E1 Elaboration For Yes Or No: Appendix