CLIP meets Model Zoo Experts: Pseudo-Supervision for Visual Enhancement

TMLR Paper 1880 Authors

29 Nov 2023 (modified: 02 May 2024) · Decision pending for TMLR
Abstract: Contrastive Language-Image Pretraining (CLIP) is a standard method for training vision-language models. While CLIP is scalable, promptable, and robust to distribution shifts on image classification tasks, it lacks object localization capabilities. This paper studies the following question: can we augment CLIP training with task-specific vision models from model zoos to improve its visual representations? To this end, we leverage open-source task-specific vision models to generate pseudo-labels for an uncurated and noisy image-text dataset. We then train CLIP models on these pseudo-labels in addition to the contrastive training on image and text pairs. This simple setup yields substantial improvements of up to 16.3% across different vision tasks, including segmentation, detection, depth estimation, and surface normal estimation. Importantly, these enhancements are achieved without compromising CLIP's existing capabilities, including its proficiency in promptable zero-shot classification.
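As a rough illustration of the setup described in the abstract, the overall objective can be read as the CLIP contrastive loss plus a weighted sum of task losses computed against the expert pseudo-labels (the weight corresponds to the $\lambda$ discussed in Appendix A). The sketch below is an assumption about how such a training step could look, not the authors' implementation; `clip_model`, `task_heads`, `head.loss`, and `lambda_weight` are hypothetical names.

```python
# A minimal sketch (not the authors' code) of joint training on image-text
# contrastive pairs and expert pseudo-labels. All names below (clip_model,
# task_heads, head.loss, lambda_weight) are illustrative assumptions.
import torch
import torch.nn.functional as F

def training_step(clip_model, task_heads, images, texts, pseudo_labels, lambda_weight=1.0):
    """One joint step: CLIP contrastive loss + weighted pseudo-label task losses."""
    # Standard CLIP contrastive loss over the image-text batch.
    image_feats, text_feats, logit_scale = clip_model(images, texts)
    logits = logit_scale * image_feats @ text_feats.t()
    targets = torch.arange(images.size(0), device=logits.device)
    contrastive_loss = 0.5 * (F.cross_entropy(logits, targets)
                              + F.cross_entropy(logits.t(), targets))

    # Auxiliary supervision from model-zoo experts: each task head predicts the
    # expert's pseudo-labels (e.g., segmentation, detection, depth, normals).
    # In practice, dense-prediction heads would consume patch-level image
    # features rather than the pooled embedding used here for brevity.
    aux_loss = sum(head.loss(head(image_feats), pseudo_labels[task])
                   for task, head in task_heads.items())

    return contrastive_loss + lambda_weight * aux_loss
```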
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We thank the reviewers for their suggestions. We have made the following main changes to the paper:
* Included the fine-tuning details on CC3M in paragraph 1 of Section 4.2.
* Updated Table 4 with zero-shot classification on the 38 tasks used in OpenCLIP for evaluating CLIP models.
* Added Table 7 and a paragraph in Section 5.4 that study the role of joint multi-tasking with pseudo-labels and manual labels.
* Updated Appendix A to include how the value of $\lambda$ was determined.
* Added Appendix B to provide supplementary results on ADE20K segmentation, focusing on the role of experts.
Assigned Action Editor: ~Marcus_Rohrbach1
Submission Number: 1880