Prompt Tuning Vision Language Models with Margin Regularizer for Few-Shot Learning under Distribution Shifts

Published: 02 Jan 2025, Last Modified: 02 Jan 2025. Accepted by TMLR. License: CC BY 4.0
Abstract: Recently, Vision-Language foundation models like CLIP and ALIGN, which are pre-trained on large-scale data, have shown remarkable zero-shot generalization to diverse datasets with different classes and even domains. In this work, we take a step further and analyze whether these models can be adapted to target datasets with very different distributions and classes compared to what they were trained on, using only a few labeled examples from the target dataset. In such scenarios, fine-tuning large pretrained models is challenging due to overfitting and loss of generalization, and has not been well explored in prior literature. Since the pre-training data of such models is unavailable, it is difficult to anticipate their performance on various downstream datasets. First, we try to answer the question: given a target dataset with a few labeled examples, can we estimate whether further fine-tuning will improve performance over zero-shot evaluation? We approach this by analyzing the common vision-language embedding space. Based on this analysis, we propose a novel prompt-tuning method, PromptMargin, for adapting such large-scale VLMs directly on the few target samples. PromptMargin tunes both the text and visual prompts for this task, and has two main modules: (1) a selective augmentation strategy that complements the few training samples in each task; and (2) a novel Multimodal Margin Regularizer that increases the inter-class margin for improved class discrimination, ensuring robust training in the presence of unfamiliar class names. Extensive experiments and analysis across fifteen target benchmark datasets, with varying degrees of distribution shift from natural images, show the effectiveness of the proposed framework over existing state-of-the-art approaches applied to this setting.
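The paper's exact Multimodal Margin Regularizer is not reproduced on this page; the sketch below is only a generic illustration of the idea the abstract describes, i.e. penalizing class embeddings that sit too close together so that the inter-class margin grows. The function name `margin_regularizer`, the hinge-on-cosine-similarity form, and the margin value are all assumptions for illustration, not the authors' formulation (see the linked repository for the actual implementation).

```python
import torch
import torch.nn.functional as F

def margin_regularizer(class_embeds: torch.Tensor, margin: float = 0.5) -> torch.Tensor:
    """Generic inter-class margin penalty (illustrative sketch, NOT the
    paper's exact regularizer): pushes the pairwise cosine similarity
    between class embeddings below (1 - margin).

    class_embeds: (C, d) tensor, one embedding per class (e.g. tuned
                  text-prompt features for each class name).
    """
    z = F.normalize(class_embeds, dim=-1)        # unit-normalize each class embedding
    sim = z @ z.t()                              # (C, C) pairwise cosine similarities
    c = sim.size(0)
    off_diag = sim[~torch.eye(c, dtype=torch.bool, device=sim.device)]
    # Hinge: only pairs more similar than the threshold contribute.
    return F.relu(off_diag - (1.0 - margin)).mean()

# Hypothetical usage: add the penalty to the task loss during prompt tuning.
# total_loss = ce_loss + lam * margin_regularizer(text_features)
```

In a sketch like this, the regularizer is simply summed with the few-shot classification loss, so gradient updates to the prompts simultaneously fit the labeled samples and spread the class embeddings apart.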
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We thank all the Reviewers and the Action Editor for recommending acceptance of our work. We have uploaded the camera-ready version of the paper, with the following changes:
- Deanonymized the submission and added a code link.
- Fixed minor typos.
- Made the minor table-formatting changes suggested by Reviewer 8apX.
- Incorporated Table A (from the rebuttal to Reviewer DSBP) and the Pearson correlation analysis (from the rebuttal to Reviewer 8apX) into the main text.
- Incorporated all other rebuttal material (hyperparameter sensitivity analysis, t-SNE plots, Selective Augmentation analysis) into the Appendix as separate sections.
We once again thank the Reviewers and the Action Editor for their insightful comments, which have greatly improved the quality of our work.
Code: https://github.com/debarshigit/PromptMargin
Assigned Action Editor: ~Eleni_Triantafillou1
Submission Number: 3435