Learning Through Consistency for Prompt Tuning

Published: 01 Nov 2023, Last Modified: 12 Dec 2023
Venue: R0-FoMo (Spotlight)
Keywords: Zero-shot Learning, Few-shot Learning, Prompt Learning, Vision-language Model
TL;DR: We propose a new prompt-tuning method that enforces a consistency constraint to learn a new task in the few-shot setting without losing the zero-shot generalizability of the foundation model.
Abstract: We propose Consistency-guided Prompt learning (CoPrompt), a new fine-tuning method for vision-language models that addresses the challenge of improving the generalization capability of large foundation models while fine-tuning them on downstream tasks in a few-shot setting. The basic idea of CoPrompt is to enforce a consistency constraint between the predictions of the trainable and pre-trained models to prevent overfitting on the downstream task. Additionally, we introduce the following two components into our consistency constraint to further boost the performance: enforcing consistency on two perturbed inputs and combining two dominant tuning paradigms, prompting and adapters. Enforcing consistency on perturbed inputs further regularizes the consistency constraint, effectively improving generalization, while tuning additional parameters with prompts and adapters improves the performance on downstream tasks. Extensive experiments show that CoPrompt outperforms existing methods on a range of evaluation suites, including base-to-novel generalization, domain generalization, and cross-dataset evaluation tasks. On the generalization task, CoPrompt improves the state-of-the-art by 2.09% on the zero-shot task and 1.93% on the harmonic mean over 11 recognition datasets. Detailed ablation studies show the effectiveness of each of the components in CoPrompt.
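The sketch below illustrates the consistency idea described in the abstract: a prompt/adapter-tuned branch is trained with the usual few-shot classification loss while its features are pushed to agree with those of the frozen pre-trained model computed on a perturbed view of the same input. This is a minimal, hypothetical rendering, not the authors' released code; the model interfaces, the cosine-distance consistency term, and the `lambda_consistency` weight are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def coprompt_style_loss(trainable_model, frozen_model, images, aug_images,
                        text_features_trainable, labels,
                        lambda_consistency=1.0):
    """Hypothetical sketch of a consistency-guided prompt-tuning objective.

    `trainable_model` is the prompt/adapter-tuned image encoder;
    `frozen_model` is the pre-trained (zero-shot) encoder kept fixed.
    Two perturbed views of each image are encoded, and the trainable
    branch is regularized to stay close to the frozen branch.
    """
    # Few-shot classification loss on the trainable branch (CLIP-style logits).
    img_feat = F.normalize(trainable_model(images), dim=-1)
    logits = img_feat @ F.normalize(text_features_trainable, dim=-1).t()
    ce_loss = F.cross_entropy(logits / 0.07, labels)  # 0.07 ~ CLIP temperature

    # Consistency: trainable branch on one view vs. frozen branch on another.
    with torch.no_grad():  # the pre-trained model only provides the target
        frozen_feat = F.normalize(frozen_model(aug_images), dim=-1)
    consistency = 1.0 - F.cosine_similarity(img_feat, frozen_feat, dim=-1).mean()

    return ce_loss + lambda_consistency * consistency
```

Using cosine distance between normalized features is one plausible instantiation of the consistency constraint; the paper's exact formulation (e.g., whether it is applied to image features, text features, or both) should be taken from the full text.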
Submission Number: 4