Abstract: Prompt learning has demonstrated remarkable performance in tuning Vision-Language Models (VLMs) for various downstream tasks. Recent studies have shown the effectiveness of prompt distillation in transferring distribution knowledge from VLM teachers to students. However, existing prompt knowledge distillation methods are limited in the diversity of the knowledge they transfer, focusing solely on positive probabilities. In this paper, we propose a dual prompt distillation (DPD) method, which teaches the student from both positive and negative aspects. Specifically, during the first phase of teacher training, the positive and negative prompts are jointly optimized by constructing complementary probability distribution signals. In the second, distillation phase, the teacher guides the student with dual prompts: positive prompts to select the correct category and negative prompts to exclude incorrect ones. Extensive experimental results across 11 datasets demonstrate that the proposed DPD method either surpasses or matches the performance of existing state-of-the-art (SOTA) methods in both few-shot learning and domain generalization tasks while maintaining competitive computational efficiency. The corresponding code is available at https://github.com/wdinancy/DPD.
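The abstract's dual-prompt idea can be illustrated with a minimal sketch. The loss below is an assumption for illustration only (the paper's exact formulation, weighting, and prompt parameterization may differ): a positive KL term aligns the student with the positive teacher distribution, and a negative KL term compares the negative teacher distribution (assumed to put high mass on classes to exclude) against the complement of the student distribution. All function names here (`dual_distill_loss`, `complementary`) are hypothetical.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_div(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def complementary(probs):
    """Complement each class probability and renormalize,
    yielding high mass on the classes to be excluded."""
    comp = [1.0 - p for p in probs]
    s = sum(comp)
    return [c / s for c in comp]

def dual_distill_loss(student_logits, teacher_pos_logits, teacher_neg_logits,
                      alpha=0.5):
    """Hypothetical dual distillation loss: a positive term matching the
    student to the positive teacher, and a negative term matching the
    complement of the student distribution to the negative teacher."""
    ps = softmax(student_logits)
    pt = softmax(teacher_pos_logits)
    qt = softmax(teacher_neg_logits)   # assumed: high where classes are wrong
    qs = complementary(ps)
    return alpha * kl_div(pt, ps) + (1.0 - alpha) * kl_div(qt, qs)
```

When the student already matches both teacher signals, the loss is near zero; any mismatch in either the positive or the negative distribution increases it.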
External IDs: dblp:conf/ictai/WangWX25