Keywords: Prompt-tuning, knowledge distillation, transfer learning, model adaptation
Abstract: Prompt tuning is a parameter-efficient fine-tuning method designed to address the challenge of adapting large pre-trained models to downstream tasks. However, existing prompt tuning methods exhibit several limitations: (1) difficulty in transferring knowledge when significant domain gaps exist between source and target domains; (2) a tendency to forget the general knowledge contained in the source model; and (3) susceptibility to overfitting when target data is limited. To address these issues, we adopt the concept of multi-source domain transfer and propose **PCPrompt**, a **P**rogressive **C**onfidence-weighted Multi-source **Prompt** Distillation method for visual prompt tuning in the multi-source, few-shot learning setting. By combining a confidence-weighted mechanism with knowledge distillation, our approach integrates knowledge from multiple teachers according to their respective contributions to the target task. Furthermore, we design dynamic weighting and progressive decay strategies that provide the student with coarse-to-fine guidance throughout training. Experimental results demonstrate that our method…
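To make the high-level mechanism in the abstract concrete, the following is a minimal sketch (not the authors' released code) of a confidence-weighted multi-teacher distillation loss with a progressive decay schedule. All names and choices here (per-teacher logits, mean top-1 confidence as the weighting signal, KL-based distillation, a linear decay `progressive_alpha`) are illustrative assumptions about how such a scheme could be wired up, not the paper's exact formulation.

```python
# Sketch: confidence-weighted multi-teacher distillation with progressive decay.
# Assumed inputs: student_logits and a list of teacher_logits on the same batch.
import torch
import torch.nn.functional as F

def confidence_weights(teacher_logits_list):
    """Weight each teacher by its mean top-1 softmax confidence on the batch."""
    confs = torch.stack([
        F.softmax(t, dim=-1).max(dim=-1).values.mean()
        for t in teacher_logits_list
    ])
    # Normalize across teachers so the weights sum to one.
    return F.softmax(confs, dim=0)

def distill_loss(student_logits, teacher_logits_list, T=4.0):
    """Confidence-weighted sum of per-teacher KL-divergence distillation terms."""
    w = confidence_weights(teacher_logits_list)
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    loss = 0.0
    for wi, t_logits in zip(w, teacher_logits_list):
        p_t = F.softmax(t_logits / T, dim=-1)
        loss = loss + wi * F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T
    return loss

def progressive_alpha(step, total_steps, alpha0=1.0):
    """Linearly decay the distillation weight so guidance shifts from
    teacher-driven (coarse) to task-driven (fine) over training."""
    return alpha0 * (1.0 - step / total_steps)

# Usage (hypothetical training loop):
#   total = ce_loss + progressive_alpha(step, total_steps) * \
#           distill_loss(student_logits, teacher_logits_list)
```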
Submission Number: 9