In Defense of Prompt-based Continual Learning: Task Interference Mitigation via Confidence-Stratified Classifier Calibration
Keywords: Prompt-based Continual Learning, Continual Learning, task interference
Abstract: Prompt-based Continual Learning is a promising direction that effectively leverages the capabilities of pre-trained models.
Prompt-based methods typically learn task-specific prompts and predict the task ID to select a prompt during inference.
However, task interference caused by prompt misselection significantly constrains their performance.
Although several studies have proposed remedies for this problem,
they treat all samples equally and do not specifically address the samples that are primarily responsible for the issue.
In this paper, we first partition samples into three distinct categories according to how they respond to prompt misselection:
susceptible samples, refractory samples, and resilient samples.
Based on this stratification, we reveal that susceptible samples are the main source of task interference,
as only their classification results are affected by prompt misselection, while the other samples remain unaffected.
This observation drives us to design a novel prompt-based approach called Confidence-Stratified Classifier Calibration (CoSC),
which mitigates task interference arising from prompt misselection by targeting its root cause.
Specifically, we leverage the distinct properties of each sample category to calibrate the classifiers of both the prompt instruction component and the prompt selector,
thereby reducing the exposure of susceptible samples to incorrect prompts.
Extensive experiments show that CoSC outperforms prompt-based counterparts and achieves state-of-the-art performance
across various benchmarks under the class-incremental setting.
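As a rough illustration of the stratification idea described above, the sketch below groups samples by comparing predictions made with the correct prompt against predictions made with a misselected one. The `model(x, prompt=...)` interface and the exact grouping rule are assumptions for illustration only; the abstract does not specify the paper's actual criteria.

```python
import torch

@torch.no_grad()
def stratify_samples(model, x, y, correct_prompt, wrong_prompt):
    """Assign each sample to 'resilient', 'susceptible', or 'refractory'.

    Hypothetical rule (not from the paper): compare predictions under the
    correct prompt with predictions under a misselected prompt.
    """
    pred_correct = model(x, prompt=correct_prompt).argmax(dim=-1)  # prediction with the right prompt
    pred_wrong = model(x, prompt=wrong_prompt).argmax(dim=-1)      # prediction with a misselected prompt

    ok_correct = pred_correct.eq(y).tolist()
    ok_wrong = pred_wrong.eq(y).tolist()

    labels = []
    for ok_c, ok_w in zip(ok_correct, ok_wrong):
        if ok_c and ok_w:
            labels.append("resilient")    # correct regardless of prompt choice
        elif ok_c and not ok_w:
            labels.append("susceptible")  # hurt only by prompt misselection
        else:
            labels.append("refractory")   # misclassified even with the right prompt
    return labels
```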
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 12836