Keywords: Prompt generation, Lifelong continual adaptation, Diffusion model, Foundation model
Abstract: Machine learning models deployed in dynamic environments often face distribution shifts that are not entirely novel but instead recurring and long-term. To capture this practical scenario, we introduce Lifelong Continual Adaptation (LCA), where models are trained on multiple domains and then deployed on sequential streams in which these domains recur over time. Because such recurrences can be anticipated, LCA seeks to reuse domain-specific knowledge without retraining whenever a domain reappears. Continual Test-Time Adaptation (TTA) likewise operates on sequential test streams but assumes each incoming domain is unseen, i.e., out-of-distribution (OOD). When applied to LCA, its reliance on online unsupervised training becomes dispensable (there are no novel domains to relearn), unstable (errors accumulate across recurrences), and inefficient (backpropagation through large models is costly). To overcome these issues, we propose DiffPrompt, a diffusion-based prompt generation framework that produces domain-specific prompts to guide a frozen vision foundation model. A conditional diffusion model learns the distribution of prompts across domains during training and generates prompts conditioned on incoming data batches during deployment. Experiments on DomainNet and ImageNet-C show that DiffPrompt achieves stable and efficient adaptation, outperforming empirical risk minimization (ERM) and continual TTA baselines and validating LCA as a realistic and non-trivial setting.
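The abstract describes the core mechanism: a conditional diffusion model learns a distribution over domain-specific prompts and, at deployment, samples a prompt conditioned on a summary of the incoming batch, with no backpropagation at test time. The sketch below illustrates that idea under assumed choices and is not the authors' implementation: the DDPM-style schedule, the `PromptDenoiser` architecture, the conditioning via a batch-level feature vector, and all dimensions are hypothetical.

```python
# Minimal sketch of the DiffPrompt idea, assuming a DDPM-style noise-prediction
# model over flattened prompt vectors. All names, sizes, and the schedule are
# illustrative assumptions, not the paper's actual design.
import torch
import torch.nn as nn

T = 100  # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class PromptDenoiser(nn.Module):
    """Predicts the noise added to a prompt vector, conditioned on a
    batch-level feature summary and the diffusion timestep (both assumed)."""
    def __init__(self, prompt_dim: int, cond_dim: int, hidden: int = 512):
        super().__init__()
        self.time_emb = nn.Embedding(T, hidden)
        self.net = nn.Sequential(
            nn.Linear(prompt_dim + cond_dim + hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, prompt_dim),
        )

    def forward(self, noisy_prompt, cond, t):
        h = torch.cat([noisy_prompt, cond, self.time_emb(t)], dim=-1)
        return self.net(h)

def training_loss(model, prompt, cond):
    # Standard DDPM noise-prediction loss, here applied to domain-specific
    # prompts collected during the multi-domain training phase.
    t = torch.randint(0, T, (prompt.size(0),))
    noise = torch.randn_like(prompt)
    a_bar = alphas_bar[t].unsqueeze(-1)
    noisy = a_bar.sqrt() * prompt + (1.0 - a_bar).sqrt() * noise
    return nn.functional.mse_loss(model(noisy, cond, t), noise)

@torch.no_grad()
def sample_prompt(model, cond, prompt_dim: int):
    # Ancestral DDPM sampling: generate a prompt for the incoming batch.
    # Note this is pure inference, so deployment needs no backpropagation.
    x = torch.randn(cond.size(0), prompt_dim)
    for t in reversed(range(T)):
        tt = torch.full((cond.size(0),), t, dtype=torch.long)
        eps = model(x, cond, tt)
        beta, a_bar = betas[t], alphas_bar[t]
        x = (x - beta / (1.0 - a_bar).sqrt() * eps) / (1.0 - beta).sqrt()
        if t > 0:
            x = x + beta.sqrt() * torch.randn_like(x)
    return x
```

In this sketch, the sampled vector would be reshaped into prompt tokens and prepended to the frozen foundation model's input sequence, so recurring domains are handled by regenerating their prompts rather than by online retraining.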
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 9737