Keywords: temporal domain generalization, prompt-tuning
TL;DR: The paper proposes a novel, scalable method that adapts machine learning models to data drift over time using drift-aware prompts, improving performance across tasks without retraining.
Abstract: Machine learning models typically assume that training and testing data are drawn from the same independent and identically distributed (i.i.d.) distribution. However, in real-world deployment, data often evolves over time. Addressing this challenge requires models that can efficiently adapt at test time without retraining. This paper introduces a prompting-based test-time adaptation framework for temporal domain generalization that enables pre-trained models to efficiently adapt to evolving distributions without retraining. Our method is both parameter- and time-efficient, leveraging global prompts, domain-specific prompts, and drift-aware prompts to model and forecast temporal shifts in data distributions. By extrapolating these learned adaptations, our approach enables pre-trained models to remain adaptive in dynamic environments. We demonstrate the adaptability, scalability, and generality of our framework across classification, regression, time-series forecasting, and NLP tasks, highlighting its effectiveness in adapting foundation models to real-world temporal shifts.
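The abstract describes learning per-domain prompts and extrapolating them to future, unseen time steps. The toy sketch below illustrates that idea under stated assumptions: all names and shapes are hypothetical, and a simple least-squares linear fit along the time axis stands in for whatever drift-aware forecaster the paper actually learns; the frozen backbone itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one learned prompt vector per observed time domain,
# plus a shared global prompt. Dimensions are illustrative only.
prompt_dim = 8
num_domains = 5  # observed training domains t = 0..4

global_prompt = rng.normal(size=prompt_dim)
# In this toy example the domain-specific prompts drift smoothly over time.
drift_direction = rng.normal(size=prompt_dim)
domain_prompts = np.stack([
    global_prompt + 0.1 * t * drift_direction
    + 0.01 * rng.normal(size=prompt_dim)
    for t in range(num_domains)
])

def forecast_prompt(prompts: np.ndarray, future_t: int) -> np.ndarray:
    """Extrapolate per-domain prompts to a future time step with a
    per-dimension linear fit (a stand-in for a learned drift model)."""
    ts = np.arange(len(prompts))
    # np.polyfit with a 2D y fits each prompt dimension independently:
    # prompts[t, d] ~ a[d] * t + b[d].
    a, b = np.polyfit(ts, prompts, deg=1)
    return a * future_t + b

# Forecast the prompt for the unseen next domain (t = 5) and use it as the
# model's prompt at test time, without updating any backbone weights.
future_prompt = forecast_prompt(domain_prompts, future_t=num_domains)
print(future_prompt.shape)
```

The design point this is meant to convey: only the small prompt vectors change over time, so adapting to a new time step costs a forecast of a few parameters rather than any retraining of the backbone.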
Submission Number: 3