DPCore: Dynamic Prompt Coreset for Continual Test-Time Adaptation

Published: 01 May 2025, Last Modified: 18 Jun 2025, ICML 2025 poster, License: CC BY 4.0
TL;DR: We propose a dynamic Continual Test-Time Adaptation (CTTA) setup and a novel CTTA method DPCore.
Abstract: Continual Test-Time Adaptation (CTTA) seeks to adapt source pre-trained models to continually changing, unseen target domains. While existing CTTA methods assume structured domain changes with uniform durations, real-world environments often exhibit dynamic patterns where domains recur with varying frequencies and durations. Current approaches, which adapt the same parameters across different domains, struggle in such dynamic conditions—they face convergence issues with brief domain exposures, risk forgetting previously learned knowledge, or misapplying it to irrelevant domains. To remedy this, we propose **DPCore**, a method designed for robust performance across diverse domain change patterns while ensuring computational efficiency. DPCore integrates three key components: Visual Prompt Adaptation for efficient domain alignment, a Prompt Coreset for knowledge preservation, and a Dynamic Update mechanism that intelligently adjusts existing prompts for similar domains while creating new ones for substantially different domains. Extensive experiments on four benchmarks demonstrate that DPCore consistently outperforms various CTTA methods, achieving state-of-the-art performance in both structured and dynamic settings while reducing trainable parameters by 99% and computation time by 64% compared to previous approaches.
Lay Summary: AI models, like those in autonomous vehicles, often falter when moving from familiar training grounds to the real world's ever-changing conditions: sunshine to rain, fog to tunnels. Current adaptation techniques aren't built for such dynamic, unpredictable shifts, often leading to errors, forgetting past lessons, or misusing learned knowledge when domains change rapidly or appear only briefly. We introduce DPCore, a novel method that helps AI adapt efficiently to these challenges. DPCore uses adjustable "visual prompts" – small, learnable instructions for the AI – and maintains a "prompt coreset," a streamlined memory of key visual characteristics from past environments. When faced with a new situation, DPCore intelligently decides whether to adjust an existing prompt from its memory if the scene is similar to something seen before, or to create a fresh prompt if the environment is distinctly new. DPCore enables AI to maintain strong performance even as surroundings change rapidly and erratically, significantly outperforming previous methods in these realistic dynamic settings. Remarkably, it achieves this with 99% fewer adaptable parameters and 64% less computation time compared to earlier approaches. Our work also introduces a more realistic way to test AI adaptation, which we call "Continual Dynamic Change," better reflecting real-world complexities.
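As a rough illustration of the dynamic update idea described above, the following Python sketch keeps a coreset of (domain-statistic, prompt) pairs and decides, per test batch, whether to refine the nearest existing prompt or to add a new one. The names (`PromptCoreset`, `distance_threshold`), the choice of statistics, and the update rule are illustrative assumptions rather than the paper's exact formulation; the linked repository below contains the actual method.

```python
# Minimal sketch of a prompt-coreset update loop (assumed interface, not the
# official DPCore implementation): keep per-domain statistics and prompts,
# refine the closest entry for similar domains, spawn a new one otherwise.
import numpy as np


class PromptCoreset:
    def __init__(self, prompt_dim, distance_threshold=2.0, lr=0.1):
        self.prompt_dim = prompt_dim
        self.distance_threshold = distance_threshold  # similarity cutoff (assumed)
        self.lr = lr                                  # statistic update rate (assumed)
        self.stats = []    # per-entry domain statistics (batch feature means)
        self.prompts = []  # per-entry visual prompt parameters

    def _distance(self, a, b):
        # Euclidean distance between batch-level feature means.
        return float(np.linalg.norm(a - b))

    def adapt(self, batch_features):
        """Return the prompt to use for this batch and update the coreset."""
        batch_stat = batch_features.mean(axis=0)

        if self.prompts:
            dists = [self._distance(batch_stat, s) for s in self.stats]
            idx = int(np.argmin(dists))
            if dists[idx] < self.distance_threshold:
                # Similar domain: refine the existing entry's statistics.
                # (A real implementation would also take a gradient step on the
                #  prompt against an alignment loss; only bookkeeping is shown.)
                self.stats[idx] = (1 - self.lr) * self.stats[idx] + self.lr * batch_stat
                return self.prompts[idx]

        # Substantially different domain: create a fresh prompt.
        new_prompt = np.zeros(self.prompt_dim)
        self.stats.append(batch_stat)
        self.prompts.append(new_prompt)
        return new_prompt


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coreset = PromptCoreset(prompt_dim=16)
    for shift in (0.0, 0.2, 5.0):  # two similar domains, then a distinct one
        features = rng.normal(loc=shift, scale=1.0, size=(32, 16))
        coreset.adapt(features)
    print(f"coreset size: {len(coreset.prompts)}")  # expected: 2
```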
Link To Code: https://github.com/yunbeizhang/DPCore
Primary Area: General Machine Learning->Transfer, Multitask and Meta-learning
Keywords: Test-Time Adaptation, Continual Test-Time Adaptation, Visual Prompt, Transfer Learning
Submission Number: 8477