\textsc{PGO-BEn}: Proxy-Guided Orthogonalization and Beta Ensembling for Few-Shot Domain-Incremental Learning

TMLR Paper6328 Authors

28 Oct 2025 (modified: 27 Nov 2025) · Under review for TMLR · CC BY 4.0
Abstract: Continual adaptation to evolving domains with minimal supervision is essential for real-world deployment of machine learning systems. We formalize this objective as \textbf{Few-Shot Domain-Incremental Learning (FSDIL)}, where a model must adapt to each new domain using only a few labeled samples while retaining prior knowledge without access to previous data. This setting mirrors practical constraints in domains such as autonomous driving and medical imaging, where annotations are expensive and data retention is restricted by privacy regulations. Pre-trained vision–language models such as CLIP provide a strong initialization for FSDIL due to their transferable multi-modal representations. However, adapting CLIP incrementally under domain shifts remains challenging: few-shot updates often trigger \emph{catastrophic forgetting} while providing insufficient \emph{plasticity} across evolving distributions. To address these challenges, we introduce \textbf{\textsc{PGO-BEn}} (\textit{Proxy-Guided Orthogonalization and Beta Ensembling}), a rehearsal-free framework that leverages CLIP’s semantic priors via prompt learning while preserving prior-domain knowledge through two key mechanisms. (1) \textbf{Proxy-Guided Orthogonalization (PGO)} identifies conflicts between current gradients and proxy representations of past knowledge, inferred from current samples, and projects conflicting updates into an orthogonal subspace to prevent knowledge degradation. (2) \textbf{Beta Ensembling (BEn)} introduces a Beta-function-based temporal ensembling strategy that adaptively balances stability and plasticity, outperforming conventional exponential moving average (EMA) approaches at retaining early-domain knowledge.
We extensively evaluate \textsc{PGO-BEn} on three diverse benchmarks (\textbf{DomainNet}, \textbf{CORe50}, and \textbf{CDDB-Hard}) and demonstrate consistent improvements over state-of-the-art domain-incremental and few-shot learning methods across all supervision levels in this challenging setting.
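As a minimal illustrative sketch (not the authors' implementation), the two mechanisms described in the abstract can be approximated as follows: PGO as a conflict test between the current gradient and a proxy direction for past knowledge, removing the conflicting component when their inner product is negative; BEn as Beta-pdf-shaped weights over a sequence of model checkpoints, which, unlike EMA's exponentially decaying weights, need not concentrate all mass on recent steps. The function names, the sign-based conflict test, and the Beta parameters `a`, `b` are assumptions for illustration only.

```python
import math

def project_conflicting(grad, proxy):
    """Hypothetical PGO step: if grad conflicts with the proxy
    direction (negative dot product), subtract grad's component
    along proxy so the update lies in the orthogonal subspace."""
    dot = sum(g * p for g, p in zip(grad, proxy))
    if dot >= 0:
        return list(grad)  # no conflict: keep the update unchanged
    norm_sq = sum(p * p for p in proxy)
    return [g - (dot / norm_sq) * p for g, p in zip(grad, proxy)]

def beta_weights(num_checkpoints, a=2.0, b=2.0):
    """Hypothetical BEn weighting: normalized Beta(a, b) pdf values
    at checkpoint midpoints in (0, 1). With a = b = 2 the weights
    are symmetric, so early checkpoints keep as much mass as late
    ones, in contrast to an EMA's exponential decay."""
    xs = [(t + 0.5) / num_checkpoints for t in range(num_checkpoints)]
    w = [x ** (a - 1) * (1.0 - x) ** (b - 1) for x in xs]
    total = sum(w)
    return [wi / total for wi in w]
```

For example, `project_conflicting([1.0, -1.0], [0.0, 1.0])` detects a conflict and returns `[1.0, 0.0]`, which is orthogonal to the proxy, while a non-conflicting gradient passes through unchanged.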
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Lijun_Zhang1
Submission Number: 6328