Semantic Routing in Pretrained Vision Models for Online Domain-Incremental Learning

ICLR 2026 Conference Submission 18120 Authors

19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: generalization, continual learning, domain-incremental learning, pretrained models, catastrophic forgetting
TL;DR: Equipping frozen pre-trained vision encoders with semantic adapter heads to boost generalization and adaptation in online domain-incremental learning.
Abstract: Learning in the real world requires models to evolve with changing environments and cope with diverse forms of distribution shift. This is especially challenging in online domain-incremental learning, where data arrive as a non-stationary stream, each sample can be seen only once, and past observations cannot be revisited. Although pre-trained models can provide strong initial representations, standard fine-tuning in this setting leads to forgetting and poor cross-domain generalization. Inspired by how the human brain organizes experiences around semantic concepts, we propose Semantic Adapters (SAD): lightweight modules placed on top of any frozen pre-trained vision encoder that leverage structured semantic knowledge to guide representation updates. By routing updates toward semantic clusters rather than domains, SAD stabilizes learning while enabling fast, one-pass adaptation. To further increase flexibility, we introduce SADLoRA, which augments the adapter heads with low-rank parameter updates inside the encoder, improving adaptability while maintaining efficiency. Extensive experiments across diverse domain shifts show that both SAD variants substantially reduce forgetting and accelerate adaptation. The proposed semantic routing with targeted updates offers a simple, fast, and scalable solution for robust continual adaptation in dynamic real-world scenarios.
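
The abstract names the architecture but gives no implementation details, so the following is a minimal PyTorch sketch of the semantic-routing idea under stated assumptions: routing is done by cosine similarity to fixed semantic-cluster prototypes, and the names SemanticAdapterHeads and online_step, as well as the prototype initialization, are hypothetical illustrations rather than the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticAdapterHeads(nn.Module):
    """Illustrative sketch of semantic adapter heads over a frozen encoder.

    One lightweight head per semantic cluster; each sample is routed to the
    head whose cluster prototype is closest to its frozen feature, and only
    that head receives gradient updates.
    """

    def __init__(self, feat_dim: int, num_classes: int, num_clusters: int, hidden: int = 256):
        super().__init__()
        self.num_classes = num_classes
        # Placeholder prototypes; in practice these would encode the semantic
        # cluster structure (an assumption, not specified by the abstract).
        self.register_buffer(
            "prototypes", F.normalize(torch.randn(num_clusters, feat_dim), dim=1)
        )
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, num_classes))
            for _ in range(num_clusters)
        )

    def route(self, feats: torch.Tensor) -> torch.Tensor:
        # Assign each feature to its nearest semantic prototype (cosine similarity).
        sims = F.normalize(feats, dim=1) @ self.prototypes.T
        return sims.argmax(dim=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        idx = self.route(feats)
        logits = feats.new_empty(feats.size(0), self.num_classes)
        for k in idx.unique():
            mask = idx == k
            logits[mask] = self.heads[int(k)](feats[mask])
        return logits


def online_step(encoder, sad, optimizer, x, y):
    """Single-pass online update: the encoder stays frozen, only routed heads learn."""
    with torch.no_grad():
        feats = encoder(x)            # frozen pre-trained features
    loss = F.cross_entropy(sad(feats), y)
    optimizer.zero_grad()
    loss.backward()                   # gradients reach only the selected heads
    optimizer.step()
    return loss.item()
```

The SADLoRA variant described in the abstract would additionally inject low-rank parameter updates inside the encoder (e.g., a LoRA-style parameterization of selected weight matrices); that part is omitted from this sketch.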
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 18120