Keywords: continual learning, adaptation, efficiency
TL;DR: We propose a simple yet strong baseline to challenge and evaluate progress in FM-based class-incremental learning (CIL).
Abstract: Continual adaptation is essential for maintaining the relevance of foundation models (FMs) as data and tasks evolve. While recent research in class-incremental learning (CIL) has primarily focused on adapting pre-trained large-scale transformers, it remains unclear how well these methods actually perform compared to lightweight convolutional networks. Without such comparisons, it is difficult to know whether recent advances truly surpass strong ConvNet-based baselines. To address this gap, we propose Slim Adaptation Modules (SAM), which keep the majority of the pre-trained backbone frozen while enabling rapid and scalable adaptation through sparse task-specific layers. SAM achieves up to a ~5× reduction in trainable parameters and a ~6× reduction in total parameters, substantially lowering the cost of continual updates. Extensive experiments across diverse benchmarks demonstrate that this simple yet strong design not only mitigates catastrophic forgetting but also consistently outperforms state-of-the-art CIL methods, including those based on foundation models. This positions SAM as a simple and strong baseline for guiding future FM-based CIL research.
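The abstract only sketches the design at a high level (a mostly frozen pre-trained backbone plus slim task-specific layers). Below is a minimal, hypothetical PyTorch sketch of that general idea; the backbone choice, adapter shape, and class/method names are assumptions for illustration, not the authors' actual SAM implementation.

```python
# Hypothetical sketch of "frozen backbone + slim per-task layers" for CIL.
# Not the paper's SAM code; layer sizes and structure are illustrative only.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class SlimAdapterCIL(nn.Module):
    def __init__(self, feat_dim: int = 512, adapter_dim: int = 64):
        super().__init__()
        # Pre-trained backbone is kept frozen throughout continual updates.
        self.backbone = resnet18(weights="IMAGENET1K_V1")
        self.backbone.fc = nn.Identity()
        for p in self.backbone.parameters():
            p.requires_grad = False
        # One lightweight adapter and classifier head per task; only these train.
        self.adapters = nn.ModuleList()
        self.heads = nn.ModuleList()
        self.feat_dim, self.adapter_dim = feat_dim, adapter_dim

    def add_task(self, num_classes: int) -> None:
        # Slim bottleneck adapter: few trainable parameters per new task.
        self.adapters.append(nn.Sequential(
            nn.Linear(self.feat_dim, self.adapter_dim),
            nn.ReLU(),
            nn.Linear(self.adapter_dim, self.feat_dim),
        ))
        self.heads.append(nn.Linear(self.feat_dim, num_classes))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        with torch.no_grad():  # frozen feature extraction
            feats = self.backbone(x)
        feats = feats + self.adapters[task_id](feats)  # residual slim adaptation
        return self.heads[task_id](feats)
```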
Serve As Reviewer: ~Elif_Ceren_Gok_Yildirim1
Submission Number: 37