Keywords: fm-based continual learning, peft, neuroscience
TL;DR: We propose a new parameter-efficient fine-tuning (PEFT) module for foundation models and apply it to class-incremental learning through a neuro-inspired mechanism.
Abstract: Foundation model-based class-incremental learners achieve strong performance but still struggle with the stability-plasticity trade-off: excessive plasticity heavily modifies the general knowledge of the pre-trained model and causes forgetting, while excessive stability hinders adaptation to new classes. This necessitates an effective adaptation that introduces minimal yet functional modifications. To address this, we first introduce a new parameter-efficient fine-tuning module, ‘Learn and Calibrate’, or LuCA, designed to acquire task-specific knowledge through an adapter-calibrator couple, enabling well-refined feature representations. Then, for each task, we deploy a sparse LuCA module on top of the final classification token [CLS] just before the classifier, which we refer to as ‘Token-level Sparse Calibration and Adaptation’, or TOSCA. By leaving the generalization capabilities of the foundation model intact and adapting exclusively via the last token, our approach achieves a harmonious balance between stability and plasticity while reducing both training and inference complexity. We demonstrate that TOSCA yields state-of-the-art performance while introducing ~8× fewer parameters.
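The abstract describes an adapter-calibrator couple applied to the [CLS] token. The sketch below illustrates one plausible reading of that idea; the bottleneck adapter, sigmoid gating calibrator, dimensions, and residual wiring are all assumptions for illustration, not the authors' exact design.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, BOTTLENECK = 768, 64  # assumed hidden and bottleneck sizes

# Hypothetical LuCA parameters: a down/up adapter and a calibrator gate.
W_down = rng.standard_normal((DIM, BOTTLENECK)) * 0.02
W_up = rng.standard_normal((BOTTLENECK, DIM)) * 0.02
W_gate = rng.standard_normal((DIM, DIM)) * 0.02

def luca(cls_token):
    """Sketch of 'Learn and Calibrate' on a [CLS] feature vector.

    Adapter: residual bottleneck projection acquiring task-specific knowledge.
    Calibrator: element-wise sigmoid gate refining the adapted feature.
    """
    adapted = cls_token + np.maximum(cls_token @ W_down, 0.0) @ W_up
    gate = 1.0 / (1.0 + np.exp(-(adapted @ W_gate)))  # values in (0, 1)
    return adapted * gate  # calibrated representation, same shape as input

# Usage: refine a batch of frozen-backbone [CLS] features before a classifier.
cls_batch = rng.standard_normal((4, DIM))
refined = luca(cls_batch)
print(refined.shape)  # (4, 768)
```

Because only these small matrices are trained per task while the backbone stays frozen, the modification stays parameter-efficient, consistent with the stability-plasticity balance the abstract claims.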
Serve As Reviewer: ~Murat_Onur_Yildirim1
Submission Number: 36