Representation Finetuning for Continual Learning

15 Sept 2025 (modified: 30 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: continual learning, reft, finetuning
Abstract: The world is inherently dynamic, and continual learning aims to enable models to adapt to ever-evolving data streams. Pre-trained models have shown strong performance in continual learning. However, since pre-trained models acquire knowledge from static datasets, they still require finetuning to adapt effectively to downstream tasks. Traditional finetuning methods are largely empirical, lack explicit objectives, and still require a relatively large number of parameters. In this work, we introduce $\textbf{Co}$ntinual $\textbf{R}$epresentation L$\textbf{e}$arning ($\textbf{CoRe}$), a novel framework that, for the first time, applies low-rank linear subspace representation finetuning to continual learning. Unlike conventional finetuning approaches, CoRe adopts a learning paradigm with explicit objectives rather than relying on black-box optimization, achieving more efficient parameter utilization and superior performance. Extensive experiments across multiple continual learning benchmarks demonstrate that CoRe not only preserves parameter efficiency but also significantly outperforms existing methods. Our work extends the applicability of representation finetuning and introduces a new, efficient finetuning paradigm for continual learning.
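The abstract does not spell out CoRe's update rule; as a hedged illustration of what a low-rank linear subspace representation edit (in the style of LoReFT) typically looks like, assuming a frozen hidden representation $h \in \mathbb{R}^{d}$ and an illustrative rank $r \ll d$:

$$\Phi(h) = h + R^{\top}\bigl(Wh + b - Rh\bigr), \qquad R \in \mathbb{R}^{r \times d},\ RR^{\top} = I_{r},\ W \in \mathbb{R}^{r \times d},\ b \in \mathbb{R}^{r}.$$

In this sketch only $R$, $W$, and $b$ are trained while the pre-trained backbone stays frozen, so the edit is confined to the $r$-dimensional subspace spanned by the rows of $R$ and the trainable parameter count scales with $r \cdot d$ rather than $d^{2}$; the specific objectives CoRe attaches to such an edit are described in the paper, not here.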
Primary Area: applications to neuroscience & cognitive science
Submission Number: 5724