Evaluating Knowledge Retention in Continual Learning

Published: 01 Jan 2024, Last Modified: 01 Mar 2025 · SAC 2024 · CC BY-SA 4.0
Abstract: Continual learning is a machine learning paradigm in which models are expected to learn new tasks sequentially without forgetting previously acquired knowledge. Measuring the quality of this preserved knowledge remains a challenge, particularly when the model has access to only a limited number of prior experiences and risks overfitting to them rather than improving generalization. To address this issue, we introduce a novel Knowledge Retention metric that determines whether a model retains useful information or merely optimizes for short-term performance. For memory-based continual learning approaches, the proposed metric helps identify whether the evaluated method uses its memory simply to re-learn tasks from scratch, providing valuable insight into both the memory and the method that uses it.