Abstract: Continual learning is a machine learning paradigm in which models are expected to learn new tasks sequentially without forgetting previously acquired knowledge. Measuring the quality of this preserved knowledge remains challenging, particularly when the model has access to only a limited number of prior experiences and risks overfitting to them rather than improving generalization. To address this issue, we introduce a novel Knowledge Retention metric that determines whether a model retains useful information or merely optimizes for short-term performance. For memory-based continual learning approaches, the proposed metric helps identify whether the evaluated method uses its memory to re-learn tasks from scratch, providing valuable insight into both the memory and the method that uses it.