Concept-Driven Continual Learning

TMLR Paper 2830 Authors

08 Jun 2024 (modified: 12 Aug 2024) · Decision pending for TMLR · CC BY-SA 4.0
Abstract: This paper introduces novel solutions to the challenge of catastrophic forgetting in continual learning: Interpretability Guided Continual Learning (IG-CL) and the Intrinsically Interpretable Neural Network (IN2). These frameworks bring interpretability into continual learning, systematically managing human-understandable concepts within neural network models to enhance knowledge retention from previous tasks. Our methods are designed to enhance interpretability, providing transparency and control over the continual training process. While our primary focus is to provide a new framework for designing continual learning algorithms based on interpretability rather than on improving performance, we observe that our methods often surpass existing ones: IG-CL employs interpretability tools to guide neural networks, showing an improvement of up to 1.4% in average incremental accuracy over existing methods; IN2, inspired by the Concept Bottleneck Model, adjusts concept units for both new and existing tasks, reducing average incremental forgetting by up to 9.1%. Both frameworks demonstrate superior performance compared to exemplar-free methods and are competitive with exemplar-based methods. When combined with exemplar-based strategies, they further improve performance by up to 18%. These advancements represent a significant step in addressing the limitations of current continual learning methods, offering efficient and interpretable approaches that do not require additional memory for past data.
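For readers unfamiliar with the Concept Bottleneck Model that IN2 draws on, the sketch below illustrates the general bottleneck structure in PyTorch: inputs are first mapped to a vector of human-understandable concept activations, and the label is predicted only from those concepts. This is a minimal, hypothetical illustration of the concept-bottleneck idea, not the authors' IN2; all class names and dimensions (`ConceptBottleneck`, `n_concepts`, etc.) are assumptions for the example, and the cross-task management of concept units described in the abstract is not shown.

```python
import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    """Minimal concept-bottleneck sketch: x -> concepts -> label.

    Hypothetical illustration only; IN2 additionally adjusts concept
    units across tasks, which is not modeled here.
    """
    def __init__(self, in_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Maps raw features to human-understandable concept scores.
        self.concept_head = nn.Linear(in_dim, n_concepts)
        # Predicts the task label from concept activations alone.
        self.label_head = nn.Linear(n_concepts, n_classes)

    def forward(self, x: torch.Tensor):
        concepts = torch.sigmoid(self.concept_head(x))  # concept activations in [0, 1]
        logits = self.label_head(concepts)              # prediction made only from concepts
        return concepts, logits

# Usage: both heads can be supervised when concept annotations are available.
model = ConceptBottleneck(in_dim=512, n_concepts=32, n_classes=10)
x = torch.randn(4, 512)
concepts, logits = model(x)
```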
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=3BzsAW14Ud
Changes Since Last Submission: Following the Action Editor's request, we revised our paper on the following points: 1. Report the hyperparameter tuning process in Appendix D.1. 2. Clarify the statistically significant improvements in the experimental results. We also emphasize that the main goal of our work is to bring interpretability into continual learning algorithms to address the catastrophic forgetting issue. Specifically, we propose two new interpretable continual learning frameworks, IG-CL and IN2. While our primary focus is on introducing a new framework for designing continual learning algorithms based on interpretability rather than on performance enhancement, our findings demonstrate that our methods frequently outperform existing methods.
Assigned Action Editor: ~Andrew_Kyle_Lampinen1
Submission Number: 2830