Concept-Driven Continual Learning

Published: 10 Sept 2024, Last Modified: 17 Sept 2024. Accepted by TMLR. License: CC BY 4.0
Abstract: This paper introduces two novel solutions to the challenge of catastrophic forgetting in continual learning: Interpretability Guided Continual Learning (IG-CL) and Intrinsically Interpretable Neural Network (IN2). These frameworks bring interpretability into continual learning, systematically managing human-understandable concepts within neural network models to enhance knowledge retention from previous tasks. Our methods are designed to enhance interpretability, providing transparency and control over the continual training process. While our primary focus is on providing a new framework for designing continual learning algorithms based on interpretability rather than on improving performance, we observe that our methods often surpass existing ones: IG-CL employs interpretability tools to guide neural networks, showing an improvement of up to 1.4% in average incremental accuracy over existing methods; IN2, inspired by the Concept Bottleneck Model, adeptly adjusts concept units for both new and existing tasks, reducing average incremental forgetting by up to 9.1%. Both frameworks demonstrate superior performance compared to exemplar-free methods, are competitive with exemplar-based methods, and can further improve performance by up to 18% when combined with exemplar-based strategies. Additionally, IG-CL and IN2 are memory-efficient, as they do not require extra memory for storing data from previous tasks. These advancements mark a promising new direction in continual learning through enhanced interpretability.
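To make the "concept units" idea concrete, below is a minimal, hedged sketch of a concept-bottleneck-style module of the kind IN2 builds on: inputs are mapped to human-understandable concept activations, and a per-task head maps those concepts to class logits. This is not the authors' implementation; the class name `ConceptBottleneck`, the `add_task` helper, and all dimensions are hypothetical choices for illustration.

```python
# Illustrative sketch (assumption, not the paper's code): a concept-bottleneck-style
# model in PyTorch. A shared backbone produces features, a linear "concept layer"
# yields interpretable concept activations, and each task gets its own head
# mapping concepts to class logits.
import torch
import torch.nn as nn


class ConceptBottleneck(nn.Module):
    def __init__(self, backbone: nn.Module, feature_dim: int, num_concepts: int):
        super().__init__()
        self.backbone = backbone                                    # shared feature extractor
        self.concept_layer = nn.Linear(feature_dim, num_concepts)   # "concept units"
        self.task_heads = nn.ModuleList()                           # one head per task

    def add_task(self, num_classes: int) -> int:
        """Attach a new head that maps concept activations to the new task's classes."""
        self.task_heads.append(nn.Linear(self.concept_layer.out_features, num_classes))
        return len(self.task_heads) - 1

    def forward(self, x: torch.Tensor, task_id: int):
        features = self.backbone(x)
        concepts = torch.sigmoid(self.concept_layer(features))      # interpretable bottleneck
        logits = self.task_heads[task_id](concepts)
        return concepts, logits


# Usage: a tiny backbone and one task, on MNIST-sized inputs (hypothetical setup).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
model = ConceptBottleneck(backbone, feature_dim=64, num_concepts=10)
t0 = model.add_task(num_classes=5)
concepts, logits = model(torch.randn(4, 1, 28, 28), task_id=t0)
```

In a continual-learning setting, keeping the concept layer shared while adding task-specific heads is one simple way such a bottleneck can be extended across tasks; how IN2 actually adjusts concept units for new and existing tasks is described in the paper itself.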
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=3BzsAW14Ud
Changes Since Last Submission: Following the Action Editor's request, we revised our paper on the following points: 1. Report the hyperparameter tuning process in Appendix D.1. 2. Clarify the statistically significant improvements in the experimental results. We also emphasize that the main goal of our work is to bring interpretability into continual learning algorithms to address the catastrophic forgetting issue. Specifically, we propose two new interpretable continual learning frameworks, IG-CL and IN2. While our primary focus is on introducing a new framework for designing continual learning algorithms based on interpretability rather than on performance enhancement, our findings demonstrate that our methods frequently outperform existing methods.
Assigned Action Editor: ~Andrew_Kyle_Lampinen1
Submission Number: 2830