Enhanced Gradient Aligned Continual Learning via Pareto Optimization

22 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: continual learning
Abstract: Catastrophic forgetting remains a core challenge in continual learning (CL), whereby models struggle to retain previous knowledge when learning new tasks. Existing gradient-alignment-based CL methods tackle this challenge by aligning gradients between previous and current tasks, but they do not carefully consider the interdependence among previously learned tasks, nor do they fully exploit the potential of seen tasks. To address this issue, we first adopt the MiniMax theorem and reformulate the commonly adopted gradient alignment optimization problem as a gradient weighting framework. We then incorporate Pareto optimality to capture the interrelationship among previously learned tasks and design a Pareto regularized gradient alignment algorithm (PRGA), which effectively enhances the overall performance of past tasks while preserving the performance of the current task. Comprehensive empirical results demonstrate that the proposed PRGA outperforms current state-of-the-art continual learning methods across multiple datasets and settings.
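The submission does not include code. As a rough, hypothetical illustration of the gradient-weighting idea the abstract describes (not the authors' PRGA algorithm), the PyTorch sketch below combines past-task gradients with MGDA-style min-norm weights and then applies a GEM-style projection so the current-task gradient does not conflict with the combined past-task direction; the function names and the specific projection step are assumptions for illustration only.

```python
import torch


def min_norm_two(g1, g2):
    """Closed-form min-norm convex combination of two gradient vectors.

    Returns gamma in [0, 1] minimizing ||gamma * g1 + (1 - gamma) * g2||,
    the standard two-point solution used in MGDA-style Pareto methods.
    """
    diff = g1 - g2
    denom = diff.dot(diff)
    if denom <= 1e-12:  # g1 == g2: any convex combination is equivalent
        return 0.5
    gamma = (g2 - g1).dot(g2) / denom
    return float(gamma.clamp(0.0, 1.0))


def pareto_weighted_update(past_grads, cur_grad):
    """Hypothetical sketch: weight past-task gradients toward a common
    descent direction, then align the current-task gradient with it.

    past_grads: list of flattened gradients from replayed past tasks
    cur_grad:   flattened gradient of the current task
    """
    # Fold past-task gradients together pairwise with min-norm weights
    # (a crude stand-in for a full Pareto / Frank-Wolfe solver).
    g_past = past_grads[0]
    for g in past_grads[1:]:
        gamma = min_norm_two(g_past, g)
        g_past = gamma * g_past + (1.0 - gamma) * g

    # GEM-style projection: if the current gradient conflicts with the
    # past-task direction, remove the conflicting component.
    dot = cur_grad.dot(g_past)
    if dot < 0:
        cur_grad = cur_grad - (dot / g_past.dot(g_past)) * g_past
    return cur_grad
```

This is a minimal sketch under the stated assumptions; the paper's actual method additionally regularizes the weighting toward Pareto optimality across past tasks, which is not reproduced here.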
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4436