Bound and Average: Leveraging Weights as Knowledge for Class Incremental Learning

16 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: continual learning, transfer learning
Abstract: We present a novel training approach, named Bound-and-Average (BaA), for Class Incremental Learning (CIL) that leverages weight ensembling and constrained optimization, motivated by recent advances in transfer learning. Our algorithm involves two types of weight averaging: inter-task and intra-task. Inter-task weight averaging integrates the capabilities of previous models by averaging the weights of the models from all previous stages, while intra-task weight averaging enriches the learning of the current task by averaging model parameters within the current stage. We also propose a bounded update technique that optimizes the target model with minimal cumulative updates, preserving knowledge from previous tasks; this strategy shows that new models can be obtained effectively in the vicinity of old ones, reducing catastrophic forgetting. BaA integrates seamlessly into existing CIL methods without modifying architectural components or revising learning objectives. We extensively evaluate our algorithm on standard CIL benchmarks and demonstrate superior performance compared to state-of-the-art methods.
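The two mechanisms the abstract names, weight averaging across models and a bounded (clipped) update around the previous model, can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's implementation: the function names, the flat-list parameter representation, and the per-parameter clipping radius are all hypothetical.

```python
# Hypothetical sketch of weight averaging and a bounded update, as one
# plausible reading of the abstract; names and details are our own.
# Models are represented as flat lists of scalar parameters.

def average_weights(models):
    """Element-wise mean of parameters across a list of models.

    Applied across stages, this corresponds to inter-task averaging;
    applied to checkpoints within a stage, to intra-task averaging.
    """
    n = len(models)
    return [sum(params) / n for params in zip(*models)]

def bounded_update(old, new, radius):
    """Clip each parameter of `new` to within +/- radius of `old`,
    keeping the updated model close to the previous one (an assumed
    instantiation of the 'minimal cumulative updates' idea)."""
    return [max(o - radius, min(o + radius, w)) for o, w in zip(old, new)]

# Toy usage: average three stage models, then bound a large update.
task_models = [[0.0, 1.0], [0.2, 0.8], [0.4, 1.2]]
avg = average_weights(task_models)                  # -> [0.2, 1.0]
updated = bounded_update(avg, [5.0, -3.0], radius=0.5)  # -> [0.7, 0.5]
```

The design choice here is per-parameter box clipping for simplicity; a norm-ball projection around the old weights would be an equally plausible reading of "bounded update".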
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 686