Invariance as a Necessary Condition for Online Continual Learning

20 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: continual learning, online continual learning
Abstract: Traditional supervised learning aims to learn only the features that are sufficient to classify the currently given classes. This is highly problematic for continual learning (CL), which learns a sequence of tasks incrementally, and it is a major cause of catastrophic forgetting (CF). Although numerous CL methods have been proposed to mitigate CF, theoretical understanding of the problem remains limited. Recent work showed that if the CL learner can learn as many features as possible from the data (dubbed holistic representations), CF can be significantly reduced. This paper shows that learning holistic representations is insufficient: it is also necessary to learn invariant representations, because many features in the data are irrelevant or variant across tasks, and learning them can itself cause CF. The paper studies this both theoretically and empirically, and proposes a novel invariant feature learning method, grounded in causal inference theory, for online CL, which boosts online CL performance markedly.
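The abstract does not detail the proposed method, so the sketch below is not the paper's algorithm. To make the idea of an invariance objective concrete, it shows one well-known invariant-representation penalty, IRMv1 (Arjovsky et al., 2019, "Invariant Risk Minimization"), applied under the assumption that each CL task or replay-buffer split is treated as an "environment"; the function names, the per-environment batching, and the weight lam are illustrative placeholders.

import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    # IRMv1 gradient penalty: a dummy classifier scale w = 1.0
    # multiplies the logits; the squared norm of the loss gradient
    # w.r.t. w measures how far the representation is from admitting
    # one classifier that is simultaneously optimal everywhere.
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def invariance_regularized_loss(model, env_batches, lam=1.0):
    # Hypothetical usage: one (x, y) batch per environment (e.g., per
    # task); add the averaged penalty to the averaged ERM loss.
    erm, pen = 0.0, 0.0
    for x, y in env_batches:
        logits = model(x)
        erm = erm + F.cross_entropy(logits, y)
        pen = pen + irm_penalty(logits, y)
    n = len(env_batches)
    return erm / n + lam * pen / n

The penalty pushes the feature extractor toward representations whose optimal classifier is the same in every environment, i.e., toward the invariant features the abstract argues are necessary, while variant features that help only in some environments incur a gradient penalty.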
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2142