Time-Sensitive Replay for Continual Learning

21 Sept 2023 (modified: 11 Feb 2024), Submitted to ICLR 2024
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Continual Learning, Replay Learning, Task-Free Learning
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We propose a continual learning system that introduces replay in a time-sensitive manner to reduce model training time without the need for task definitions.
Abstract: Continual learning emulates the process of human learning, allowing a model to learn a large number of tasks sequentially without forgetting knowledge obtained from preceding tasks. Replay-based continual learning methods reintroduce examples from previous tasks to mitigate catastrophic forgetting. However, current replay-based methods often reintroduce training examples unnecessarily, leading to inefficiency, and require task information before training, which presupposes prior knowledge of the training data stream. We propose a novel replay method, Time-Sensitive Replay (TSR), that reduces the number of replayed examples while maintaining accuracy. TSR detects drift in the model's predictions while learning a task and preemptively prevents forgetting events by reintroducing previously encountered examples into the training set. We extend this method to a task-free setting with Task-Free TSR (TF-TSR). In our experiments on benchmark datasets, our approach trains 23% to 25% faster than current task-based continual learning methods and 48% to 58% faster than task-free methods while maintaining accuracy.
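To make the abstract's mechanism concrete, here is a minimal sketch of drift-triggered replay, assuming a PyTorch classifier and a drift check based on the model's loss over a stored memory buffer. The class name `TimeSensitiveReplay`, the `drift_threshold` hyperparameter, the baseline-loss comparison, and the reservoir-style buffer are illustrative assumptions for this sketch, not the paper's actual algorithm.

```python
import random
import torch
import torch.nn.functional as F

class TimeSensitiveReplay:
    """Sketch of drift-triggered replay: store examples from the stream,
    and replay them only when the model's loss on the stored memory
    drifts above a baseline (a hypothetical proxy for a forgetting event)."""

    def __init__(self, model, drift_threshold=0.1, memory_size=1000):
        self.model = model
        self.drift_threshold = drift_threshold  # assumed hyperparameter
        self.memory_size = memory_size
        self.memory = []         # (input, label) pairs seen so far
        self.seen = 0            # number of stream examples observed
        self.baseline_loss = None  # memory loss recorded at the first check

    def store(self, x, y):
        # Reservoir sampling keeps a uniform sample of the stream
        # (one simple buffer policy; the paper may use another).
        self.seen += 1
        if len(self.memory) < self.memory_size:
            self.memory.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.memory_size:
                self.memory[j] = (x, y)

    def drift_detected(self):
        # Compare current loss on the memory against the recorded baseline;
        # a rise beyond the threshold is treated as prediction drift.
        if not self.memory:
            return False
        xs = torch.stack([x for x, _ in self.memory])
        ys = torch.stack([y for _, y in self.memory])
        with torch.no_grad():
            loss = F.cross_entropy(self.model(xs), ys).item()
        if self.baseline_loss is None:
            self.baseline_loss = loss
            return False
        return loss - self.baseline_loss > self.drift_threshold

    def replay_batch(self, k=32):
        # Draw a batch of memory examples to mix into the current
        # gradient step once drift has been detected.
        idx = torch.randperm(len(self.memory))[:k]
        xs = torch.stack([self.memory[i][0] for i in idx])
        ys = torch.stack([self.memory[i][1] for i in idx])
        return xs, ys
```

In a training loop, one would call `store` on each incoming example, check `drift_detected` periodically, and add a `replay_batch` term to the loss only when the check fires; replaying conditionally rather than at every step is where the reported training-time savings would come from.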
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3148