Bridging the gap between offline and online continual learning

20 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Continual Learning; Lifelong Learning; Online Continual Learning; Class-incremental Learning; Task-free Continual Learning; Offline Continual Learning
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: This work provides a theoretical framework that unifies online and offline continual learning, showing that online CL leads to a tighter generalization bound.
Abstract: Instead of training deep neural networks offline on a large static dataset, continual learning (CL) considers a new learning paradigm in which deep networks are trained continually, on the fly, from a non-stationary data stream. Despite recent progress, continual learning remains an open challenge. Many CL techniques still require offline training on large batches of data chunks (i.e., tasks) over multiple epochs. Conventional wisdom holds that online continual learning, which assumes single-pass data, is strictly harder than offline continual learning due to the combined challenges of catastrophic forgetting and underfitting within a single training epoch. Here, we challenge this assumption by empirically demonstrating that online CL can match or exceed the performance of its offline counterpart given equivalent memory and computational resources. This finding is further verified across different CL approaches and benchmarks. To better understand these counterintuitive experimental findings, we design a framework that unifies and interpolates between online and offline CL, and we provide a theoretical analysis showing that online CL can yield a tighter generalization bound than offline CL.
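To make the distinction in the abstract concrete, the sketch below contrasts the two training regimes: offline CL revisits each task's data chunk for multiple epochs, while online CL makes a single pass over the non-stationary stream. This is a minimal illustrative example in a PyTorch-style setup, not the authors' implementation; helper names such as `make_task_loaders` and the synthetic data are assumptions made for illustration.

```python
# Illustrative sketch (not the paper's code) contrasting offline CL
# (multi-epoch training on each task chunk) with online CL (single-pass
# training over the stream). Data and model are synthetic placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def make_task_loaders(num_tasks=3, n=256, dim=16, batch_size=32):
    """Hypothetical helper: one loader of synthetic data per task."""
    loaders = []
    for t in range(num_tasks):
        x = torch.randn(n, dim)
        y = torch.full((n,), t, dtype=torch.long)  # one class per task, purely for illustration
        loaders.append(DataLoader(TensorDataset(x, y), batch_size=batch_size, shuffle=True))
    return loaders

def offline_cl(model, task_loaders, epochs=5, lr=1e-2):
    """Offline CL: each task's data chunk is stored and revisited for several epochs."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for loader in task_loaders:          # tasks arrive sequentially
        for _ in range(epochs):          # multiple passes over the current chunk
            for x, y in loader:
                opt.zero_grad()
                nn.functional.cross_entropy(model(x), y).backward()
                opt.step()

def online_cl(model, task_loaders, lr=1e-2):
    """Online CL: every batch from the stream is seen exactly once."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for loader in task_loaders:          # tasks arrive sequentially
        for x, y in loader:              # single pass, no revisiting
            opt.zero_grad()
            nn.functional.cross_entropy(model(x), y).backward()
            opt.step()

# Example usage with a toy linear classifier.
model = nn.Linear(16, 3)
online_cl(model, make_task_loaders())
```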
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2150