SIESTA: Efficient Online Continual Learning with Sleep

Published: 01 Nov 2023, Last Modified: 01 Nov 2023. Accepted by TMLR.
Abstract: In supervised continual learning, a deep neural network (DNN) is updated from an ever-growing data stream. Unlike the offline setting where data is shuffled, we cannot make any distributional assumptions about the data stream. Ideally, only one pass through the dataset is needed for computational efficiency. However, existing methods are inadequate: they rely on assumptions that do not hold in real-world applications, while simultaneously failing to improve computational efficiency. In this paper, we propose SIESTA, a novel continual learning method based on a wake/sleep training framework that is well aligned with the needs of on-device learning. The major goal of SIESTA is to advance compute-efficient continual learning so that DNNs can be updated using far less time and energy. The principal innovations of SIESTA are: 1) rapid online updates using a rehearsal-free, backpropagation-free, and data-driven network update rule during its wake phase, and 2) expedited memory consolidation using a compute-restricted rehearsal policy during its sleep phase. For memory efficiency, SIESTA adapts latent rehearsal using memory indexing from REMIND. Compared to REMIND and prior methods, SIESTA is far more computationally efficient, enabling continual learning on ImageNet-1K in under 2 hours on a single GPU; moreover, in the augmentation-free setting it matches the performance of the offline learner, a milestone critical to driving adoption of continual learning in real-world applications.
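
To make the wake/sleep structure described above concrete, the following is a minimal, hypothetical sketch of such a loop, not SIESTA's actual implementation: the wake phase performs gradient-free running-mean updates on latent features from a frozen backbone and appends compressed latents to a replay buffer (a crude stand-in for REMIND-style quantized memory indexing), and the sleep phase rehearses from that buffer under a fixed step budget. All function and variable names (`frozen_backbone`, `wake_update`, `sleep_consolidation`, the float16 storage) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES, FEAT_DIM, INPUT_DIM = 10, 64, 32
PROJ = rng.standard_normal((INPUT_DIM, FEAT_DIM)) * 0.1


def frozen_backbone(x):
    """Stand-in for a frozen DNN feature extractor producing latent features."""
    return np.tanh(x @ PROJ)


# Wake-phase state: per-class running sums of latent features, so the output
# layer can be updated online without rehearsal or backpropagation.
class_sums = np.zeros((NUM_CLASSES, FEAT_DIM))
class_counts = np.zeros(NUM_CLASSES)

# Latent replay buffer; float16 storage is a cheap stand-in for the compressed
# feature codes that REMIND-style memory indexing would store.
latent_buffer, label_buffer = [], []


def wake_update(x, y):
    """Rapid online update on one labeled example; no gradients are computed."""
    z = frozen_backbone(x)
    class_sums[y] += z
    class_counts[y] += 1
    latent_buffer.append(z.astype(np.float16))
    label_buffer.append(y)


def sleep_consolidation(budget=200, lr=0.01):
    """Memory consolidation: rehearse stored latents under a fixed compute budget."""
    # Initialize the softmax output layer from the wake-phase class means.
    W = class_sums / np.maximum(class_counts[:, None], 1.0)
    for _ in range(budget):
        i = int(rng.integers(len(latent_buffer)))
        z = latent_buffer[i].astype(np.float32)
        scores = W @ z
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        grad = np.outer(probs, z)      # softmax cross-entropy gradient
        grad[label_buffer[i]] -= z
        W -= lr * grad                 # only the output layer is trained
    return W


# Toy single-pass stream followed by one sleep phase.
for _ in range(500):
    y = int(rng.integers(NUM_CLASSES))
    x = rng.standard_normal(INPUT_DIM) + y
    wake_update(x, y)

W_consolidated = sleep_consolidation()
```

The property this sketch tries to mirror is that all gradient computation is deferred to the sleep phase and capped by an explicit budget, which is what keeps the wake-phase updates cheap enough to run online.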
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: We have prepared a camera-ready version with an updated publication date and a link to the code.
Code: https://yousuf907.github.io/siestasite
Assigned Action Editor: ~Laurent_Charlin1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1450