Learn, Unlearn and Relearn: An Online Learning Paradigm for Deep Neural Networks

Published: 20 Feb 2023, Last Modified: 28 Feb 2023
Accepted by: TMLR
Abstract: Deep neural networks (DNNs) are often trained on the premise that the complete training data set is available ahead of time. However, in real-world scenarios, data often arrive in chunks over time. This raises the question of how best to train DNNs in such settings: whether to fine-tune them on each incoming chunk of data (warm-start) or to retrain them from scratch on the entire corpus whenever a new chunk becomes available. While the latter is resource-intensive, recent work has pointed out that warm-started models generalize poorly. Therefore, to strike a balance between efficiency and generalization, we introduce "Learn, Unlearn, and Relearn (LURE)," an online learning paradigm for DNNs. LURE alternates between an unlearning phase, which selectively forgets undesirable information in the model through data-dependent weight reinitialization, and a relearning phase, which emphasizes learning generalizable features. We show that our training paradigm provides consistent performance gains across datasets in both classification and few-shot settings. We further show that it leads to more robust and well-calibrated models.
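The unlearn/relearn alternation is easiest to see in code. Below is a minimal, hypothetical PyTorch sketch of such a loop: the saliency score (magnitude of weight times gradient on the new chunk) and the reset fraction are illustrative assumptions, not the paper's actual criterion; the authors' implementation is in the linked repository.

```python
import torch
import torch.nn as nn

def unlearn(model, batch, criterion, reset_frac=0.2):
    """Unlearning phase (sketch): score weights on the incoming data chunk
    and reinitialize the lowest-scoring fraction of each linear layer."""
    x, y = batch
    model.zero_grad()
    criterion(model(x), y).backward()            # gradients w.r.t. the new chunk
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Linear) and module.weight.grad is not None:
                w = module.weight
                score = (w * w.grad).abs()       # hypothetical saliency score
                k = int(reset_frac * score.numel())
                idx = score.flatten().topk(k, largest=False).indices
                fresh = torch.empty_like(w)
                nn.init.kaiming_uniform_(fresh)
                w.flatten()[idx] = fresh.flatten()[idx]  # reset low-score weights

def relearn(model, loader, criterion, optimizer, epochs=1):
    """Relearning phase (sketch): ordinary training on the available data."""
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()

# Alternating the two phases as each data chunk arrives:
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
for chunk in [(torch.randn(64, 10), torch.randint(0, 2, (64,)))] * 3:
    unlearn(model, chunk, criterion)
    relearn(model, [chunk], criterion, optimizer)  # stand-in for the full-data loader
```

The key design point the sketch illustrates is that forgetting is targeted rather than wholesale: only a data-dependent subset of weights is reset, so the model keeps most of what it has learned while freeing capacity before the relearning phase.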
Submission Length: Regular submission (no more than 12 pages of main content)
Video: https://www.youtube.com/watch?v=efulZvJBZ04&ab_channel=NeurAI
Code: https://github.com/NeurAI-Lab/LURE
Assigned Action Editor: ~Neil_Houlsby1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 518