Learn, Unlearn and Relearn: An Online Learning Paradigm for Deep Neural Networks

22 Sept 2022 (modified: 14 Jul 2025) · ICLR 2023 Conference Desk Rejected Submission · Readers: Everyone
Keywords: warm-start, generalization, online learning, weight reinitialization, active forgetting, anytime learning
TL;DR: An efficient online learning paradigm that alternates between an unlearning phase (selective forgetting) and a relearning phase (retraining) to improve the generalization of DNNs through weight reinitialization.
Abstract: Deep neural networks (DNNs) are often trained under the premise that the complete training data set is available ahead of time. In real-world scenarios, however, data often arrive in chunks over time. This raises an important question about the optimal training strategy: should the model be fine-tuned on each incoming chunk (warm-start), or retrained from scratch on the entire corpus whenever a new chunk arrives? Retraining from scratch is computationally inefficient, while recent work has pointed out that warm-started models generalize poorly. To strike a balance between efficiency and generalization, we introduce Learn, Unlearn, and Relearn (LURE), an online learning paradigm for DNNs. LURE alternates between an unlearning phase, which selectively forgets undesirable information in the model through data-dependent weight reinitialization, and a relearning phase, which emphasizes learning generalizable features. We show that our training paradigm provides consistent performance gains across datasets in both classification and few-shot settings. We further show that it leads to more robust and better-calibrated models.
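
A minimal sketch of what such an alternating loop might look like, in PyTorch. The abstract does not specify the data-dependent reinitialization criterion, so the weight-magnitude score, the `reinit_fraction` parameter, and the plain SGD relearning loop below are illustrative assumptions, not the paper's actual procedure:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def unlearn(model: nn.Module, reinit_fraction: float = 0.2) -> None:
    """Unlearning phase: selectively reinitialize a fraction of weights.

    Assumption: we score weights by magnitude and reset the smallest ones;
    the paper's actual data-dependent criterion may differ.
    """
    with torch.no_grad():
        for param in model.parameters():
            if param.dim() < 2:  # skip biases / norm params in this sketch
                continue
            k = int(reinit_fraction * param.numel())
            if k == 0:
                continue
            # Indices of the k smallest-magnitude entries.
            idx = param.abs().view(-1).topk(k, largest=False).indices
            fresh = torch.empty_like(param)
            nn.init.kaiming_uniform_(fresh)
            param.view(-1)[idx] = fresh.view(-1)[idx]


def relearn(model: nn.Module, loader, epochs: int = 1, lr: float = 1e-3) -> None:
    """Relearning phase: retrain on the currently available data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()


def lure(model: nn.Module, chunk_loaders, epochs_per_chunk: int = 1) -> nn.Module:
    """Alternate unlearning and relearning as data chunks arrive over time."""
    for loader in chunk_loaders:
        unlearn(model)                             # selective forgetting
        relearn(model, loader, epochs_per_chunk)   # retrain on available data
    return model
```

The key design point the abstract emphasizes is that forgetting is selective and data-dependent rather than a full reset, which is what lets the paradigm avoid the cost of retraining from scratch while escaping the poor generalization of pure warm-starting.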
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (eg, speech processing, computer vision, NLP)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/learn-unlearn-and-relearn-an-online-learning/code)