Meta-learning with Prototypical Query Sample Editing for Low Resource Continual Learning

Anonymous

16 Dec 2022 (modified: 05 May 2023) · ACL ARR 2022 December Blind Submission
Abstract: Memory replay, also known as experience rehearsal, is a mainstream method for retaining knowledge in continual learning (CL), especially in NLP. Recently, meta-learning has been introduced to augment memory replay: it enables efficient knowledge transfer and thereby alleviates catastrophic forgetting. However, memory replay is often applied only episodically to avoid over-fitting to past samples, and its sparse occurrence limits the model's convergence on past samples, especially in low-resource scenarios. This paper aims to fully exploit the potential of meta-learning. We study the feasibility of solving lifelong language learning with meta-learning alone. We propose an optimization-based meta-learning framework for CL built on MAML. In particular, we edit query information for meta-learning via a prototypical network, and we modify the meta-objective to reflect a CL scenario. We conduct extensive experiments on benchmark text classification datasets. The results demonstrate the superiority of our method in terms of forgetting mitigation, fast adaptation, and memory efficiency in low-resource NLP scenarios.
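Since only the abstract is available here, the following is a minimal, hypothetical sketch of the kind of pipeline it describes: a second-order MAML inner/outer loop in which query samples are scored against class prototypes (prototypical-network style) computed from the adapted encoder, and the prototype loss serves as the meta-objective. Every name (Encoder, prototypes, proto_logits, meta_step), the dimensions, and the learning rates are illustrative assumptions, not the authors' method or released code.

```python
# Hypothetical sketch: MAML adaptation + prototype-based query scoring.
# Assumes every class appears in the support set of each task.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_dim=32, hid=64, n_classes=5):
        super().__init__()
        self.feat = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.head = nn.Linear(hid, n_classes)

    def forward(self, x, params=None):
        # Functional forward so adapted ("fast") weights can be used in MAML.
        if params is None:
            h = self.feat(x)
            return self.head(h), h
        h = F.relu(F.linear(x, params["feat.0.weight"], params["feat.0.bias"]))
        return F.linear(h, params["head.weight"], params["head.bias"]), h

def prototypes(feats, labels, n_classes):
    # Mean embedding per class (prototypical network).
    return torch.stack([feats[labels == c].mean(0) for c in range(n_classes)])

def proto_logits(feats, protos):
    # Negative squared Euclidean distance to each prototype, used as logits.
    return -((feats.unsqueeze(1) - protos.unsqueeze(0)) ** 2).sum(-1)

def meta_step(model, support, query, n_classes=5, inner_lr=0.1):
    (xs, ys), (xq, yq) = support, query
    params = dict(model.named_parameters())
    # Inner loop: one MAML adaptation step on the support set.
    logits, _ = model(xs, params)
    grads = torch.autograd.grad(F.cross_entropy(logits, ys),
                                params.values(), create_graph=True)
    fast = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}
    # Query "editing": score query embeddings against support prototypes
    # from the adapted encoder; this prototype loss is the meta-objective.
    _, fs = model(xs, fast)
    _, fq = model(xq, fast)
    return F.cross_entropy(proto_logits(fq, prototypes(fs, ys, n_classes)), yq)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = Encoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(3):  # stand-in for a stream of sequential tasks (CL)
        xs, ys = torch.randn(25, 32), torch.arange(5).repeat(5)
        xq, yq = torch.randn(25, 32), torch.arange(5).repeat(5)
        loss = meta_step(model, (xs, ys), (xq, yq))
        opt.zero_grad(); loss.backward(); opt.step()
        print(f"meta-loss: {loss.item():.3f}")
```

The sketch uses `create_graph=True` so the outer update differentiates through the inner adaptation step (second-order MAML); how the actual paper edits query information and shapes the CL meta-objective may differ from this guess.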
Paper Type: long
Research Area: Machine Learning for NLP