Abstract: Recent continual learning (CL) models use meta learning to enable efficient cross-domain knowledge transfer and thus enhance sparse experience rehearsal (also called episodic memory replay). However, knowledge transfer can be constrained by its episodic occurrence, especially when the training sets are small and/or the replay frequency is low (usually 1%). This paper studies the feasibility of addressing CL problems with meta learning alone. In particular, we devise an optimisation-based meta-learning framework for CL built on MAML, in which query samples are edited to generalise the learned knowledge. We conduct extensive experiments on text classification in a low-resource CL setup, where the training set is downsized to 10% of its original size. The experimental results demonstrate the superiority of our method in terms of stability, fast adaptation, memory efficiency and knowledge retention across various domains.
Paper Type: long
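To make the described setup concrete, below is a minimal sketch of a MAML-style inner/outer loop with an editing step applied to query samples before the outer update. The toy linear classifier over pre-computed embeddings, the `edit_queries` noise perturbation, and all hyperparameters are illustrative assumptions; the abstract does not specify the paper's actual editing procedure or model architecture.

```python
# Sketch of a MAML-style meta-update with query-sample editing.
# Assumes tasks provide (support, query) splits of embedded text samples.
import torch
import torch.nn.functional as F

EMB_DIM, N_CLASSES, INNER_LR, OUTER_LR, INNER_STEPS = 32, 4, 0.1, 1e-2, 3

# Meta-parameters of a toy linear classifier (an assumption, not the
# paper's architecture): weight and bias.
w = torch.randn(N_CLASSES, EMB_DIM, requires_grad=True)
b = torch.zeros(N_CLASSES, requires_grad=True)
meta_opt = torch.optim.Adam([w, b], lr=OUTER_LR)

def forward(x, w, b):
    return x @ w.t() + b

def edit_queries(x_q, y_q):
    # Hypothetical stand-in for the paper's query-editing step: here we
    # simply perturb query embeddings with small Gaussian noise.
    return x_q + 0.01 * torch.randn_like(x_q), y_q

def maml_step(support, query):
    (x_s, y_s), (x_q, y_q) = support, edit_queries(*query)
    fast_w, fast_b = w, b
    for _ in range(INNER_STEPS):
        loss = F.cross_entropy(forward(x_s, fast_w, fast_b), y_s)
        # create_graph=True retains second-order terms for the outer update.
        gw, gb = torch.autograd.grad(loss, (fast_w, fast_b), create_graph=True)
        fast_w, fast_b = fast_w - INNER_LR * gw, fast_b - INNER_LR * gb
    # Outer loss: evaluate adapted parameters on the edited query samples.
    return F.cross_entropy(forward(x_q, fast_w, fast_b), y_q)

# One meta-update over a batch of synthetic tasks (stand-ins for domains).
meta_opt.zero_grad()
meta_loss = 0.0
for _ in range(4):
    x = torch.randn(8, EMB_DIM)
    y = torch.randint(0, N_CLASSES, (8,))
    meta_loss = meta_loss + maml_step((x[:4], y[:4]), (x[4:], y[4:]))
(meta_loss / 4).backward()
meta_opt.step()
```

The editing step sits on the query side so that the outer objective, rather than the inner adaptation, is what enforces generalisation of the knowledge learned during adaptation.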