Rethinking Rehearsal in Lifelong Learning: Does An Example Contribute the Plasticity or Stability?

29 Sept 2021 (modified: 13 Feb 2023), ICLR 2022 Conference Withdrawn Submission
Keywords: Lifelong Learning, Rehearsal
Abstract: Lifelong Learning (LL) is the sequential counterpart of multi-task learning: it learns new tasks in order, as humans do. Traditionally, the primary goal of LL is to achieve a trade-off between Stability (remembering past tasks) and Plasticity (adapting to new tasks). Rehearsal, which stores examples from old tasks to remind the model of them, is one of the most effective ways to obtain such a trade-off. However, Stability and Plasticity (SP) are usually evaluated only after a model has been fully trained, and it remains unknown what leads to the final SP in rehearsal-based LL. In this paper, we study the cause of SP from the perspective of example differences. First, we theoretically analyze example-level SP via the influence function and derive the influence of each example on the final SP. Moreover, to avoid the computational burden of the Hessian for each example, we propose a simple yet effective MetaSP algorithm that simulates the acquisition of example-level SP. Last but not least, we find that by adjusting the weight of each training example, a solution on the SP Pareto front can be obtained, resulting in a better SP trade-off for LL. Empirical results show that our algorithm significantly outperforms state-of-the-art methods on benchmark LL datasets.
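To make the idea of example-level SP more concrete, below is a minimal, Hessian-free sketch (not the paper's MetaSP implementation) of one common first-order approximation: scoring each rehearsal example by how well its gradient aligns with the average gradient on old-task data (stability) and on new-task data (plasticity). The setup, function names, and the gradient-alignment scoring rule are illustrative assumptions for this sketch, not details taken from the paper.

```python
# Illustrative sketch only: a first-order, Hessian-free proxy for an example's
# contribution to stability vs. plasticity via gradient alignment. Not the
# authors' MetaSP algorithm; all names here are assumptions for the example.
import torch
import torch.nn as nn
import torch.nn.functional as F


def flat_grad(loss, params):
    """Gradient of `loss` w.r.t. `params`, flattened into one vector."""
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])


def per_example_sp_scores(model, mem_x, mem_y, old_x, old_y, new_x, new_y):
    """Score each rehearsal example's alignment with stability and plasticity.

    A positive stability score means the example's gradient points in a
    direction that also reduces loss on old-task data; a positive plasticity
    score means it also reduces loss on new-task data.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # Reference gradients: stability (old tasks) and plasticity (new task).
    g_stab = flat_grad(F.cross_entropy(model(old_x), old_y), params)
    g_plas = flat_grad(F.cross_entropy(model(new_x), new_y), params)

    stab_scores, plas_scores = [], []
    for i in range(mem_x.size(0)):
        loss_i = F.cross_entropy(model(mem_x[i:i + 1]), mem_y[i:i + 1])
        g_i = flat_grad(loss_i, params)
        stab_scores.append(torch.dot(g_i, g_stab).item())
        plas_scores.append(torch.dot(g_i, g_plas).item())
    return stab_scores, plas_scores


if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 5))
    mem_x, mem_y = torch.randn(8, 10), torch.randint(0, 5, (8,))    # rehearsal buffer
    old_x, old_y = torch.randn(16, 10), torch.randint(0, 5, (16,))  # old-task probe
    new_x, new_y = torch.randn(16, 10), torch.randint(0, 5, (16,))  # new-task batch
    stab, plas = per_example_sp_scores(model, mem_x, mem_y, old_x, old_y, new_x, new_y)
    print(list(zip(stab, plas)))  # per-example (stability, plasticity) scores
```

In a reweighting scheme of the kind the abstract describes, scores like these could be turned into per-example weights (e.g., favoring examples that score well on whichever of stability or plasticity is currently under-served), though how the paper actually trades the two off on the SP Pareto front is specified in the full text, not here.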