Testing the limits of data efficiency in experience replay

Published: 10 Oct 2024, Last Modified: 27 Oct 2024 · Continual FoMo Poster · CC BY 4.0
Keywords: Continual Learning, Rehearsal-based Methods, Knowledge Distillation (KD), Data Efficiency
TL;DR: This paper investigates the role of logits quality in distillation-based rehearsal methods for continual learning.
Abstract:

In continual learning, rehearsal-based methods store a subset of observed data in a buffer for replay during training. The computational efficiency of these methods is tied to their data efficiency, i.e., the size of the buffer they require. In this work we expose a nuanced picture of rehearsal, underscoring the role of implicit biases on the road towards scalable CL.
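The buffer-based replay the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's method: the `ReplayBuffer` class and its reservoir-sampling insertion policy are assumptions chosen because reservoir sampling is a common way to maintain a bounded, uniformly sampled buffer over a data stream.

```python
import random

class ReplayBuffer:
    """Hypothetical rehearsal buffer using reservoir sampling (illustrative sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity  # buffer size -- the "data efficiency" knob
        self.data = []
        self.num_seen = 0         # total stream examples observed so far

    def add(self, example):
        # Reservoir sampling: every example seen so far is retained
        # with equal probability capacity / num_seen.
        self.num_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            idx = random.randrange(self.num_seen)
            if idx < self.capacity:
                self.data[idx] = example

    def sample(self, batch_size):
        # Draw a replay batch to interleave with the current task's data.
        return random.sample(self.data, min(batch_size, len(self.data)))
```

During training on a new task, each minibatch of current-task data would be augmented with a batch drawn via `sample`, so the model rehearses earlier experience while learning new classes.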

Submission Number: 5