Batch Sampling for Experience Replay

Published: 01 Jan 2024, Last Modified: 23 Jan 2024 · COMAD/CODS 2024
Abstract: Continual learning aims to build models that adapt to a sequence of tasks while preserving prior knowledge. Experience Replay, a strategy that retains training samples from previous tasks and replays them during later training, is one of the most effective approaches for mitigating catastrophic forgetting. Current replay-based methods typically select samples from memory based on their individual properties, overlooking the collective impact of a sample batch. To address this shortcoming, we introduce Batch Cosine Distance, a novel metric that measures how the hidden representations of a batch change before and after a model update. This metric not only identifies the samples most susceptible to forgetting, but also quantifies the diversity of the affected regions of their embeddings. To empirically validate the metric, we propose Random Batch Sampling, a proof-of-concept method that ranks a small number of random batches drawn from memory and selects the highest-scoring batch for replay according to the proposed metric. Despite its simplicity, our approach performs competitively with more sophisticated methods such as MIR (Maximally Interfered Retrieval) on the MNIST and CIFAR-10 datasets across a range of memory sizes. This study underscores the untapped potential of batch-oriented selection methods and offers a new direction for future investigation.
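The abstract does not spell out the metric or the sampling loop, so the PyTorch sketch below is one plausible reading rather than the authors' implementation: the virtual update step (in the spirit of MIR), the `feature_fn` embedding helper, and the mean-plus-spread aggregation of per-sample cosine distances are all assumptions introduced for illustration.

```python
import copy

import torch
import torch.nn.functional as F


def virtual_update(model, loss_fn, x_new, y_new, lr=0.1):
    """Take one gradient step on the incoming data using a copy of the
    model, yielding post-update parameters (assumed detail; the
    virtual-step idea follows MIR)."""
    model_v = copy.deepcopy(model)
    opt = torch.optim.SGD(model_v.parameters(), lr=lr)
    opt.zero_grad()
    loss_fn(model_v(x_new), y_new).backward()
    opt.step()
    return model_v


def batch_cosine_distance(model, model_v, x, feature_fn):
    """Score a candidate batch by how far its hidden representations
    move under the virtual update.

    `feature_fn(model, x)` is a hypothetical helper that returns the
    hidden embeddings of `x` under `model`.
    """
    with torch.no_grad():
        h_before = feature_fn(model, x)    # embeddings before the update
        h_after = feature_fn(model_v, x)   # embeddings after the update
    # Per-sample cosine distance between old and new embeddings.
    d = 1.0 - F.cosine_similarity(h_before, h_after, dim=1)
    # Assumed aggregation: mean shift plus its spread across the batch,
    # so batches whose samples are both strongly and diversely affected
    # score highest.
    return (d.mean() + d.std()).item()


def random_batch_sampling(model, model_v, x_mem, y_mem, feature_fn,
                          n_candidates=10, batch_size=32):
    """Rank a few random batches from the replay memory and return the
    highest-scoring one."""
    best_score, best_batch = float("-inf"), None
    for _ in range(n_candidates):
        idx = torch.randperm(x_mem.size(0))[:batch_size]
        score = batch_cosine_distance(model, model_v, x_mem[idx], feature_fn)
        if score > best_score:
            best_score, best_batch = score, (x_mem[idx], y_mem[idx])
    return best_batch
```

In practice, `feature_fn` could return penultimate-layer activations, and the selected batch would then be mixed with the incoming data for the actual parameter update; both choices are implementation details left open by the abstract.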