Prioritizing Samples in Reinforcement Learning with Reducible Loss

Deep RL Workshop 2022 · 08 Oct 2022 (modified: 22 Oct 2023)
Keywords: reinforcement learning, sample efficiency, experience replay
TL;DR: We propose a prioritization scheme for experience replay based on the potential for loss reduction of a data point.
Abstract: Most reinforcement learning algorithms take advantage of an experience replay buffer to repeatedly train on samples the agent has observed in the past. This prevents catastrophic forgetting; however, simply assigning equal importance to each sample is a naive strategy. In this paper, we propose a method to prioritize samples based on how much we can learn from them. We define the learnability of a sample as the steady decrease of the training loss associated with that sample over time. We develop an algorithm that prioritizes samples with high learnability, while assigning lower priority to those that are hard to learn, typically because of noise or stochasticity. We empirically show that our method is more robust than random sampling and also outperforms prioritizing with respect to the training loss alone, i.e., the temporal-difference loss, as used in vanilla prioritized experience replay.
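To make the idea concrete, below is a minimal sketch (not the authors' reference code) of a replay buffer that prioritizes by loss reduction rather than by the loss itself. The class name `ReducibleLossBuffer`, the `eps` floor, and the rule "priority = positive part of the drop in a sample's loss since its last visit" are illustrative assumptions standing in for whatever estimator the paper actually uses.

```python
import numpy as np


class ReducibleLossBuffer:
    """Toy replay buffer that prioritizes samples whose per-sample loss
    has been decreasing (high learnability) and down-weights samples
    whose loss stays flat or grows (noisy / hard to learn).

    Illustrative sketch only: priority is the positive part of the
    drop in a sample's loss between its last two training visits.
    """

    def __init__(self, capacity, eps=1e-2):
        self.capacity = capacity
        self.eps = eps              # floor so every sample keeps some probability
        self.storage = []           # transitions (s, a, r, s', done)
        self.prev_loss = []         # loss when the sample was last trained on
        self.priority = []          # current sampling priority

    def add(self, transition, initial_loss):
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)
            self.prev_loss.pop(0)
            self.priority.pop(0)
        self.storage.append(transition)
        self.prev_loss.append(initial_loss)
        self.priority.append(initial_loss + self.eps)  # new samples start with high priority

    def sample(self, batch_size):
        p = np.asarray(self.priority)
        p = p / p.sum()
        idx = np.random.choice(len(self.storage), size=batch_size, p=p)
        return idx, [self.storage[i] for i in idx]

    def update(self, idx, new_losses):
        """After a gradient step, re-prioritize each trained sample by how
        much its loss went down since its previous visit."""
        for i, loss in zip(idx, new_losses):
            reduction = self.prev_loss[i] - loss   # > 0 means the sample is being learned
            self.priority[i] = max(reduction, 0.0) + self.eps
            self.prev_loss[i] = loss
```

In contrast to vanilla prioritized experience replay, which would set `priority[i]` from the current TD loss itself, this sketch keeps noisy transitions (whose loss never shrinks) at the `eps` floor instead of repeatedly replaying them.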
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:2208.10483/code)