Actor Prioritized Experience Replay

08 Oct 2022, 17:47 (modified: 09 Dec 2022, 14:31) · Deep RL Workshop 2022
Keywords: deep reinforcement learning, off-policy learning, prioritized experience replay, actor-critic algorithms
TL;DR: This study investigates the poor empirical performance of the experience replay sampling algorithm PER in continuous action spaces and introduces a novel experience replay sampling method that overcomes the algorithmic drawbacks of PER.
Abstract: A widely studied deep reinforcement learning (RL) technique known as Prioritized Experience Replay (PER) allows agents to learn from transitions sampled with non-uniform probability proportional to their temporal-difference (TD) error. Although PER has been shown to be one of the most crucial components for the overall performance of deep RL methods in discrete action domains, many empirical studies indicate that it considerably underperforms in continuous control when combined with actor-critic algorithms. We theoretically show that actor networks cannot be effectively trained with transitions that have large TD errors. As a result, the approximate policy gradient computed under the Q-network diverges from the actual gradient computed under the optimal Q-function. Motivated by this, we introduce a new branch of improvements to PER for actor-critic methods, which also addresses issues with stability and recent findings behind the poor empirical performance of the algorithm. An extensive set of experiments verifies our theoretical claims and demonstrates that the introduced method obtains substantial gains over PER.
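For context, the PER sampling scheme the abstract refers to can be sketched as follows. This is a minimal, illustrative implementation of proportional prioritized replay (sampling probability P(i) ∝ (|TD error| + ε)^α, with importance-sampling weights annealed by β), not the paper's proposed method; the class name and default hyperparameters are assumptions chosen to match the common PER formulation.

```python
import numpy as np

class ProportionalReplay:
    """Minimal proportional PER sketch: transition i is sampled with
    probability P(i) = p_i / sum_k p_k, where p_i = (|TD error| + eps)^alpha."""

    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6, seed=0):
        self.capacity = capacity
        self.alpha, self.beta, self.eps = alpha, beta, eps
        self.data, self.priorities = [], []
        self.rng = np.random.default_rng(seed)

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:  # drop the oldest transition when full
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, batch_size):
        p = np.asarray(self.priorities)
        probs = p / p.sum()
        idx = self.rng.choice(len(self.data), size=batch_size, p=probs)
        # Importance-sampling weights correct the bias of non-uniform sampling;
        # weights are normalized by their maximum for stability, as in PER.
        weights = (len(self.data) * probs[idx]) ** (-self.beta)
        weights /= weights.max()
        return idx, [self.data[i] for i in idx], weights

    def update_priorities(self, idx, td_errors):
        # After a learning step, refresh priorities with the new TD errors.
        for i, d in zip(idx, td_errors):
            self.priorities[i] = (abs(d) + self.eps) ** self.alpha
```

A transition with a large TD error is sampled far more often than low-error ones, which is exactly the behavior the abstract argues is harmful for training the actor network in continuous control.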
Supplementary Material: zip