Variance Double-Down: The Small Batch Size Anomaly in Multistep Deep Reinforcement Learning

08 Oct 2022, 17:47 (modified: 09 Dec 2022, 14:31) — Deep RL Workshop 2022
Keywords: Reinforcement Learning, Deep Reinforcement Learning, Value-based, Batch Size, Multi-step Learning
TL;DR: We perform an exhaustive investigation into the interplay of batch size and update horizon and uncover a surprising phenomenon: when increasing the update horizon, it is more beneficial to decrease the batch size
Abstract: In deep reinforcement learning, multi-step learning is almost unavoidable to achieve state-of-the-art performance. However, the increased variance that multi-step learning brings makes it difficult to increase the update horizon beyond relatively small numbers. In this paper, we report the counterintuitive finding that decreasing the batch size parameter improves the performance of many standard deep RL agents that use multi-step learning. It is well known that gradient variance decreases with increasing batch size, so obtaining improved performance by increasing variance on two fronts is a rather surprising finding. We conduct a broad set of experiments to better understand what we call the variance double-down phenomenon.
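For context on the update horizon discussed in the abstract: a minimal sketch of the standard n-step TD target used in multi-step value-based agents (this is the textbook formulation, not code from the paper; the function name and signature are illustrative):

```python
def n_step_target(rewards, bootstrap_value, gamma=0.99):
    """Standard n-step TD target:
        G = sum_{k=0}^{n-1} gamma^k * r_k  +  gamma^n * V(s_n)
    Larger n incorporates more sampled rewards (higher variance)
    but bootstraps less (lower bias)."""
    n = len(rewards)
    discounted_rewards = sum(gamma**k * r for k, r in enumerate(rewards))
    return discounted_rewards + gamma**n * bootstrap_value

# Illustrative usage: a 2-step target with gamma = 0.5
# rewards r_0 = 1.0, r_1 = 1.0, and V(s_2) = 2.0
target = n_step_target([1.0, 1.0], 2.0, gamma=0.5)  # 1.0 + 0.5 + 0.25*2.0 = 2.0
```

Each extra step in the horizon adds another sampled reward term to the sum, which is the source of the variance increase the paper trades off against batch size.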