Fast and Data-Efficient Training of Rainbow: an Experimental Study on Atari

12 Oct 2021 (modified: 22 Oct 2023) · Deep RL Workshop, NeurIPS 2021
Keywords: reinforcement learning, Rainbow, Atari, DQN
Abstract: Across the Arcade Learning Environment, Rainbow achieves a level of performance competitive with humans and modern RL algorithms. However, attaining this level of performance requires large amounts of data and hardware resources, making research in this area computationally expensive and use in practical applications often infeasible. This paper's contribution is threefold: (1) we propose an improved version of Rainbow, seeking to drastically reduce Rainbow's data, training time, and compute requirements while maintaining its competitive performance; (2) we empirically demonstrate the effectiveness of our approach through experiments on the Arcade Learning Environment; and (3) we conduct a number of ablation studies to investigate the effect of the individual proposed modifications. Our improved version of Rainbow reaches a median human-normalized score close to classic Rainbow's, while using 20 times less data and requiring only 7.5 hours of training time on a single GPU. We also provide our full implementation, including pre-trained models.
TL;DR: An improved version of Rainbow with substantially better data efficiency and lower compute requirements.
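
The aggregate metric reported above is the median human-normalized score. As a minimal sketch, the standard per-game normalization (an agent's raw score rescaled between random-play and human baselines, as in Mnih et al., 2015) can be computed as below; the game names and raw scores are illustrative placeholders, not results from this paper.

```python
import numpy as np

def human_normalized_score(agent, random_play, human):
    """HNS = (agent - random) / (human - random) for a single game."""
    return (agent - random_play) / (human - random_play)

# Hypothetical per-game raw scores: (agent, random baseline, human baseline).
scores = {
    "Breakout": (350.0, 1.7, 30.5),
    "Pong":     (20.0, -20.7, 9.3),
    "Seaquest": (5000.0, 68.4, 20182.0),
}

# The paper-style aggregate is the median of the per-game normalized scores.
hns = [human_normalized_score(a, r, h) for a, r, h in scores.values()]
print(f"median human-normalized score: {np.median(hns):.3f}")
```

A median of 1.0 under this normalization corresponds to human-level performance on at least half of the games evaluated.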
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2111.10247/code)