Aggressive Q-Learning with Ensembles: Achieving Both High Sample Efficiency and High Asymptotic Performance

08 Oct 2022 (modified: 29 Sept 2024) · Deep RL Workshop 2022 · Readers: Everyone
Keywords: Deep Reinforcement Learning, Off-Policy, Model-Free, Sample Efficiency, Ensembles
TL;DR: We propose a simple model-free algorithm with ensembles that achieves both high sample efficiency and state-of-the-art asymptotic performance.
Abstract: Recent advances in model-free deep reinforcement learning (DRL) show that simple model-free methods can be highly effective in challenging high-dimensional continuous control tasks. In particular, Truncated Quantile Critics (TQC) achieves state-of-the-art asymptotic training performance on the MuJoCo benchmark using a distributional representation of critics, and Randomized Ensemble Double Q-Learning (REDQ) achieves sample efficiency competitive with state-of-the-art model-based methods by using a high update-to-data ratio and target randomization. In this paper, we propose a novel model-free algorithm, Aggressive Q-Learning with Ensembles (AQE), which improves on both the sample efficiency of REDQ and the asymptotic performance of TQC, thereby providing overall state-of-the-art performance during all stages of training. Moreover, AQE is very simple, requiring neither a distributional representation of critics nor target randomization. The effectiveness of AQE is further supported by our extensive experiments, ablations, and theoretical results.
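The abstract does not spell out AQE's update rule, but the kind of ensemble-based pessimistic target it alludes to can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch example, assuming an ensemble of `N_CRITICS` Q-networks whose Bellman target averages the `KEEP_LOWEST` smallest predictions; the names and the keep-the-lowest-K rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch only: an ensemble Bellman target that averages the
# K lowest of N critic predictions to curb overestimation. This is an
# assumption in the spirit of AQE/REDQ-style ensembling, not the paper's
# exact update rule.
import torch
import torch.nn as nn

N_CRITICS = 10    # ensemble size N (assumed value)
KEEP_LOWEST = 2   # number of lowest estimates kept in the target (assumed)
GAMMA = 0.99      # discount factor

class QNet(nn.Module):
    """A small MLP critic mapping (state, action) to a scalar Q-value."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def ensemble_target(target_critics, next_obs, next_act, reward, done):
    """Bellman target using the mean of the K lowest ensemble estimates."""
    with torch.no_grad():
        # Predictions from every target critic: shape (N_CRITICS, batch).
        q_next = torch.stack([q(next_obs, next_act) for q in target_critics])
        # Keep only the KEEP_LOWEST smallest estimates per transition.
        q_low, _ = torch.topk(q_next, KEEP_LOWEST, dim=0, largest=False)
        q_pessimistic = q_low.mean(dim=0)
        return reward + GAMMA * (1.0 - done) * q_pessimistic
```

Averaging several low-end ensemble estimates, rather than taking a hard minimum over two critics as in clipped double Q-learning, gives a finer-grained knob for trading off over- and underestimation bias, which is the common thread between REDQ- and TQC-style targets.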
Supplementary Material: zip
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/aggressive-q-learning-with-ensembles/code)