Fully Parameterized Quantile Function for Distributional Reinforcement Learning

Derek C Yang, Li Zhao, Zichuan Lin, Tao Qin, Jiang Bian, Tieyan Liu

06 Sept 2019 (modified: 05 May 2023)
NeurIPS 2019
Abstract: Distributional Reinforcement Learning (RL) differs from traditional RL in that it estimates the full distribution of total returns rather than only their expectation, and it has achieved state-of-the-art performance on Atari games. The key challenge in practical distributional RL algorithms lies in how to parameterize the estimated distribution so as to better approximate the true continuous distribution. Existing distributional RL algorithms parameterize either the probability side or the return-value side of the distribution function or quantile function, leaving the other side uniformly fixed (as in C51 and QR-DQN) or randomly sampled (as in IQN). In this paper, we propose a fully parameterized quantile function for distributional RL that parameterizes both the probability side and the value side. Our algorithm contains a probability proposal network that generates a discrete set of probabilities and a quantile network that gives the corresponding quantile values. The two networks are jointly trained to better approximate the true distribution. Experiments on 55 Atari games show that our algorithm significantly outperforms existing distributional RL algorithms and sets a new record on the Arcade Learning Environment.
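To make the two-network structure described in the abstract concrete, below is a minimal PyTorch sketch. It is an illustration under assumptions, not the authors' code: the class names, layer sizes, the IQN-style cosine embedding of quantile fractions, and the midpoint evaluation of quantile values are all illustrative choices.

```python
# Sketch of a fraction proposal network plus a quantile value network,
# jointly defining a fully parameterized quantile function. All names and
# hyperparameters here are assumptions for illustration.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class FractionProposalNetwork(nn.Module):
    """Proposes N quantile fractions 0 < tau_1 < ... < tau_N = 1 per state."""
    def __init__(self, embedding_dim: int, n_fractions: int = 32):
        super().__init__()
        self.fc = nn.Linear(embedding_dim, n_fractions)

    def forward(self, state_embedding: torch.Tensor) -> torch.Tensor:
        # Softmax gives positive weights summing to 1; their cumulative sum
        # is a sorted set of probabilities (quantile fractions) in (0, 1].
        probs = F.softmax(self.fc(state_embedding), dim=-1)
        return torch.cumsum(probs, dim=-1)  # shape: (batch, N)

class QuantileValueNetwork(nn.Module):
    """Maps (state embedding, fractions) to quantile values per action."""
    def __init__(self, embedding_dim: int, n_actions: int, n_cos: int = 64):
        super().__init__()
        self.n_cos = n_cos
        self.tau_embed = nn.Linear(n_cos, embedding_dim)
        self.head = nn.Sequential(
            nn.Linear(embedding_dim, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, state_embedding: torch.Tensor,
                taus: torch.Tensor) -> torch.Tensor:
        # Cosine embedding of each fraction (a choice popularized by IQN).
        i = torch.arange(1, self.n_cos + 1, device=taus.device).float()
        cos = torch.cos(taus.unsqueeze(-1) * i * math.pi)   # (B, N, n_cos)
        tau_emb = F.relu(self.tau_embed(cos))               # (B, N, emb)
        # Combine state and fraction embeddings multiplicatively.
        x = state_embedding.unsqueeze(1) * tau_emb          # (B, N, emb)
        return self.head(x)                                 # (B, N, actions)

if __name__ == "__main__":
    B, emb, A = 4, 128, 6
    phi = torch.randn(B, emb)            # stand-in for a conv state encoder
    fpn = FractionProposalNetwork(emb)
    qvn = QuantileValueNetwork(emb, A)
    taus = fpn(phi)
    taus_padded = F.pad(taus, (1, 0))    # prepend tau_0 = 0
    # Quantile values are evaluated at interval midpoints tau_hat, and the
    # Q-value is the width-weighted sum of quantile values.
    tau_hat = (taus_padded[:, 1:] + taus_padded[:, :-1]) / 2
    weights = taus_padded[:, 1:] - taus_padded[:, :-1]
    q_values = (weights.unsqueeze(-1) * qvn(phi, tau_hat)).sum(dim=1)
    print(q_values.shape)                # torch.Size([4, 6])
```

The key design point this sketch tries to capture is that the fractions themselves are learned per state rather than fixed uniformly (QR-DQN) or sampled randomly (IQN), so both axes of the quantile function are trainable.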