Robust and Data-efficient Q-learning by Composite Value-estimation

Published: 07 Jul 2022, Last Modified: 28 Feb 2023
Accepted by TMLR
Abstract: In the past few years, off-policy reinforcement learning methods have shown promising results in their application to robot control. Q-learning based methods, however, still suffer from poor data-efficiency and are susceptible to stochasticity or noise in the immediate reward, which limits their applicability to real-world problems. We alleviate this problem by proposing two novel off-policy Temporal-Difference formulations: (1) Truncated Q-functions, which represent the return of the first $n$ steps of a target-policy rollout with respect to the full action-value, and (2) Shifted Q-functions, which act as the farsighted return after this truncated rollout. This decomposition allows us to optimize both parts with individual learning rates, leading to the Composite Q-learning algorithm, which achieves a significant learning speedup and robustness to variance in the reward signal. We show the efficacy of Composite Q-learning in the tabular case and further employ Composite Q-learning within TD3. We compare Composite TD3 with TD3 and TD3($\Delta$), which we introduce as an off-policy variant of TD($\Delta$), and show that Composite TD3 significantly outperforms both in terms of data-efficiency on multiple simulated robot tasks, while Composite Q-learning remains robust to stochastic immediate rewards.
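To make the decomposition in the abstract concrete, below is a minimal tabular sketch of the composite value-estimation idea: the full action-value is split into a truncated part (the return of the first $n$ steps of a greedy rollout) and a shifted part (the discounted remainder after those $n$ steps), each updated with its own learning rate. The specific recursive targets, the table names `Q_trunc` and `Q_shift`, the horizon `n`, and both learning rates are illustrative assumptions derived from the abstract's description, not the paper's exact update rules (see the linked repository for the authors' implementation).

```python
import numpy as np

n_states, n_actions = 10, 2
gamma = 0.99
n = 3              # truncation horizon (assumed for illustration)
alpha_trunc = 0.5  # faster learning rate for the short-horizon part
alpha_shift = 0.1  # slower learning rate for the farsighted part

# Q_trunc[i] estimates the (i+1)-step truncated return of a greedy rollout;
# Q_shift[j] estimates the remainder of the return shifted by (j+1) steps.
Q_trunc = np.zeros((n, n_states, n_actions))
Q_shift = np.zeros((n, n_states, n_actions))

def update(s, a, r, s2):
    # Composite Q: n-step truncated return + n-step-shifted remainder.
    q_full = Q_trunc[n - 1] + Q_shift[n - 1]
    a2 = np.argmax(q_full[s2])  # greedy target action

    # Truncated chain: Q_trunc[0] regresses on the immediate reward;
    # Q_trunc[i] bootstraps from the next-shorter truncated estimate.
    Q_trunc[0, s, a] += alpha_trunc * (r - Q_trunc[0, s, a])
    for i in range(1, n):
        target = r + gamma * Q_trunc[i - 1, s2, a2]
        Q_trunc[i, s, a] += alpha_trunc * (target - Q_trunc[i, s, a])

    # Shifted chain: Q_shift[0] regresses on the discounted full value one
    # step ahead; Q_shift[j] bootstraps from the next-shorter shift.
    Q_shift[0, s, a] += alpha_shift * (gamma * q_full[s2, a2] - Q_shift[0, s, a])
    for j in range(1, n):
        target = gamma * Q_shift[j - 1, s2, a2]
        Q_shift[j, s, a] += alpha_shift * (target - Q_shift[j, s, a])
```

One plausible reading of the abstract's "individual learning rates" claim: the truncated estimators regress on short-horizon targets dominated by observed rewards, so they tolerate a large step size, while the slowly-changing shifted estimators absorb the long-horizon bootstrap, which is where noise in the immediate reward would otherwise compound.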
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: In accordance with the suggestion of the Action Editor, we added a comparison to two multi-step baselines to the appendix: (1) TD3 + n-step without importance correction and (2) TD3 with Model-based Value Expansion. We further added a link to a public GitHub repository that includes all relevant source code.
Code: https://github.com/NrLabFreiburg/composite-q-learning
Assigned Action Editor: ~Shixiang_Gu1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 61