Abstract: In the past few years, off-policy reinforcement learning methods have shown promising results in robot control. Q-learning-based methods, however, still suffer from poor data-efficiency and are susceptible to stochasticity or noise in the immediate reward, which limits their applicability to real-world problems. We alleviate this problem by proposing two novel off-policy Temporal-Difference formulations: (1) Truncated Q-functions, which represent the return for the first $n$ steps of a target-policy rollout with respect to the full action-value, and (2) Shifted Q-functions, which act as the farsighted return after this truncated rollout. This decomposition allows us to optimize both parts with individual learning rates, yielding a significant learning speedup and robustness to variance in the reward signal; we call the resulting method Composite Q-learning. We demonstrate the efficacy of Composite Q-learning in the tabular case and furthermore employ it within TD3. We compare Composite TD3 with TD3 and with TD3($\Delta$), which we introduce as an off-policy variant of TD($\Delta$). Moreover, we show that Composite TD3 significantly outperforms both TD3 and TD3($\Delta$) in terms of data-efficiency on multiple simulated robot tasks, and that Composite Q-learning is robust to stochastic immediate rewards.
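To make the decomposition in the abstract concrete, here is a minimal tabular sketch on a toy chain MDP. The specific update rules below are an assumption (one consistent reading of "truncated" and "shifted" parts, with `Q = tr[n] + sh[n]`), not the paper's verbatim algorithm; the MDP, variable names, and learning rates are all illustrative.

```python
import numpy as np

# Hypothetical 3-state chain MDP: states 0 and 1 are non-terminal, state 2 is
# terminal. Action 0 moves right with reward 1; action 1 stays with reward 0.
N_S, N_A, GAMMA, N = 3, 2, 0.9, 2      # N = truncation horizon n (assumed)
TERMINAL = 2
ALPHA_TR, ALPHA_SH = 0.5, 0.5          # individual learning rates per part

tr = np.zeros((N + 1, N_S, N_A))       # tr[i]: return of first i greedy steps
sh = np.zeros((N + 1, N_S, N_A))       # sh[i]: discounted value after i steps

def q_full(s):
    """Composite Q-value: truncated n-step part plus shifted remainder."""
    return tr[N, s] + sh[N, s]

def step(s, a):
    return (s + 1, 1.0) if a == 0 else (s, 0.0)

for _ in range(1000):                  # synchronous sweeps over all (s, a)
    for s in range(TERMINAL):
        for a in range(N_A):
            s2, r = step(s, a)
            done = s2 == TERMINAL
            a_star = int(np.argmax(q_full(s2)))   # greedy target action
            for i in range(1, N + 1):
                # truncated part: bootstrap on the next-shorter horizon
                boot_tr = 0.0 if (done or i == 1) else tr[i - 1, s2, a_star]
                tr[i, s, a] += ALPHA_TR * (r + GAMMA * boot_tr - tr[i, s, a])
                # shifted part: push the full value estimate one step deeper
                boot_sh = q_full(s2)[a_star] if i == 1 else sh[i - 1, s2, a_star]
                target_sh = 0.0 if done else GAMMA * boot_sh
                sh[i, s, a] += ALPHA_SH * (target_sh - sh[i, s, a])
```

On this toy problem the composite estimate `tr[N] + sh[N]` converges to the same fixed point as ordinary Q-learning, while the two parts can be trained with separate step sizes, which is the mechanism the abstract attributes the speedup and noise-robustness to.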
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: In accordance with the suggestion of the Action Editor, we added a comparison to two multi-step baselines to the appendix: (1) TD3 + n-step without importance correction and (2) TD3 with Model-based Value Expansion. We further added a link to a public GitHub repository that includes all relevant source code.
Assigned Action Editor: ~Shixiang_Gu1
Submission Number: 61