Iterated $Q$-Network: Beyond One-Step Bellman Updates in Deep Reinforcement Learning

TMLR Paper3550 Authors

24 Oct 2024 (modified: 24 Oct 2024) · Under review for TMLR · CC BY 4.0
Abstract: The vast majority of Reinforcement Learning methods are strongly affected by the computational effort and data requirements needed to obtain effective estimates of action-value functions, which in turn determine the quality of the overall performance and the sample efficiency of the learning procedure. Typically, action-value functions are estimated through an iterative scheme that alternates the application of an empirical approximation of the Bellman operator with a subsequent projection step onto a chosen function space. It has been observed that this scheme can potentially be generalized to carry out multiple iterations of the Bellman operator at once, benefiting the underlying learning algorithm. However, until now, it has been challenging to implement this idea effectively, especially in high-dimensional problems. In this paper, we introduce iterated $Q$-Network (i-QN), a novel, principled approach that enables multiple consecutive Bellman updates by learning a tailored sequence of action-value functions, where each serves as the target for the next. We show that i-QN is theoretically grounded and that it can be seamlessly used in value-based and actor-critic methods. We empirically demonstrate the advantages of i-QN on Atari $2600$ games and MuJoCo continuous control problems.
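To make the mechanism described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of the core idea: a chain of $K$ online $Q$-networks is trained jointly, with network $k$ regressing toward a one-step Bellman target built from the network preceding it in the chain, so that a single gradient step propagates $K$ consecutive Bellman updates. All names (`QNetwork`, `iqn_loss`) and design details (e.g., bootstrapping the first network from its own frozen copy) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the paper's code): a chain of K Q-networks
# where each network's Bellman target comes from the previous one.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QNetwork(nn.Module):
    """A small fully connected action-value network (hypothetical architecture)."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def iqn_loss(online_nets, target_nets, batch, gamma: float = 0.99):
    """Sum of TD losses over the chain: network k bootstraps from the frozen
    network k-1; network 0 bootstraps from its own frozen copy (assumption)."""
    obs, act, rew, next_obs, done = batch
    loss = torch.zeros(())
    for k, online in enumerate(online_nets):
        prev = target_nets[max(k - 1, 0)]
        with torch.no_grad():
            next_q = prev(next_obs).max(dim=1).values           # max_a' Q_{k-1}(s', a')
            target = rew + gamma * (1.0 - done) * next_q        # one-step Bellman target
        q = online(obs).gather(1, act.unsqueeze(1)).squeeze(1)  # Q_k(s, a)
        loss = loss + F.smooth_l1_loss(q, target)
    return loss
```

In a value-based setting, one could act greedily with respect to the last network in the chain and periodically synchronize the frozen copies with the online networks, as in standard deep $Q$-learning; these are usage assumptions of the sketch, not details stated in the abstract.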
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Pablo_Samuel_Castro1
Submission Number: 3550