Taming "data-hungry" reinforcement learning? Stability in continuous state-action spaces

Published: 25 Sept 2024, Last Modified: 06 Nov 2024 · NeurIPS 2024 poster · CC BY 4.0
Keywords: reinforcement learning, continuous control, stability analysis
TL;DR: We introduce an RL framework for continuous state-action spaces with faster convergence rates than prior analyses. Key to this are stability conditions on the Bellman operator and on occupation measures, which hold in many continuous-domain MDPs.
Abstract: We introduce a novel framework for analyzing reinforcement learning (RL) in continuous state-action spaces, and use it to prove fast rates of convergence in both off-line and on-line settings. Our analysis highlights two key stability properties, relating to how changes in value functions and/or policies affect the Bellman operator and occupation measures. We argue that these properties are satisfied in many continuous state-action Markov decision processes. Our analysis also offers fresh perspectives on the roles of pessimism and optimism in off-line and on-line RL.
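For orientation, a minimal background sketch of the object the abstract's stability properties concern: the Bellman optimality operator and its standard sup-norm γ-contraction. This is textbook material, not the paper's own conditions; the paper's stability properties refine how perturbations of value functions and policies propagate through this operator and through occupation measures, and their precise statements are in the paper itself.

```latex
% Illustrative background only (standard facts, not the paper's conditions).
% Bellman optimality operator for an MDP with reward r, transition kernel P,
% action set \mathcal{A}, and discount factor \gamma \in (0,1):
\[
  (\mathcal{T}Q)(s,a) \;=\; r(s,a) \;+\; \gamma\,
  \mathbb{E}_{s' \sim P(\cdot \mid s,a)}\!\Big[\max_{a' \in \mathcal{A}} Q(s',a')\Big].
\]
% Standard stability (contraction) in the sup-norm, for any Q_1, Q_2:
\[
  \|\mathcal{T}Q_1 - \mathcal{T}Q_2\|_\infty \;\le\; \gamma\,\|Q_1 - Q_2\|_\infty.
\]
```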
Supplementary Material: zip
Primary Area: Reinforcement learning
Submission Number: 15329