Recursive Reinforcement Learning

Published: 31 Oct 2022, Last Modified: 14 Jan 2023
Venue: NeurIPS 2022 (Accept)
Keywords: Recursive Markov Decision Processes, Reinforcement Learning, Probabilistic Pushdown Automata, Probabilistic Context-Free Grammars, Recursive State Machines, Branching Processes
TL;DR: This paper develops reinforcement learning algorithms for environments formalized as recursive Markov decision processes.
Abstract: Recursion is a fundamental paradigm for finitely describing potentially infinite objects. As state-of-the-art reinforcement learning (RL) algorithms cannot directly reason about recursion, they must rely on the practitioner's ingenuity in designing a suitable "flat" representation of the environment. The resulting manual feature constructions and approximations are cumbersome and error-prone; their lack of transparency hampers scalability. To overcome these challenges, we develop RL algorithms capable of computing optimal policies in environments described as a collection of Markov decision processes (MDPs) that can recursively invoke one another. Each constituent MDP is characterized by several entry and exit points that correspond to the input and output values of these invocations. These recursive MDPs (or RMDPs) are expressively equivalent to probabilistic pushdown systems (with the call stack playing the role of the pushdown stack) and can model probabilistic programs with recursive procedure calls. We introduce Recursive Q-learning, a model-free RL algorithm for RMDPs, and prove that it converges for finite, single-exit and deterministic multi-exit RMDPs under mild assumptions.
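
To make the setup concrete, below is a minimal sketch of a single-exit RMDP as a Python data structure, together with a flat tabular Q-learning loop run against it. All names here (Component, RMDPEnv, the toy component "C") are illustrative inventions rather than the paper's code or API, and the learner shown is ordinary Q-learning over local states, not the paper's Recursive Q-learning; it happens to be sound for this toy only because no reward is collected after a return, which hints at why the general case needs the paper's algorithm.

import random
from dataclasses import dataclass, field

@dataclass
class Component:
    """One constituent MDP: a single entry, a single exit, local
    transitions, and box nodes that invoke other components."""
    entry: str
    exit: str
    transitions: dict  # node -> action -> list of (prob, next node, reward)
    boxes: dict = field(default_factory=dict)  # box node -> (callee, return node)

class RMDPEnv:
    """Executes an RMDP with an explicit call stack: reaching a box node
    pushes a return frame and enters the callee; reaching an exit pops."""
    def __init__(self, components, main):
        self.components, self.main = components, main

    def _resolve(self):
        # Follow calls and exits until an actionable node is reached,
        # or until the top-level component exits (termination).
        while True:
            comp = self.components[self.comp]
            if self.node in comp.boxes:
                callee, ret = comp.boxes[self.node]
                self.stack.append((self.comp, ret))
                self.comp, self.node = callee, self.components[callee].entry
            elif self.node == comp.exit:
                if not self.stack:
                    return True
                self.comp, self.node = self.stack.pop()
            else:
                return False

    def reset(self):
        self.stack, self.comp = [], self.main
        self.node = self.components[self.main].entry
        done = self._resolve()
        return (self.comp, self.node), done

    def step(self, action):
        acc, r = 0.0, random.random()
        for p, nxt, rew in self.components[self.comp].transitions[self.node][action]:
            acc += p
            if r < acc:
                self.node = nxt
                done = self._resolve()
                return (self.comp, self.node), rew, done
        raise ValueError("transition probabilities must sum to 1")

# Toy RMDP: from the entry, the single action exits with reward 1 (p = 0.6)
# or first re-invokes the same component (p = 0.4), so the recursion depth
# is geometric and the process terminates almost surely.
C = Component(
    entry="in", exit="out",
    transitions={"in": {"a": [(0.6, "out", 1.0), (0.4, "call", 0.0)]}},
    boxes={"call": ("C", "out")},
)
env = RMDPEnv({"C": C}, main="C")

# Flat tabular Q-learning with undiscounted total reward, indexed by the
# local (component, node) state alone; again, this is only adequate here
# because no reward follows a return in this example.
Q, alpha, eps = {}, 0.1, 0.1

def q(s, a):
    return Q.get((s, a), 0.0)

for _ in range(3000):
    s, done = env.reset()
    while not done:
        acts = list(env.components[s[0]].transitions[s[1]])
        a = (random.choice(acts) if random.random() < eps
             else max(acts, key=lambda b: q(s, b)))
        s2, rew, done = env.step(a)
        target = rew if done else rew + max(
            q(s2, b) for b in env.components[s2[0]].transitions[s2[1]])
        Q[(s, a)] = (1 - alpha) * q(s, a) + alpha * target
        s = s2

print(round(q(("C", "in"), "a"), 2))  # approaches the true value, 1.0

The explicit stack in RMDPEnv is the call stack that, per the abstract, plays the role of the pushdown stack in the equivalence with probabilistic pushdown systems.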