Recurrent Natural Policy Gradient for POMDPs

Published: 01 Aug 2024, Last Modified: 09 Oct 2024 · EWRL17 · CC BY 4.0
Keywords: natural policy gradient, partially-observable Markov decision processes, partial observability, policy optimization, actor-critic, temporal difference learning
TL;DR: In this paper, we study a natural policy gradient method with recurrent neural networks for POMDPs, and establish finite-time performance guarantees.
Abstract: In this paper, we study a natural policy gradient method based on recurrent neural networks (RNNs) for partially observable Markov decision processes (POMDPs), where RNNs are used for both policy parameterization and policy evaluation to address the curse of dimensionality in reinforcement learning for POMDPs. We present finite-time and finite-width analyses for both the critic (recurrent temporal difference learning) and the corresponding recurrent natural policy gradient method in the near-initialization regime. Our analysis demonstrates the efficiency of RNNs for problems with short-term memory, with explicit bounds on the required network widths and sample complexity, and highlights the challenges that arise in the presence of long-term dependencies.
Submission Number: 149
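
As a rough illustration of the recurrent actor-critic scheme described in the abstract, the sketch below pairs a GRU critic trained with temporal-difference (TD) learning on observation histories with a GRU policy updated by a damped natural policy gradient. This is not the authors' implementation or analysis setup; the toy POMDP, network widths, damping constant, and step sizes are assumptions chosen purely for readability.

```python
# Minimal sketch (not the paper's code) of a recurrent actor-critic step:
# a GRU critic trained by recurrent TD(0) and a GRU policy updated with a
# damped natural policy gradient. Environment and hyperparameters are
# illustrative assumptions.
import torch

torch.manual_seed(0)
obs_dim, hid_dim, n_actions, gamma = 3, 16, 2, 0.95

class RNNPolicy(torch.nn.Module):
    """Maps an observation history to action logits via a GRU."""
    def __init__(self):
        super().__init__()
        self.rnn = torch.nn.GRU(obs_dim, hid_dim, batch_first=True)
        self.head = torch.nn.Linear(hid_dim, n_actions)

    def forward(self, obs_seq):                       # obs_seq: (T, obs_dim)
        h, _ = self.rnn(obs_seq.unsqueeze(0))         # (1, T, hid_dim)
        return self.head(h.squeeze(0))                # (T, n_actions)

class RNNCritic(torch.nn.Module):
    """Recurrent value estimate of the observation history."""
    def __init__(self):
        super().__init__()
        self.rnn = torch.nn.GRU(obs_dim, hid_dim, batch_first=True)
        self.head = torch.nn.Linear(hid_dim, 1)

    def forward(self, obs_seq):
        h, _ = self.rnn(obs_seq.unsqueeze(0))
        return self.head(h.squeeze(0)).squeeze(-1)    # (T,)

policy, critic = RNNPolicy(), RNNCritic()
critic_opt = torch.optim.SGD(critic.parameters(), lr=1e-2)

def rollout(T=30):
    """Toy POMDP: the reward depends on a hidden bit the agent never observes."""
    hidden_bit, obs, acts, rews = 0, [], [], []
    o = torch.zeros(obs_dim)
    for _ in range(T):
        obs.append(o)
        with torch.no_grad():
            logits = policy(torch.stack(obs))[-1]
        a = torch.distributions.Categorical(logits=logits).sample().item()
        acts.append(a)
        rews.append(1.0 if a == hidden_bit else 0.0)
        hidden_bit ^= 1                               # hidden state flips each step
        o = torch.randn(obs_dim) * 0.1                # noisy, uninformative observation
    return torch.stack(obs), torch.tensor(acts), torch.tensor(rews)

for it in range(5):
    obs, acts, rews = rollout()

    # Critic: recurrent semi-gradient TD(0) on the observation history.
    v = critic(obs)
    with torch.no_grad():
        targets = rews[:-1] + gamma * v[1:]
    td_loss = ((v[:-1] - targets) ** 2).mean()
    critic_opt.zero_grad(); td_loss.backward(); critic_opt.step()

    # Actor: natural policy gradient with a damped empirical Fisher matrix.
    with torch.no_grad():
        v = critic(obs)
        adv = rews[:-1] + gamma * v[1:] - v[:-1]      # TD errors as advantages
    logp = torch.distributions.Categorical(
        logits=policy(obs)[:-1]).log_prob(acts[:-1])

    params = list(policy.parameters())
    n_params = sum(p.numel() for p in params)
    grad, fisher = torch.zeros(n_params), torch.zeros(n_params, n_params)
    for t in range(logp.shape[0]):
        g = torch.autograd.grad(logp[t], params, retain_graph=True)
        g = torch.cat([x.flatten() for x in g])       # score function at step t
        grad += adv[t] * g
        fisher += torch.outer(g, g)
    grad, fisher = grad / logp.shape[0], fisher / logp.shape[0]

    # Natural gradient direction (F + lambda I)^{-1} g, then an ascent step.
    direction = torch.linalg.solve(fisher + 1e-3 * torch.eye(n_params), grad)
    with torch.no_grad():
        idx = 0
        for p in params:
            p += 0.05 * direction[idx:idx + p.numel()].view_as(p)
            idx += p.numel()
    print(f"iter {it}: return = {rews.sum():.0f}, td_loss = {td_loss.item():.3f}")
```

The explicit Fisher matrix is formed here only because the toy policy is small; for the large-width regime analyzed in the paper one would use an iterative solver or a Kronecker-factored approximation rather than a dense inverse.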