The Geometry of Memoryless Stochastic Policy Optimization in Infinite-Horizon POMDPs

Published: 28 Jan 2022, Last Modified: 13 Feb 2023
Venue: ICLR 2022 Poster
Keywords: POMDPs, Memoryless Policies, Critical points, State-action frequencies, Algebraic degree
Abstract: We consider the problem of finding the best memoryless stochastic policy for an infinite-horizon partially observable Markov decision process (POMDP) with finite state and action spaces, with respect to either the discounted or the mean reward criterion. We show that the (discounted) state-action frequencies and the expected cumulative reward are rational functions of the policy, whose degree is determined by the degree of partial observability. We then describe the optimization problem as a linear optimization problem in the space of feasible state-action frequencies, subject to polynomial constraints that we characterize explicitly. This allows us to address the combinatorial and geometric complexity of the optimization problem using recent tools from polynomial optimization. In particular, we demonstrate how the partial observability constraints can lead to multiple smooth and non-smooth local optimizers, and we estimate the number of critical points.
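To make the objects in the abstract concrete, the following is a minimal numerical sketch, not taken from the paper: the toy POMDP, its sizes, and all function names (e.g. `discounted_frequencies`, `random_stochastic`) are illustrative assumptions. It computes the discounted state-action frequencies of a memoryless observation-based policy pi(a|o) by solving the linear system for the discounted state occupancy; the matrix inverse in that step is the source of the rational dependence of the frequencies and the reward on the policy.

```python
import numpy as np

# Hypothetical toy sizes (not from the paper): 3 states, 2 observations, 2 actions.
nS, nO, nA = 3, 2, 2
gamma = 0.9
rng = np.random.default_rng(0)

def random_stochastic(shape):
    """Random array whose last axis is a probability distribution."""
    m = rng.random(shape)
    return m / m.sum(axis=-1, keepdims=True)

T = random_stochastic((nS, nA, nS))   # T[s, a, s']: transition kernel
beta = random_stochastic((nS, nO))    # beta[s, o]: observation kernel
r = rng.random((nS, nA))              # r[s, a]: instantaneous reward
mu = random_stochastic((nS,))         # mu[s]: initial state distribution

def discounted_frequencies(pi_obs):
    """Discounted state-action frequencies eta(s, a) of a memoryless policy pi(a|o).

    The effective state policy pi(a|s) = sum_o beta(o|s) pi(a|o) is linear in the
    observation policy; eta is a rational function of it via the matrix inverse below.
    """
    pi_state = beta @ pi_obs                      # pi_state[s, a] = pi(a|s)
    P = np.einsum('sa,sat->st', pi_state, T)      # P[s, s']: Markov kernel under pi
    # Discounted state occupancy: rho = (1 - gamma) (I - gamma P^T)^{-1} mu
    rho = (1 - gamma) * np.linalg.solve(np.eye(nS) - gamma * P.T, mu)
    return rho[:, None] * pi_state                # eta[s, a] = rho(s) pi(a|s)

pi_obs = random_stochastic((nO, nA))              # memoryless policy pi(a|o)
eta = discounted_frequencies(pi_obs)
print("eta sums to 1:", np.isclose(eta.sum(), 1.0))
print("expected discounted reward:", (eta * r).sum() / (1 - gamma))
```

The objective (eta * r).sum() is linear in eta, matching the abstract's reformulation: the nonlinearity of the problem is pushed entirely into the polynomial constraints that cut out the set of feasible state-action frequencies.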
One-sentence Summary: We provide an explicit description of the optimization problem for POMDPs with memoryless stochastic policies and derive bounds on the number of critical points that depend on the degree of observability.
Supplementary Material: zip