Learning Generalized Policy Automata for Relational Stochastic Shortest Path Problems

Published: 31 Oct 2022, Last Modified: 16 Jan 2023. NeurIPS 2022 (Accept).
Keywords: Generalization, Sequential Decision Making, Transfer Learning, Stochastic Shortest Path Problems, Relational Abstractions, Model-based Policy Learning
TL;DR: We present an approach that uses relational abstractions for few-shot learning of generalized policies for SSPs; these policies can be used to quickly solve larger SSPs containing more objects while guaranteeing completeness and hierarchical optimality.
Abstract: Several goal-oriented problems in the real world can be naturally expressed as Stochastic Shortest Path problems (SSPs). However, the computational complexity of solving SSPs makes finding solutions to even moderately sized problems intractable. State-of-the-art SSP solvers are unable to learn generalized solutions or policies that would solve multiple problem instances with different object names and/or quantities. This paper presents an approach for learning \emph{Generalized Policy Automata} (GPA): non-deterministic partial policies that can be used to catalyze the solution process. GPAs are learned using relational, feature-based abstractions, which makes them applicable to broad classes of related problems with different object names and quantities. Theoretical analysis of this approach shows that it guarantees completeness and hierarchical optimality. Empirical analysis shows that this approach effectively learns broadly applicable policy knowledge in a few-shot fashion and significantly outperforms state-of-the-art SSP solvers on test problems whose object counts are far greater than those used during training.
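To make the idea of a non-deterministic partial policy over relational abstractions concrete, the sketch below shows one way such a structure could restrict the actions an SSP solver needs to consider. This is an illustrative assumption-based sketch, not the paper's implementation: the class name, the feature-set abstraction, and the fallback rule (returning all applicable actions when the abstract state is not covered, so no solution is pruned away) are hypothetical choices made here for clarity.

```python
from typing import Callable, Dict, FrozenSet, Hashable, Set

# Hypothetical types: a concrete SSP state is any hashable object, an abstract
# state is a frozenset of relational feature literals, and actions are strings.
ConcreteState = Hashable
AbstractState = FrozenSet[str]
Action = str


class GeneralizedPolicyAutomaton:
    """Toy stand-in for a GPA: a non-deterministic partial policy mapping
    abstract (relational, feature-based) states to sets of permitted actions."""

    def __init__(self, edges: Dict[AbstractState, Set[Action]]):
        self.edges = edges

    def allowed_actions(
        self,
        state: ConcreteState,
        applicable: Set[Action],
        abstract: Callable[[ConcreteState], AbstractState],
    ) -> Set[Action]:
        """Restrict the applicable actions of a concrete state to those the
        GPA permits in its abstract state. Falling back to the full applicable
        set when the abstract state is uncovered (or the pruned set is empty)
        is an illustrative way to keep the restricted problem solvable."""
        permitted = self.edges.get(abstract(state))
        if permitted is None:
            return applicable
        pruned = applicable & permitted
        return pruned if pruned else applicable


if __name__ == "__main__":
    # Abstract states are sets of relational features with object identities
    # and counts abstracted away, so the same GPA applies to larger problems.
    gpa = GeneralizedPolicyAutomaton({
        frozenset({"holding-none", "exists-unstacked-block"}): {"pick-up"},
        frozenset({"holding-block"}): {"stack", "put-down"},
    })

    def abstract(state):  # hypothetical abstraction function
        return frozenset(state)

    concrete = {"holding-none", "exists-unstacked-block"}
    print(gpa.allowed_actions(concrete, {"pick-up", "no-op"}, abstract))
    # -> {'pick-up'}: the solver only needs to evaluate the permitted action.
```

Because the pruning falls back to the unrestricted action set whenever the GPA is silent, a complete underlying SSP solver stays complete; this mirrors, at a toy level, the completeness property claimed in the abstract.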
Supplementary Material: pdf