Learning Explainable and Better Performing Representations of POMDP Strategies

Published: 01 Jan 2024 · Last Modified: 19 Jul 2024 · TACAS (2) 2024 · License: CC BY-SA 4.0
Abstract: Strategies for partially observable Markov decision processes (POMDPs) typically require memory. One way to represent this memory is via automata. We present a method to learn an automaton representation of a strategy using a modification of the \(L^*\) algorithm. Compared to the tabular representation of a strategy, the resulting automaton is dramatically smaller and thus also more explainable. Moreover, in the learning process, our heuristics may even improve the strategy's performance. We compare our approach to an existing approach that synthesizes an automaton directly from the POMDP, thereby solving it. Our experiments show that our approach can lead to significant improvements in the size and quality of the resulting strategy representations.
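
To give a flavor of the kind of \(L^*\)-style automaton learning the abstract refers to (this is not the paper's algorithm), the sketch below learns a small Mealy machine from a toy memory-based strategy. The strategy acts as the teacher: membership queries ask which action it picks after an observation sequence, and a bounded-exhaustive comparison stands in for a real equivalence oracle. All names, the observation alphabet, and the toy strategy are illustrative assumptions.

```python
# Minimal sketch (illustrative only): an L*-style loop that learns a Mealy-machine
# representation of a memory-based strategy from membership queries.
from itertools import product

OBS = ["a", "b"]  # observation alphabet (hypothetical)

def strategy(word):
    """Toy tabular strategy used as the teacher: the chosen action depends on
    the parity of 'b' observations seen so far, so it needs one bit of memory."""
    return "left" if word.count("b") % 2 == 0 else "right"

def row(prefix, suffixes):
    """Observation-table row: the strategy's outputs for prefix + each suffix."""
    return tuple(strategy(prefix + e) for e in suffixes)

def learn_mealy(max_len=6):
    """L*-style learning (counterexample suffixes added to E) with a
    bounded-exhaustive equivalence check standing in for a real oracle."""
    prefixes, suffixes = [""], list(OBS)  # access strings S, distinguishing suffixes E
    while True:
        rows = {p: row(p, suffixes) for p in prefixes}
        # Closedness: every one-step extension of S must match some row of S.
        unclosed = next((p + o for p, o in product(prefixes, OBS)
                         if row(p + o, suffixes) not in rows.values()), None)
        if unclosed is not None:
            prefixes.append(unclosed)
            continue
        # Hypothesis automaton: states are the distinct rows, transitions follow
        # one-step extensions, outputs are the strategy's chosen actions.
        state_of = {r: i for i, r in enumerate(dict.fromkeys(rows.values()))}
        trans = {(state_of[rows[p]], o): (state_of[row(p + o, suffixes)],
                                          strategy(p + o))
                 for p in prefixes for o in OBS}
        # Equivalence query (approximated): compare on all words up to max_len.
        cex = None
        for n in range(1, max_len + 1):
            for w in map("".join, product(OBS, repeat=n)):
                state, out = state_of[rows[""]], None
                for o in w:
                    state, out = trans[(state, o)]
                if out != strategy(w):
                    cex = w
                    break
            if cex:
                break
        if cex is None:
            return state_of, trans  # automaton representation of the strategy
        # Refine: add all suffixes of the counterexample to E.
        suffixes += [cex[i:] for i in range(len(cex)) if cex[i:] not in suffixes]

if __name__ == "__main__":
    states, transitions = learn_mealy()
    print(f"learned {len(states)} states")
    for (s, o), (t, a) in sorted(transitions.items()):
        print(f"  state {s} --obs {o} / act {a}--> state {t}")
```

For this toy teacher the loop converges to a two-state automaton tracking the parity of observed 'b's, which mirrors the abstract's point: an automaton over a handful of memory states can stand in for a much larger tabular strategy over observation histories.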