Keywords: Planning, Reinforcement Learning, Generalized Planning, Graph Neural Networks
TL;DR: We propose a redefinition of the RL action space, hoping to inspire future work that aligns it more closely with the nature of Planning.
Abstract: Approaches that apply Reinforcement Learning (RL) to the Planning problem typically assume a one-to-one mapping between planning operators and RL actions. In this paper, we introduce the concept of a meta-operator, a novel operator resulting from the simultaneous application of multiple planning operators, and we show that including meta-operators in the RL action space yields superior performance compared to purely sequential models. We evaluate these models in domains where satisfactory outcomes had not previously been achieved, and we provide a thorough analysis of how the incorporation of meta-operators, and more generally the enrichment of the RL action space, enhances existing architectures. Our main objective is to set a precedent in the Planning and Reinforcement Learning community by proposing new approaches that redefine the RL action space in a manner more closely aligned with the Planning perspective.
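To make the notion of a meta-operator concrete, the sketch below models it in a STRIPS-like setting: several grounded operators are bundled into a single RL action and applied at once, provided they are mutually non-interfering. The `Operator` class, the non-interference check, and the domain literals are all illustrative assumptions, not the paper's exact formalism.

```python
from itertools import combinations

class Operator:
    """Hypothetical STRIPS-style operator with preconditions,
    add effects, and delete effects (illustrative structure)."""
    def __init__(self, name, pre, add, delete):
        self.name = name
        self.pre, self.add, self.delete = set(pre), set(add), set(delete)

def compatible(ops):
    """One plausible non-interference condition: no operator deletes
    a precondition or an add effect of another in the same bundle."""
    for a, b in combinations(ops, 2):
        if a.delete & (b.pre | b.add) or b.delete & (a.pre | a.add):
            return False
    return True

def apply_meta(state, ops):
    """Apply a meta-operator: every component's preconditions must hold
    in the current state, and the effects are applied jointly."""
    assert all(op.pre <= state for op in ops), "precondition violated"
    assert compatible(ops), "operators interfere; not a valid meta-operator"
    result = set(state)
    for op in ops:
        result -= op.delete
        result |= op.add
    return result

# Toy example: two independent moves become one meta-operator (one RL action)
move_a = Operator("move-a-1-2", {"at-a-1"}, {"at-a-2"}, {"at-a-1"})
move_b = Operator("move-b-3-4", {"at-b-3"}, {"at-b-4"}, {"at-b-3"})
state = {"at-a-1", "at-b-3"}
print(sorted(apply_meta(state, [move_a, move_b])))  # ['at-a-2', 'at-b-4']
```

Under this reading, enriching the RL action space means adding such compatible bundles as extra actions alongside the original single operators, so the agent can advance several independent subgoals per decision step.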
Submission Number: 4