Learning Neuro-Symbolic World Models with Logical Neural Networks
Keywords: Model learning for planning
TL;DR: This paper is about learning planning models with neural networks that are purpose-built to handle logical models (Logical Neural Networks).
Abstract: Model-based reinforcement learning has shown strong results when deep neural networks are used to learn world models. However, these results do not transfer directly to many real-world problems that require explainable models and where training data is limited. A more suitable setting that addresses these issues is relational model-based reinforcement learning, in which a logical world model is learned. In this setting, we propose to use Logical Neural Networks (LNNs), which enable the scalable learning of logical rules. Our method builds on LNNs by creating a framework for learning lifted logical operator models, which is used together with object-centric perception modules and AI planners that reason over the learned logical world model. We first evaluate our agent by comparing the LNN-learned models against the existing handcrafted models available in the PDDLGym environments. In these tests, our agent performs optimally and is on par with planning on expert-crafted models. We then test our agent in a text-based game domain, TextWorld-Commonsense, where expert-crafted models are not available. In this domain, deep reinforcement learning agents are the state of the art, and we show that our agent significantly outperforms all existing agents.
Submission Number: 25