Keywords: Neuro-Symbolic, Text-Based Game, Model-based Reinforcement Learning
TL;DR: We describe a framework using semantic parsing and neural Inductive Logic Programming for text-based games, with a focus on learning logical world models.
Abstract: Text-based games serve as important benchmarks for agents with natural language capabilities. To enable such agents, we are interested in the problem of learning useful world models. Our assumption is that such a world model is best represented by a logical form that underlies the structure of these games. We propose to tackle this problem by leveraging the expressivity of recent neuro-symbolic architectures, specifically Logical Neural Networks (LNN). Here, we describe a method that learns neuro-symbolic world models on the TextWorld-Commonsense set of games. We then show that planning on this learned world model yields optimal actions in the game world.
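To make the idea of planning over a logical world model concrete, the following is a minimal, hypothetical sketch (not the paper's LNN/ILP implementation): the game state is a set of ground facts, actions are rules with preconditions and effects, and planning is a breadth-first search for a fact that satisfies the goal. All predicates, actions, and the toy scenario are illustrative assumptions.

```python
from collections import deque

# Hypothetical toy world model: ground facts for a tiny TextWorld-like state.
initial_state = frozenset({
    ("at", "player", "kitchen"),
    ("at", "apple", "kitchen"),
    ("closed", "fridge"),
})

goal = ("in", "apple", "fridge")

# Each action: (name, preconditions, facts added, facts removed).
actions = [
    ("open fridge",
     {("closed", "fridge")},
     {("open", "fridge")},
     {("closed", "fridge")}),
    ("take apple",
     {("at", "apple", "kitchen"), ("at", "player", "kitchen")},
     {("holding", "apple")},
     {("at", "apple", "kitchen")}),
    ("put apple in fridge",
     {("holding", "apple"), ("open", "fridge")},
     {("in", "apple", "fridge")},
     {("holding", "apple")}),
]

def plan(state, goal):
    """Breadth-first search for a shortest action sequence that reaches the goal fact."""
    frontier = deque([(state, [])])
    seen = {state}
    while frontier:
        current, path = frontier.popleft()
        if goal in current:
            return path
        for name, pre, add, rem in actions:
            if pre <= current:  # all preconditions hold in the current state
                nxt = frozenset((current - rem) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None

print(plan(initial_state, goal))
# e.g. ['open fridge', 'take apple', 'put apple in fridge'] (any shortest plan)
```

In the paper's setting, the fact set and action rules would be learned by the neuro-symbolic model rather than hand-written as above.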
Archival: Non-Archival