Specifying Goals to Deep Neural Networks with Answer Set Programming

Published: 12 Feb 2024, Last Modified: 06 Mar 2024, ICAPS 2024, CC BY 4.0
Keywords: Learning, answer set programming
TL;DR: Train DNN heuristic to estimate distance between state and set of goal states, use ASP to specify goals
Abstract: Recently, methods such as DeepCubeA have used deep reinforcement learning to learn domain-specific heuristic functions in a largely domain-independent fashion. However, such methods either assume a predetermined goal or assume that goals will be given as fully specified states, so a set of goal states cannot be specified for a learned heuristic function, whereas the Planning Domain Definition Language (PDDL) allows goal states to be specified using ground atoms in first-order logic. To address this issue, we introduce a method of training a heuristic function that estimates the distance between a given state and a set of goal states represented as a set of ground atoms in first-order logic. Furthermore, to allow for more expressive goal specification, we introduce techniques for specifying goals as answer set programs and using answer set solvers to discover sets of ground atoms that meet the specified goals. In our experiments with the Rubik's cube, sliding-tile puzzles, and Sokoban, we show that we can specify and reach different goals without any need to re-train the heuristic function.
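To illustrate the goal-specification step described in the abstract, the sketch below shows how a partially specified 8-puzzle goal might be written as an answer set program and solved with the clingo Python API. This is a minimal example under assumed names, not the paper's actual encoding: the predicate `at/3` and the helper `enumerate_goal_states` are hypothetical choices for illustration. Each answer set is one fully specified goal state, i.e. a set of ground atoms that a learned heuristic of the kind described could take as input.

```python
import clingo

# Hypothetical ASP encoding of a partially specified 8-puzzle goal
# (predicate at/3 and this encoding are illustrative, not the paper's own).
GOAL_PROGRAM = """
row(0..2). col(0..2).
tile(0..8).                                      % 0 denotes the blank
1 { at(T,R,C) : tile(T) } 1 :- row(R), col(C).   % each cell holds exactly one tile
1 { at(T,R,C) : row(R), col(C) } 1 :- tile(T).   % each tile occupies exactly one cell
:- not at(1,0,0).                                % goal: tiles 1, 2, 3 across the top row
:- not at(2,0,1).
:- not at(3,0,2).
#show at/3.
"""

def enumerate_goal_states(program: str, limit: int = 3):
    """Return up to `limit` answer sets, each a set of ground at/3 atoms
    describing one concrete goal state."""
    ctl = clingo.Control([str(limit)])   # positional argument bounds the number of models
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    goal_states = []
    ctl.solve(on_model=lambda m: goal_states.append(
        {str(atom) for atom in m.symbols(shown=True)}))
    return goal_states

if __name__ == "__main__":
    for i, atoms in enumerate(enumerate_goal_states(GOAL_PROGRAM)):
        print(f"goal state {i}: {sorted(atoms)}")
```

In this sketch the answer set solver enumerates complete tile-to-cell assignments consistent with the partial goal; each resulting set of ground atoms would then be passed, alongside the current state, to the trained heuristic to estimate the distance to the nearest goal state.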
Primary Keywords: Learning
Category: Long
Student: No
Supplementary Material: zip
Submission Number: 100