Optimize Planning Heuristics to Rank, not to Estimate Cost-to-Goal

Published: 21 Sept 2023, Last Modified: 02 Nov 2023 | NeurIPS 2023 poster
Keywords: Learning heuristic functions, deep learning, imitation learning, planning, A*, best-first search
TL;DR: We propose a theoretically sound loss function designed to minimize the number of expanded states in best-first search with a merit function. The advantage is demonstrated in an extensive experimental comparison to SOTA.
Abstract: In imitation learning for planning, parameters of heuristic functions are optimized against a set of solved problem instances. This work revisits the necessary and sufficient conditions for strictly optimally efficient heuristics for forward search algorithms, mainly A* and greedy best-first search, that expand only the states on the returned optimal path. It then proposes a family of ranking-based loss functions tailored to a given variant of the forward search algorithm. Furthermore, from a learning theory point of view, it discusses why optimizing the cost-to-goal h* is unnecessarily difficult. The experimental comparison on a diverse set of problems unequivocally supports the derived theory.
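To illustrate the ranking idea described in the abstract, the sketch below shows a pairwise hinge loss that only asks the learned heuristic to order on-path states before their off-path siblings, rather than regressing to the cost-to-goal h*. This is a minimal, hedged sketch, not the paper's code: the names (HeuristicNet, rank_loss, margin) and the MLP architecture are assumptions, and the pair-mining step from solved instances is only stubbed with random tensors.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a pairwise ranking loss for a learned heuristic used
# by greedy best-first search. Names and architecture are illustrative only.

class HeuristicNet(nn.Module):
    """Small MLP mapping a fixed-size state encoding to a scalar heuristic value."""
    def __init__(self, state_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        return self.net(states).squeeze(-1)

def rank_loss(h: HeuristicNet,
              on_path: torch.Tensor,
              off_path: torch.Tensor,
              margin: float = 1.0) -> torch.Tensor:
    """Pairwise hinge loss: each on-path state should receive a strictly smaller
    heuristic value than its paired off-path state, so best-first search expands
    the on-path state first. Only the ordering matters, not the absolute values."""
    h_on = h(on_path)    # states on the known optimal plan of a solved instance
    h_off = h(off_path)  # sibling states that should remain unexpanded
    return torch.clamp(h_on - h_off + margin, min=0.0).mean()

# Usage: pairs would be mined from solved training instances; one step shown here.
if __name__ == "__main__":
    torch.manual_seed(0)
    model = HeuristicNet(state_dim=16)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    on_path = torch.randn(32, 16)   # placeholder encodings of on-path states
    off_path = torch.randn(32, 16)  # placeholder encodings of off-path states
    loss = rank_loss(model, on_path, off_path)
    loss.backward()
    opt.step()
    print(f"ranking loss: {loss.item():.4f}")
```

In contrast to an L2 regression onto h*, such a ranking objective only needs the relative order of states in the open list to be correct, which is the weaker condition the paper argues is sufficient for efficient search.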
Supplementary Material: pdf
Submission Number: 4434