The (Un)Scalability of Heuristic Approximators for NP-Hard Search Problems

Published: 06 Dec 2022, Last Modified: 10 Nov 2024, ICBINB poster, Readers: Everyone
Keywords: Heuristic Search, Machine Learning and Search, Combinatorial Optimization
TL;DR: Our paper provides a theoretical proof that inferring from a sufficiently accurate neural-network-based heuristic function approximator is NP-hard, and our claim is supported by empirical results in three domains.
Abstract: The A* algorithm is commonly used to solve NP-hard combinatorial optimization problems. When provided with a completely informed heuristic function, A* solves many NP-hard minimum-cost path problems in time polynomial in the branching factor and the number of edges in a minimum-cost path. Approximating their completely informed heuristic functions with high precision is thus itself NP-hard: otherwise, combining a polynomial-time approximator with A* would solve these NP-hard problems in polynomial time. We therefore examine recent publications that propose using neural networks for this purpose. We support our claim that these approaches do not scale to large instance sizes both theoretically and experimentally. Our first experimental results for three representative NP-hard minimum-cost path problems suggest that using neural networks to approximate completely informed heuristic functions with high precision might result in network sizes that scale exponentially in the instance size. The research community might thus benefit from investigating other ways of integrating heuristic search with machine learning.
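
As background (a minimal sketch, not code from the paper): A* orders its frontier by f(n) = g(n) + h(n), and when h equals the completely informed heuristic h*, every expanded node lies on a minimum-cost path (with suitable tie-breaking), which is what makes the abstract's polynomial-time claim possible. The graph, heuristic values, and function names below are illustrative assumptions, not the paper's benchmark domains.

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search. `neighbors(n)` yields (successor, edge_cost) pairs;
    `h(n)` estimates the remaining cost from n to `goal` (assumed consistent).
    Returns the cost of a minimum-cost path, or None if goal is unreachable."""
    g = {start: 0}                  # best known cost-to-come
    frontier = [(h(start), start)]  # priority queue ordered by f = g + h
    closed = set()
    while frontier:
        f, node = heapq.heappop(frontier)
        if node == goal:
            return g[node]
        if node in closed:          # skip stale queue entries
            continue
        closed.add(node)
        for succ, cost in neighbors(node):
            new_g = g[node] + cost
            if new_g < g.get(succ, float("inf")):
                g[succ] = new_g
                heapq.heappush(frontier, (new_g + h(succ), succ))
    return None

# Toy example (hypothetical graph): with the perfect heuristic h*,
# A* expands only the nodes s, a, g on the minimum-cost path.
graph = {"s": [("a", 1), ("b", 4)], "a": [("g", 2)], "b": [("g", 1)], "g": []}
perfect_h = {"s": 3, "a": 2, "b": 1, "g": 0}  # h* values for this graph
print(astar("s", "g", lambda n: graph[n], perfect_h.get))  # -> 3
```

With h ≡ 0 the same code degenerates to Dijkstra's algorithm and may expand the whole graph; the polynomial-time behavior hinges entirely on the precision of h, which is where the paper's hardness argument applies.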
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/the-scalability-of-heuristic-approximators/code)