The (Un)Scalability of Informed Heuristic Function Estimation in NP-Hard Search Problems

Published: 19 Nov 2023, Last Modified: 19 Nov 2023. Accepted by TMLR.
Abstract: The A* algorithm is commonly used to solve NP-hard combinatorial optimization problems. When provided with a completely informed heuristic function, A* solves such problems with time complexity that is polynomial in the solution cost and branching factor. In light of this fact, we examine a line of recent publications that propose fitting deep neural networks to the completely informed heuristic function. We argue that these works suffer from inherent scalability limitations: under the assumption NP $\not\subseteq$ P/poly, such approaches yield either (a) network sizes that scale super-polynomially in the instance size or (b) network accuracy that degrades as the instance size grows. Complementing our theoretical claims, we provide experimental results for three representative NP-hard search problems. The results suggest that fitting deep neural networks to informed heuristic functions requires network sizes that grow quickly with the problem instance size. We conclude by suggesting that the research community should focus on scalable methods for integrating heuristic search with machine learning, as opposed to methods relying on informed heuristic estimation.
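The abstract's premise can be illustrated concretely: with the completely informed heuristic h* (the exact cost-to-go), A* expands essentially only the nodes on an optimal path, whereas an uninformed heuristic (h = 0, i.e., Dijkstra's algorithm) expands far more. Below is a minimal, self-contained sketch on a hypothetical toy graph (the graph, node names, and edge weights are illustrative assumptions, not from the paper); h* is computed exactly by running Dijkstra backward from the goal.

```python
import heapq

def dijkstra(graph, source):
    # Exact shortest-path distance from `source` to every node.
    # On an undirected graph, running this from the goal yields h*(n).
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def astar(graph, start, goal, h):
    # Standard A*; returns (optimal cost, number of node expansions).
    g = {start: 0}
    pq = [(h(start), start)]
    closed = set()
    expansions = 0
    while pq:
        f, u = heapq.heappop(pq)
        if u in closed:
            continue
        closed.add(u)
        expansions += 1
        if u == goal:
            return g[u], expansions
        for v, w in graph.get(u, []):
            ng = g[u] + w
            if ng < g.get(v, float("inf")):
                g[v] = ng
                heapq.heappush(pq, (ng + h(v), v))
    return None, expansions

# Hypothetical toy graph: a short optimal route s-a-b-g and a long detour.
edges = [("s", "a", 1), ("a", "b", 1), ("b", "g", 1),
         ("s", "c", 1), ("c", "d", 1), ("d", "e", 1), ("e", "g", 10)]
graph = {}
for u, v, w in edges:
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

h_star = dijkstra(graph, "g")  # completely informed heuristic

cost_informed, exp_informed = astar(graph, "s", "g", lambda n: h_star[n])
cost_blind, exp_blind = astar(graph, "s", "g", lambda n: 0)

print(cost_informed, exp_informed)  # expands only s, a, b, g
print(cost_blind, exp_blind)        # expands the detour nodes too
```

With h*, only the four nodes on the optimal path are expanded; with h = 0 the search also expands the detour, which is the gap the surveyed works try to close by learning h* with a neural network.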
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Updated based on all reviewers' and the action editor's comments; see the discussion for details. This revision also includes a fix to the OpenReview link.
Video: https://www.youtube.com/watch?v=jc4YN-Nt1RU
Code: https://github.com/Pi-Star-Lab/unscalable-heuristic-approximator
Supplementary Material: zip
Assigned Action Editor: ~Xi_Lin2
Submission Number: 1504