Abstract: Neural combinatorial optimization (NCO) approaches rely on neural predictions of uncertain reliability for decision making. We identify two crucial errors in the inference stage: stepwise optimal error (SOE) and cumulative optimal error (COE), where SOE is caused by sub-optimal stepwise decisions and accumulates into COE. In this paper, we present formal definitions of SOE and COE, and we demonstrate that inaccurate neural predictions exist even in state-of-the-art NCO models; these inaccuracies mislead inference strategies into making sub-optimal decisions, thereby increasing the COEs of the generated solutions. As an approximation method, an NCO model can hardly eliminate these inaccuracies entirely through its learning process. Therefore, we resort to enhancing the inference strategy and propose a novel comparison-based beam search (CBS) strategy that utilizes a comparison operation to safely prune partial solutions with higher COEs. CBS regards neural predictions as a soft criterion for selecting candidate decisions, thus expanding the search tree dynamically. Furthermore, CBS incorporates POMO [16] for starting-node selection. Experimental results demonstrate its generality across various NCO models, as well as its superiority: it outperforms the well-known benchmark LKH3 solver with gaps of 0.11% vs. 0.43% on the TSPLIB dataset, and 3.31% vs. 4.04% on the CVRPLIB dataset.
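The idea of treating a (possibly noisy) neural score as a soft ranking criterion while pruning only clearly dominated partial solutions can be sketched in a few lines. The sketch below is illustrative only, assuming generic `expand`, `score`, and `compare` interfaces; it is not the paper's actual CBS algorithm:

```python
def comparison_beam_search(start, expand, score, compare, width, steps):
    """Illustrative sketch of a comparison-based beam search (hypothetical
    interface, not the paper's CBS implementation).

    score() plays the role of the neural prediction, used as a soft
    ranking criterion; compare(a, b) returns True only when `a` is
    confidently better than `b`, so pruning `b` is considered safe.
    """
    beam = [start]
    for _ in range(steps):
        # Expand every partial solution currently in the beam.
        cands = [c for p in beam for c in expand(p)]
        # Soft criterion: rank candidates by the (possibly noisy) score.
        cands.sort(key=score, reverse=True)
        best = cands[0]
        # Safe pruning: drop only candidates the comparison clearly rules out.
        kept = [c for c in cands if c is best or not compare(best, c)]
        beam = kept[:width]
    return max(beam, key=score)


# Toy usage: build a 3-bit string maximizing the number of 1s.
count_ones = lambda s: s.count("1")
result = comparison_beam_search(
    start="",
    expand=lambda s: [s + "0", s + "1"],
    score=count_ones,
    compare=lambda a, b: count_ones(a) - count_ones(b) > 1,  # prune clear losers
    width=2,
    steps=3,
)
```

Because `compare` only prunes candidates that lose decisively, borderline partial solutions survive even when the soft score ranks them lower, which is the intuition behind pruning safely under unreliable predictions.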
External IDs: dblp:conf/iconip/ChenTZHL25