Keywords: ML and Optimization; Combinatorial algorithms; Decision-focused learning; Implicit differentiation by perturbation; Regularization
TL;DR: This article introduces cost regularization to address an issue in decision-focused learning, identified by establishing a link between perturbation-based approaches and the notion of solution stability from the combinatorial optimization literature.
Abstract: Decision-focused learning is an emerging paradigm that integrates predictive modeling and combinatorial optimization by training models to directly improve decision quality rather than prediction accuracy alone. Differentiating through combinatorial optimization problems is a central challenge, and recent approaches tackle it by introducing perturbation-based approximations that enable end-to-end training. In this work, we focus on estimating the objective function coefficients of a combinatorial optimization problem. We analyze how the effectiveness of perturbation-based techniques depends on the intensity of the perturbations, by establishing a theoretical link to the notion of solution stability in combinatorial optimization. Our study demonstrates that fluctuations in perturbation intensity and solution stability can lead to ineffective training. We propose to address this issue by introducing a regularization of the estimated cost vectors, which improves the robustness and reliability of the learning process. Extensive experiments on established benchmarks show that this regularization consistently improves performance, confirming its practical benefit and general applicability.
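To make the setup concrete, here is a minimal, illustrative sketch (not taken from the submission) of a perturbation-based gradient for end-to-end training, combined with a cost regularization term. It assumes a generic linear combinatorial solver and uses a Monte Carlo perturbed Fenchel-Young-style gradient; the L2 penalty on the predicted costs, the names, and the hyperparameters `lam` and `sigma` are all assumptions, one plausible instantiation of the regularization the abstract describes.

```python
import numpy as np

def perturbed_fy_grad(theta, y_true, solver, sigma=1.0, n_samples=64, rng=None):
    """Monte Carlo gradient of a perturbed Fenchel-Young-style loss:
    E[solver(theta + sigma * Z)] - y_true, with Z a standard Gaussian.
    `sigma` is the perturbation intensity discussed in the abstract."""
    rng = np.random.default_rng() if rng is None else rng
    y_bar = np.mean(
        [solver(theta + sigma * rng.standard_normal(theta.shape))
         for _ in range(n_samples)],
        axis=0,
    )
    return y_bar - y_true

def regularized_grad(theta, y_true, solver, lam=0.1, **kw):
    """Adds the gradient of an L2 penalty lam * ||theta||^2 on the estimated
    cost vector (an assumed form of the paper's cost regularization): the
    penalty keeps the scale of theta commensurate with the fixed sigma."""
    return perturbed_fy_grad(theta, y_true, solver, **kw) + 2.0 * lam * theta

# Toy usage: the "solver" picks the single highest-scoring item (a trivial
# combinatorial problem), and we take one SGD step on the regularized loss.
solver = lambda c: np.eye(len(c))[np.argmax(c)]
theta = np.array([0.2, 1.5, -0.3])
y_true = np.array([0.0, 1.0, 0.0])
grad = regularized_grad(theta, y_true, solver, lam=0.05, sigma=1.0, n_samples=256)
theta -= 0.1 * grad
```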
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 5865