Accounting for Heterogeneous Parameters in Decision-Focused Learning

02 Mar 2026 (modified: 21 Mar 2026) · Under review for TMLR · CC BY 4.0
Abstract: Decision-focused learning (DFL) is a recent machine learning paradigm aimed at tackling predict-then-optimize problems, where the task is to predict the parameter values of a parametric optimization problem from features. Instead of maximizing predictive accuracy, DFL maximizes downstream decision quality by training the model to avoid precisely those errors that most negatively impact decision-making. In this work, we systematically investigate an understudied aspect of DFL: that prediction errors in different parameters may affect downstream decision quality differently, and that the conventional model architecture cannot account for these differences. We first formalize this issue and provide a theoretical characterization of when it arises. We then show that significantly better decision quality can often be achieved by equipping models with the ability to learn parameter-specific predictive mappings – even when the true underlying mappings are identical. To this end, we investigate three architectural alterations to the predictive model, and propose a data augmentation scheme to enhance data efficiency. We extensively evaluate the impact of these changes across several dimensions, using linear and nonlinear predictive models, and optimization problems of different complexities. Our findings show that significant performance gains can be realized through such architectural alterations and the data augmentation scheme, across different problem types.
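The core premise of DFL – that prediction errors should be judged by their effect on downstream decisions, not by their magnitude – can be illustrated with a toy example. The sketch below is not from the submission; it uses a hypothetical `solve`/`regret` pair on a simple top-k selection problem to show how a large prediction error can have zero decision cost while a small one changes the decision.

```python
import numpy as np

def solve(values, k=2):
    # Toy parametric optimization problem: pick the k items
    # with the highest (predicted or true) value.
    return np.argsort(values)[-k:]

def regret(true_values, pred_values, k=2):
    # Decision regret: value of the best decision under the true
    # parameters minus the true value of the decision taken when
    # optimizing over the predicted parameters.
    best = true_values[solve(true_values, k)].sum()
    achieved = true_values[solve(pred_values, k)].sum()
    return best - achieved

true_v = np.array([5.0, 1.0, 4.0, 2.0])

# Huge error on an item that never enters the solution: zero regret.
pred_a = np.array([5.0, 1.0, 4.0, -9.0])
# Small errors that flip the ranking: the decision changes.
pred_b = np.array([3.9, 1.0, 4.0, 4.1])

print(regret(true_v, pred_a))  # 0.0
print(regret(true_v, pred_b))  # 3.0
```

Note that `pred_a` has a far larger squared prediction error than `pred_b`, yet incurs no regret; a DFL loss would penalize `pred_b` more, which an accuracy-focused loss cannot.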
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Jasper_C.H._Lee1
Submission Number: 7729