Keywords: contextual stochastic programming, decision-focused learning, differentiable optimization, log-barrier methods, scenario generation
TL;DR: We apply decision-focused learning to train a neural network that maps contexts to scenarios in a contextual two-stage stochastic linear program, using log-barrier regularization to enable efficient gradient-based training.
Abstract: We introduce a decision-focused scenario generation framework for contextual
two-stage stochastic linear programs that bypasses explicit conditional
distribution modeling.
A neural generator maps a context $x$ to a fixed-size set of scenarios
$\{\xi_s(x)\}_{s=1}^S$.
For each generated collection we compute a first-stage decision by solving a
single log-barrier regularized deterministic equivalent whose KKT system yields
closed-form, efficiently computable derivatives via implicit differentiation.
The network is trained end-to-end to minimize the true (unregularized)
downstream cost evaluated on observed data, avoiding auxiliary value-function
surrogates, bi-level heuristics, or differentiation through generic LP solvers.
Unlike single-scenario methods, our approach natively learns multi-scenario
representations; unlike distribution-learning pipelines, it scales without
requiring density estimation in high dimension.
We detail the barrier formulation, the analytic gradient structure with respect
to second-stage data, and the resulting computational trade-offs; simplified
sketches of both appear below.
Preliminary experiments on synthetic contextual instances illustrate that the
method can rival the current state of the art, even when trained on small
amounts of data.
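To make the barrier construction concrete, here is a minimal sketch in our own notation (not necessarily the paper's exact formulation): collect the first-stage decision $y$ and the recourse variables $z_s$ into a single vector $w$, and fold all constraints of the deterministic equivalent into inequality slacks,
$$
\min_{w}\; \tilde c^\top w \;-\; \mu \sum_{i} \log\big(b_i - a_i^\top w\big),
\qquad
\tilde c^\top w \;=\; c^\top y + \frac{1}{S}\sum_{s=1}^{S} q_s^\top z_s,
$$
where the data $(\tilde c, A, b)$ depend on the generated scenarios $\{\xi_s(x)\}_{s=1}^S$. Writing $f$ for this objective, stationarity $\nabla_w f(w^\star, \xi) = 0$ at the optimum gives, by the implicit function theorem,
$$
\frac{\partial w^\star}{\partial \xi}
\;=\;
-\big(\nabla_w^2 f\big)^{-1}\,
\frac{\partial \nabla_w f}{\partial \xi},
\qquad
\nabla_w^2 f \;=\; \mu\, A^\top \operatorname{diag}(s)^{-2} A,
\quad s = b - A w^\star,
$$
so the derivatives are closed-form once the Newton system from the barrier solve has been factored.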
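In the same spirit, here is a minimal, self-contained PyTorch sketch of the differentiable barrier solve for the inequality-only form above. This is our own simplification for illustration, not the authors' implementation; the name `solve_barrier_lp` and the parameters `mu` and `iters` are ours.

```python
import torch

def solve_barrier_lp(c, A, b, mu=1e-2, iters=50):
    """min_w c^T w - mu * sum_i log(b_i - (A w)_i), solved by damped Newton.

    Assumes w = 0 is strictly feasible (all b_i > 0). The returned tensor
    equals the optimum w*, and its gradients w.r.t. (c, A, b) follow the
    implicit function theorem: dw*/dtheta = -H^{-1} d(grad f)/dtheta.
    """
    w = torch.zeros_like(c)
    with torch.no_grad():                        # forward solve: no graph needed
        for _ in range(iters):
            s = b - A @ w                        # slacks, kept strictly positive
            g = c + mu * (A.T @ (1.0 / s))       # gradient of barrier objective
            H = mu * A.T @ ((1.0 / s**2).unsqueeze(1) * A)  # barrier Hessian
            step = torch.linalg.solve(H, g)
            t = 1.0                              # backtrack to stay feasible
            while torch.any(b - A @ (w - t * step) <= 0):
                t *= 0.5
            w = w - t * step
    # Implicit-differentiation hook: re-evaluate grad f at the (detached)
    # optimum so it tracks (c, A, b). One Newton correction through the
    # detached Hessian leaves the value at ~w* but attaches exactly the
    # implicit derivative -H^{-1} d(grad f)/dtheta to the output.
    s = b - A @ w
    g = c + mu * (A.T @ (1.0 / s))
    H = (mu * A.T @ ((1.0 / s**2).unsqueeze(1) * A)).detach()
    return w - torch.linalg.solve(H, g)
```

In the full pipeline, the generator network's output $\{\xi_s(x)\}$ would assemble $(c, A, b)$, and the true (unregularized) downstream cost of the first-stage block of $w^\star$ would be backpropagated through this layer to the generator's parameters.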
Submission Number: 232