Keywords: Contextual Stochastic Optimization, Decision rule, Suboptimality gap
Abstract: Contextual stochastic optimization is a powerful paradigm for data-driven decision-making under uncertainty, where decisions are tailored to contextual information revealed before the decision is made. Recent advances leverage neural networks to learn expressive decision rules mapping contexts to decisions. However, enforcing feasibility under complex constraints remains a core challenge, as generic neural architectures offer no built-in mechanism for satisfying constraints.
Existing approaches often incur significant computational overhead and lack theoretical guarantees.
In this paper, we propose a principled framework for training neural decision rules under general constraints via a single-loop algorithm that solves the augmented Lagrangian minimax problem.
Our method eliminates the need for iterative projection layers or nested optimization loops, and provides provable guarantees on generalization, constraint violation, and suboptimality.
Empirical results on constrained contextual decision tasks demonstrate that our approach outperforms state-of-the-art baselines in both efficiency and solution quality.
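To make the single-loop augmented Lagrangian idea concrete, the following is a minimal sketch (not the authors' implementation): a linear decision rule stands in for the neural network, the toy objective tracks a context-dependent target, and a single inequality constraint is enforced. Each iteration takes one descent step on the decision-rule weights and one ascent step on the multiplier, with no nested inner solve or projection layer. The problem instance, step sizes, and penalty parameter are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def decision(w, x):
    # Linear decision rule z = w . x (a stand-in for a neural network).
    return w @ x

def train(steps=5000, lr_w=0.05, lr_lam=0.05, rho=1.0):
    """Single-loop augmented Lagrangian training on a toy constrained task.

    Objective (illustrative): f(z) = (z - t)^2 with target t = sum(x).
    Constraint (illustrative): g(z) = z - 0.5 <= 0.
    """
    w = np.zeros(2)   # decision-rule parameters
    lam = 0.0         # Lagrange multiplier for the inequality constraint
    for _ in range(steps):
        x = rng.uniform(0.0, 1.0, size=2)  # sampled context
        t = x.sum()                        # target the objective tracks
        z = decision(w, x)
        g = z - 0.5                        # constraint function value
        # Gradient of the augmented Lagrangian w.r.t. the decision z:
        #   f(z) + lam * max(g, 0) + (rho / 2) * max(g, 0)^2
        grad_z = 2.0 * (z - t)
        if g > 0.0:
            grad_z += lam + rho * g
        w -= lr_w * grad_z * x                 # one descent step on weights
        lam = max(0.0, lam + lr_lam * g)       # one ascent step on multiplier
    return w, lam

w, lam = train()
# Evaluate average constraint violation on held-out contexts.
xs = rng.uniform(0.0, 1.0, size=(1000, 2))
violation = np.maximum(xs @ w - 0.5, 0.0).mean()
```

The unconstrained regression solution here would push the decision well above the threshold; the multiplier ascent drives the average violation down while the primal step keeps the objective low, illustrating the minimax structure in a single loop.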
Submission Number: 42