Keywords: Infinite-Dimensional Optimization; Distributionally Robust Optimization
TL;DR: A minimax theoretical and algorithmic framework for distributionally robust optimization with continuous worst-case distributions.
Abstract: We consider a minimax problem motivated by distributionally robust optimization (DRO) when the worst-case distribution is continuous, leading to significant computational challenges due to the infinite-dimensional nature of the optimization problem. Leveraging Brenier’s theorem, we represent the worst-case distribution as a transport map of a continuous reference measure and reformulate the regularized discrepancy-based DRO as a minimax problem in Wasserstein space. We further propose an algorithmic framework with global convergence guarantees and complexity bounds for obtaining approximate stationary points. Under this continuous formulation, the proposed algorithms overcome the scalability, generalization, and worst-case inference limitations of discrete DRO approaches. Numerical results with neural network-based transport maps demonstrate that the proposed method enables both stable training of robust classifiers and effective worst-case inference for classification tasks.
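The reformulation described above can be illustrated on a toy problem. The sketch below is not the paper's algorithm; it is a minimal, hypothetical instance of the regularized minimax structure, where the worst-case distribution is the pushforward of a reference measure under a simple shift transport map T(x) = x + delta, and the learner and adversary are updated by alternating gradient descent/ascent:

```python
import numpy as np

# Toy sketch (illustrative only): regularized minimax DRO
#   min_theta  max_delta  E_x[(theta - T(x))^2] - lam * delta^2,
# where the worst-case distribution is the pushforward of a continuous
# reference measure under the shift transport map T(x) = x + delta.

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=500)  # samples from the reference measure

theta, delta = 0.0, 0.0   # learner parameter, transport-map shift
lr, lam = 0.1, 1.0        # step size, transport-cost regularization weight

for _ in range(500):
    residual = theta - (x + delta).mean()
    theta -= lr * 2 * residual                       # descent step on theta
    delta += lr * (-2 * residual - 2 * lam * delta)  # ascent step on delta

# The regularizer drives delta toward 0, so theta approaches mean(x).
print(theta, delta)
```

In this strongly-convex-strongly-concave toy instance the iterates converge to the saddle point; a neural network transport map, as used in the paper's experiments, would replace the scalar shift `delta` with a parameterized map.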
Submission Number: 200