Bootstrap Your Uncertainty: Adaptive Robust Classification Driven by Optimal-Transport

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: distributionally robust optimization, optimal transport
Abstract: Deep learning models often struggle with distribution shifts between training and deployment environments. Distributionally Robust Optimization (DRO) offers a promising framework by optimizing worst-case performance over a set of candidate distributions, known as the \emph{uncertainty set}. However, the efficacy of DRO depends heavily on the design of this uncertainty set, and existing methods often perform suboptimally because their uncertainty sets are inappropriate or inflexible. In this work, we first propose a novel perspective that casts entropy-regularized Wasserstein DRO as a dynamic process of distributional exploration and semantic alignment, both driven by optimal transport (OT). This unified viewpoint yields two new techniques: \emph{semantic calibration}, which bootstraps semantically meaningful transport costs via inverse OT, and \emph{adaptive refinement}, which adjusts the uncertainty set using OT-driven feedback. Together, these components form an exploration-and-feedback system in which the transport costs and the uncertainty set evolve jointly during training, enabling the model to better adapt to potential distribution shifts. Moreover, we provide an in-depth analysis of this adaptive process and prove a theoretical convergence guarantee. Finally, experiments across diverse distribution-shift scenarios demonstrate that our approach significantly outperforms existing methods, achieving state-of-the-art robustness.
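For a concrete picture of the objective the abstract describes, the sketch below illustrates the smoothed inner maximization of entropy-regularized Wasserstein DRO in PyTorch. It is a minimal illustration under stated assumptions, not the paper's implementation: the function name `entropic_wdro_loss`, the Gaussian candidate sampler, and the fixed dual multiplier `lam` are all assumptions, and the squared-Euclidean transport cost is a stand-in for the semantically calibrated cost the paper learns via inverse OT.

```python
import math
import torch

def entropic_wdro_loss(model, x, y, loss_fn, eps=0.1, lam=1.0,
                       n_samples=16, sigma=0.05):
    """Smoothed inner maximization of entropy-regularized Wasserstein DRO.

    For each input x_i, candidate perturbations z are drawn from a Gaussian
    around x_i, and the hard supremum over the uncertainty set is replaced
    by an entropic soft-max:
        eps * log E_z[exp((loss(z) - lam * c(x_i, z)) / eps)].
    `loss_fn` must return per-example losses (e.g. reduction='none').
    """
    n_batch = x.shape[0]
    # Candidate perturbed inputs, shape (n_samples, batch, ...).
    z = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape,
                                             device=x.device)
    # Per-candidate losses under the current model.
    losses = loss_fn(model(z.reshape(n_samples * n_batch, *x.shape[1:])),
                     y.repeat(n_samples)).reshape(n_samples, n_batch)
    # Transport cost c(x, z): squared Euclidean distance, standing in for
    # the learned semantic cost (an assumption of this sketch).
    cost = ((z - x.unsqueeze(0)) ** 2).reshape(n_samples, n_batch, -1).sum(-1)
    # Entropic smoothing of the worst case: log-mean-exp over candidates.
    adv = (losses - lam * cost) / eps
    smoothed = eps * (torch.logsumexp(adv, dim=0) - math.log(n_samples))
    return smoothed.mean()
```

A full training loop would also update `lam` (the dual variable tied to the Wasserstein ball radius) and, per the paper's adaptive refinement, the transport cost and uncertainty set themselves; both are held fixed here for brevity.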
Primary Area: Optimization (e.g., convex and non-convex, stochastic, robust)
Submission Number: 20880