Distributionally Robust Bayesian Optimization: From Single to Multiple Objectives

17 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Distributionally Robust Optimization, Multi-Objective Optimization, Bayesian Optimization
Abstract: In many real-world applications, systems are expensive to evaluate and are influenced by contextual variables whose distributions may shift between training and deployment. While robust Bayesian optimization methods have been proposed for black-box functions under such conditions, most focus solely on single-objective settings. In practice, however, systems often need to be optimized across multiple criteria simultaneously, which is challenging because the same environment may affect different objectives in distinct ways. Although robustness against contextual uncertainty has been investigated for single-objective problems, its extension to multi-objective optimization (MOO) remains limited, with existing works primarily addressing only input noise, a special case of contextual uncertainty. To bridge this gap, we propose the first Multi-objective Bayesian Optimization (MOBO) method for the general $\varphi$-divergence Distributionally Robust Optimization (DRO) problem with shared contexts, aiming to obtain *robust efficient* solutions. Furthermore, we establish a provable regret bound, the first sublinear regret bound that does not require a shrinking radius for the DRO uncertainty set, even in comparison to existing works in the single-objective setting. Finally, we provide numerical experiments that validate our theory and the empirical effectiveness of the proposed algorithms.
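As a point of reference, a $\varphi$-divergence DRO multi-objective problem with a shared context distribution is commonly written as follows; the precise formulation used in the paper is not given on this page, so the symbols below ($\mathcal{X}$, $f_k$, $P$, $\epsilon$) are illustrative assumptions rather than the authors' notation:

```latex
% Illustrative sketch (not the paper's exact formulation): each objective's
% worst-case expectation is taken over context distributions Q within a
% phi-divergence ball of radius eps around the reference distribution P.
\max_{x \in \mathcal{X}} \;
\Bigl(
  \inf_{Q \in \mathcal{U}_{\epsilon}(P)} \mathbb{E}_{c \sim Q}\bigl[ f_1(x, c) \bigr],
  \;\ldots,\;
  \inf_{Q \in \mathcal{U}_{\epsilon}(P)} \mathbb{E}_{c \sim Q}\bigl[ f_K(x, c) \bigr]
\Bigr),
\qquad
\mathcal{U}_{\epsilon}(P) \;=\; \bigl\{\, Q \;:\; D_{\varphi}(Q \,\|\, P) \le \epsilon \,\bigr\}.
```

Under such a formulation, a *robust efficient* solution would be one that is Pareto-optimal with respect to the vector of worst-case expected objectives, and setting $\epsilon = 0$ recovers standard stochastic MOO under the reference context distribution $P$.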
Supplementary Material: zip
Primary Area: optimization
Submission Number: 9810