Exploring and Exploiting Model Uncertainty in Bayesian Optimization

Published: 18 Sept 2025, Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: Gaussian Process, Bayesian Optimization, Uncertainty Quantification
TL;DR: We propose the infinity-Gaussian Process ($\infty$-GP) surrogate model, enabling efficient Bayesian optimization under complex, non-stationary, and heavy-tailed reward landscapes.
Abstract: In this work, we consider the problem of Bayesian Optimization (BO) under reward model uncertainty—that is, when the underlying distribution type of the reward is unknown and potentially intractable to specify. This challenge is particularly evident in many modern applications, where the reward distribution is highly ill-behaved, often non-stationary, multi-modal, or heavy-tailed. In such settings, classical Gaussian Process (GP)-based BO methods often fail due to their strong modeling assumptions. To address this challenge, we propose a novel surrogate model, the infinity-Gaussian Process ($\infty$-GP), which represents a sequential spatial Dirichlet Process mixture with a GP baseline. The $\infty$-GP quantifies both value uncertainty and model uncertainty, enabling more flexible modeling of complex reward structures. Combined with Thompson Sampling, the $\infty$-GP facilitates principled exploration and exploitation in the distributional space of reward models. Theoretically, we prove that the $\infty$-GP surrogate model can approximate a broad class of reward distributions by effectively exploring the distribution space, achieving near-minimax-optimal posterior contraction rates. Empirically, our method outperforms state-of-the-art approaches in various challenging scenarios, including highly non-stationary and heavy-tailed reward settings where classical GP-based BO often fails.
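To make the setting concrete, the sketch below shows a standard Thompson Sampling BO loop with a plain GP surrogate, the classical baseline the abstract contrasts against. It is not the paper's $\infty$-GP (the Dirichlet Process mixture surrogate is not reproduced here); the kernel, candidate grid, and toy objective are illustrative assumptions.

```python
# Minimal Thompson Sampling BO sketch with a plain GP surrogate (numpy only).
# NOT the paper's infinity-GP; kernel, grid, and objective are hypothetical.
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2, variance=1.0):
    # Squared-exponential kernel between two sets of 1-D points.
    d = A[:, None] - B[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, Xs, noise=1e-3):
    # Exact GP posterior mean and covariance on candidate points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kss = rbf_kernel(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    return Ks.T @ alpha, Kss - v.T @ v

def objective(x):
    # Toy noisy reward standing in for the unknown target.
    return np.sin(3 * x) + 0.1 * np.random.randn(*x.shape)

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 1.0, 200)   # discretized search space
X = rng.uniform(0.0, 1.0, size=3)         # initial design
y = objective(X)

for t in range(20):
    mean, cov = gp_posterior(X, y, candidates)
    # Thompson Sampling: draw one posterior sample of the reward surface
    # and query the point where that sample is maximized.
    sample = rng.multivariate_normal(mean, cov + 1e-8 * np.eye(len(candidates)))
    x_next = candidates[np.argmax(sample)]
    X = np.append(X, x_next)
    y = np.append(y, objective(np.array([x_next])))

print("best observed reward:", y.max())
```

The paper's contribution replaces the single-GP posterior above with a spatial Dirichlet Process mixture built on a GP base, so the Thompson draw explores over reward models as well as reward values.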
Primary Area: Probabilistic methods (e.g., variational inference, causal inference, Gaussian processes)
Submission Number: 1991