We Still Don’t Understand High-Dimensional Bayesian Optimization

Published: 03 Feb 2026 · Last Modified: 03 Feb 2026 · AISTATS 2026 Oral · CC BY 4.0
Abstract: High-dimensional spaces have historically challenged Bayesian optimization (BO). Existing methods aim to overcome this curse of dimensionality by carefully encoding structural assumptions, from locality to sparsity to smoothness, into the optimization procedure. Surprisingly, we demonstrate that these approaches are outperformed by arguably the simplest method imaginable: Bayesian linear regression. After we apply a geometric transformation to avoid boundary-seeking behaviour, Gaussian processes with linear kernels yield state-of-the-art performance on tasks with 60- to 6,000-dimensional search spaces. Linear models offer numerous advantages over their non-parametric counterparts: they afford closed-form acquisition function optimization, they yield asymptotically lower regret, and their computation scales linearly with data, a fact we exploit on molecular optimization tasks with >20,000 observations. Coupled with empirical and theoretical analyses, our results suggest the need to depart from past intuitions about BO methods in high-dimensional spaces.
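To make the closed-form acquisition optimization concrete: with a Bayesian linear regression surrogate, a Thompson-sampled acquisition is itself a linear function, so its maximizer over a box constraint is available in closed form. The sketch below is not the authors' code; the noise_var and prior_var hyperparameters, the [-1, 1]^d box, and the choice of Thompson sampling as the acquisition are all illustrative assumptions. Note that the closed-form argmax of a linear function over a box is a corner, which is exactly the boundary-seeking behaviour the abstract's geometric transformation is designed to counteract; that transformation is not reproduced here.

```python
# Minimal sketch of BO with a Bayesian linear regression surrogate and
# Thompson sampling (illustrative only; not the paper's implementation).
import numpy as np

def posterior(X, y, noise_var=1e-2, prior_var=1.0):
    """Closed-form Bayesian linear regression posterior over weights w."""
    d = X.shape[1]
    A = X.T @ X / noise_var + np.eye(d) / prior_var  # posterior precision
    cov = np.linalg.inv(A)
    mean = cov @ X.T @ y / noise_var
    return mean, cov

def thompson_step(X, y, rng):
    """Sample w from the posterior, then maximize w^T x over [-1, 1]^d."""
    mean, cov = posterior(X, y)
    w = rng.multivariate_normal(mean, cov)
    # Closed-form argmax of a linear function over the box: a corner.
    # This is the boundary-seeking behaviour the paper's transformation avoids.
    return np.sign(w)

rng = np.random.default_rng(0)
d = 60  # matches the low end of the paper's dimensionality range
w_true = rng.normal(size=d)  # toy linear objective for demonstration
X = rng.uniform(-1, 1, size=(10, d))
y = X @ w_true + 0.1 * rng.normal(size=10)

for _ in range(20):
    x_next = thompson_step(X, y, rng)
    y_next = x_next @ w_true + 0.1 * rng.normal()
    X, y = np.vstack([X, x_next]), np.append(y, y_next)

print("best observed value:", y.max())
```

Because the posterior update and the argmax are both closed-form, each iteration costs time linear in the number of observations for fixed d, which is consistent with the scaling the abstract exploits on the >20,000-observation molecular tasks.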
Submission Number: 1180