Making the Subspace Assumption Work: Low-Dimensional Exploration for High-Dimensional Bayesian Optimization
Keywords: High-Dimensional Optimization, Bayesian Optimization, Black-box Optimization, Hyperparameter Tuning
TL;DR: A state-of-the-art Bayesian optimization method that adapts to high-dimensional problems.
Abstract: Bayesian optimization (BO) is widely used in applications ranging from algorithm hyperparameter tuning to emerging scientific problems. However, its performance degradation in high-dimensional settings remains a long-standing bottleneck. Recent studies suggest that standard BO can remain competitive in high dimensions by carefully tuning priors or initialization, which has shifted attention away from subspace-based methods. We argue that the limitations of existing subspace methods stem not from the subspace assumption itself, but from the lack of an effective balance between thoroughly exploiting the current subspace and expanding to larger ones. To address this, we propose a high-dimensional Bayesian optimization algorithm that projects the input space into a lower-dimensional subspace and progressively expands the subspace dimension based on cumulative regret minimization. Our method allocates the evaluation budget linearly with the subspace dimension, thereby fully exploiting structural information before expansion. Our experimental evaluations show that our method significantly outperforms existing state-of-the-art baselines on several challenging high-dimensional synthetic and real-world tasks, highlighting the continued potential of subspace methods in high-dimensional Bayesian optimization.
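To make the abstract's mechanism concrete, here is a minimal, hypothetical Python sketch (not the authors' code) of the general idea: optimize inside a random low-dimensional embedding, spend a budget that grows linearly with the subspace dimension, then expand the subspace and continue. The function names, the doubling expansion schedule, and the random-sampling stand-in for the acquisition step are all illustrative assumptions; the paper's actual expansion rule is driven by cumulative regret minimization.

```python
import numpy as np

def objective(x):
    # Placeholder high-dimensional black-box function (assumed, for illustration).
    return -np.sum((x - 0.5) ** 2)

def optimize_with_expanding_subspace(D=100, d_init=2, d_max=16,
                                     budget_per_dim=10, seed=0):
    """Illustrative subspace-expansion loop; not the proposed algorithm itself."""
    rng = np.random.default_rng(seed)
    best_x, best_y = None, -np.inf
    d = d_init
    while d <= d_max:
        # Random linear embedding A: R^d -> R^D (the subspace assumption).
        A = rng.standard_normal((D, d)) / np.sqrt(d)
        # Evaluation budget allocated linearly in the current subspace dimension.
        budget = budget_per_dim * d
        for _ in range(budget):
            # Stand-in for the acquisition step: a real implementation would fit
            # a GP on the d-dimensional inputs and maximize an acquisition function.
            z = rng.uniform(-1.0, 1.0, size=d)
            x = np.clip(A @ z, 0.0, 1.0)  # map back into the original box
            y = objective(x)
            if y > best_y:
                best_x, best_y = x, y
        d *= 2  # expand the subspace (a hypothetical schedule) and continue
    return best_x, best_y

if __name__ == "__main__":
    x_star, y_star = optimize_with_expanding_subspace()
    print(f"best value found: {y_star:.4f}")
```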
Supplementary Material: zip
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 19322