Bayesian Model Selection via a Data-Emphasized Variational Objective

Published: 03 Feb 2026 · Last Modified: 03 Feb 2026 · AISTATS 2026 Spotlight · CC BY 4.0
TL;DR: By adjusting a term in the evidence lower bound, we enable gradient-based learning of regularization hyperparameters on the full training set, reducing the 88+ hour grid search of past work to under 3 hours.
Abstract: When training large models on limited data, avoiding overfitting is paramount. Common grid search or smarter search methods require expensive separate training runs for each candidate hyperparameter value while carving out a validation set that reduces the data available for training. In this paper, we study gradient-based learning of hyperparameters on all training data via the evidence lower bound (ELBO) objective from Bayesian variational methods. We focus on scenarios where the model is over-parameterized for flexibility while the approximate posterior is chosen to be Gaussian with isotropic covariance for tractability, even though it cannot match the true posterior. In such scenarios, we find that the ELBO prioritizes posteriors that match the prior, severely underfitting the data. Instead, we recommend a data-emphasized ELBO that upweights the likelihood relative to the prior. In Bayesian transfer learning of image and text classifiers, our method reduces the 88+ hour grid searches of past work to under 3 hours while delivering comparable accuracy. We further demonstrate how our approach enables efficient yet accurate approximations of Gaussian processes with learnable length-scale kernels.
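To make the objective concrete, the sketch below applies a data-emphasized ELBO to a toy linear-regression problem in PyTorch: the expected log-likelihood term is upweighted by a scalar factor kappa, and the prior variance tau^2 (the regularization hyperparameter) is learned by gradient steps on the full training data jointly with the mean and isotropic scale of the Gaussian approximate posterior. The toy data, the value of kappa, the assumed-known noise variance, and all names below are illustrative assumptions for exposition, not the authors' implementation.

```python
import torch

torch.manual_seed(0)

# Toy data: N observations, D features (stand-in for a real dataset).
N, D = 200, 10
X = torch.randn(N, D)
w_true = torch.randn(D)
y = X @ w_true + 0.1 * torch.randn(N)

# Variational posterior q(theta) = N(mu, sigma^2 I): a mean vector and one shared log-std.
mu = torch.zeros(D, requires_grad=True)
log_sigma = torch.tensor(-2.0, requires_grad=True)
# Regularization hyperparameter: prior variance tau^2 of p(theta) = N(0, tau^2 I),
# learned by gradient steps on the objective instead of a separate grid search.
log_tau = torch.tensor(0.0, requires_grad=True)

kappa = 50.0          # likelihood upweighting factor (illustrative choice)
noise_var = 0.1 ** 2  # observation noise variance, assumed known here

opt = torch.optim.Adam([mu, log_sigma, log_tau], lr=0.05)
for step in range(2000):
    sigma2 = torch.exp(2 * log_sigma)
    tau2 = torch.exp(2 * log_tau)

    # One-sample Monte Carlo estimate of E_q[log p(y | X, theta)]
    # via the reparameterization trick.
    theta = mu + torch.exp(log_sigma) * torch.randn(D)
    resid = y - X @ theta
    exp_loglik = -0.5 * (resid.pow(2) / noise_var
                         + torch.log(torch.tensor(2 * torch.pi * noise_var))).sum()

    # Closed-form KL( N(mu, sigma^2 I) || N(0, tau^2 I) ).
    kl = 0.5 * (D * sigma2 / tau2 + mu.pow(2).sum() / tau2
                - D + D * torch.log(tau2 / sigma2))

    # Data-emphasized ELBO: upweight the likelihood term, then maximize
    # (minimize its negation) with respect to mu, sigma, and tau jointly.
    de_elbo = kappa * exp_loglik - kl
    loss = -de_elbo

    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"learned prior std tau = {torch.exp(log_tau).item():.3f}")
```

With kappa = 1 this reduces to the standard ELBO, where (per the abstract) the KL term dominates and the posterior collapses toward the prior; a larger kappa emphasizes the data term so the hyperparameter tau is fit without a held-out validation set.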
Submission Number: 935