Mixtures of Locally Bounded Langevin dynamics for Bayesian Model Averaging

TMLR Paper 5860 Authors

10 Sept 2025 (modified: 16 Sept 2025) · Under review for TMLR · CC BY 4.0
Abstract: Properties of probability distributions change when moving from low to high dimensions, to the extent that they admit counterintuitive behavior. Gaussian distributions illustrate a well-known effect of high dimensionality: the typical set almost surely does not contain the mean, even though the mean is the distribution's most probable point. This is problematic in Bayesian Deep Learning, where samples drawn from the high-dimensional posterior distribution are used as Monte Carlo samples to estimate the integral of the predictive distribution; the predictive distribution then reflects the behavior of the samples and, therefore, of the typical set. For instance, after fitting a Gaussian approximation to the posterior with the Laplace method, we cannot expect to sample networks close to the maximum a posteriori estimate. In this paper, we introduce a method that mitigates this typicality problem in high dimensions by sampling from the posterior with Langevin dynamics on a restricted support enforced by a reflective boundary condition. We demonstrate how this leads to improved posterior estimates by illustrating its capacity for fine-grained out-of-distribution (OOD) ranking on the Morpho-MNIST dataset.
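The typicality effect the abstract describes is easy to verify numerically: samples from a standard Gaussian in d dimensions concentrate on a thin shell of radius about sqrt(d), so in high dimensions the mode (the mean, at the origin) is essentially never sampled. The sketch below demonstrates this, and also shows one unadjusted Langevin step with a radial reflection back into a ball of radius R. This is an illustration only, not the paper's exact boundary scheme; the function names, the ball-shaped support, and the small-overshoot assumption are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Concentration of measure: norms of N(0, I_d) samples cluster at sqrt(d),
#    and the relative spread around that shell shrinks as d grows.
for d in (2, 100, 10_000):
    x = rng.standard_normal((1000, d))      # 1000 samples from N(0, I_d)
    norms = np.linalg.norm(x, axis=1)
    print(f"d={d:>6}  mean norm={norms.mean():8.2f}  "
          f"relative spread={norms.std() / norms.mean():.4f}")

# 2) One reflected Langevin step on a restricted support (a ball of radius R).
def reflect_into_ball(x, R):
    """Radially reflect x back inside the ball ||x|| <= R.

    Assumes the overshoot is small (||x|| < 2R), which holds for small
    step sizes; larger overshoots would need repeated reflection.
    """
    r = np.linalg.norm(x)
    if r > R:
        x = x * (2 * R - r) / r
    return x

def reflected_langevin_step(x, grad_log_p, eps, R, rng):
    """Unadjusted Langevin proposal followed by a boundary reflection."""
    noise = rng.standard_normal(x.shape)
    x_new = x + eps * grad_log_p(x) + np.sqrt(2 * eps) * noise
    return reflect_into_ball(x_new, R)

# Example: standard Gaussian target (grad log p(x) = -x), support ||x|| <= 5.
x0 = np.array([4.9, 0.0])
x1 = reflected_langevin_step(x0, lambda x: -x, eps=0.01, R=5.0, rng=rng)
print("after one step, ||x|| =", np.linalg.norm(x1))
```

The second part only sketches the sampling mechanics; the restricted support keeps the chain within a chosen distance of a reference point (e.g. a MAP estimate), which is the role the reflective boundary plays in the method described above.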
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Michele_Caprio1
Submission Number: 5860