Adaptive Stepsizing for Stochastic Gradient Langevin Dynamics in Bayesian Neural Networks

Published: 14 Feb 2026, Last Modified: 14 Feb 2026, MATH4AI @ AAAI 2026 (Oral), CC BY 4.0
Keywords: Bayesian Neural Networks, MCMC, Langevin Dynamics
TL;DR: An adaptive stepsize algorithm for Stochastic Gradient Langevin Dynamics, based on local geometry, for sampling from Bayesian neural network posteriors
Abstract: Bayesian neural networks (BNNs) require scalable sampling algorithms to approximate posterior distributions over parameters. Existing stochastic gradient Markov chain Monte Carlo (SGMCMC) methods are highly sensitive to the choice of stepsize, and adaptive variants such as pSGLD typically fail to sample the correct invariant measure without the addition of a costly divergence-correction term. In this work, we build on the recently proposed 'SamAdams' framework for timestep adaptation (Leimkuhler, Lohmann, and Whalley 2025), introducing an adaptive scheme, SA-SGLD, which employs time rescaling to modulate the stepsize according to a monitored quantity (typically the local gradient norm). SA-SGLD can automatically shrink stepsizes in regions of high curvature and expand them in flatter regions, improving both stability and mixing without introducing bias. We show that our method can achieve more accurate posterior sampling than SGLD on high-curvature 2D toy examples and in image classification with BNNs using sharp priors.
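
To make the idea concrete, the sketch below shows a plain SGLD loop whose stepsize is modulated by the local gradient norm, as the abstract describes. This is only a minimal illustration of gradient-norm-based stepsize adaptation, not the SamAdams/SA-SGLD scheme itself: the rescaling rule eps0 / (delta + ||g||), the function name sa_sgld_sketch, and all parameter values are assumptions made for illustration; the actual time-rescaling dynamics are defined in Leimkuhler, Lohmann, and Whalley (2025).

```python
import numpy as np

def sa_sgld_sketch(grad_log_post, theta0, eps0=1e-3, n_steps=1000,
                   delta=1e-3, rng=None):
    """Illustrative SGLD loop with a gradient-norm-based stepsize.

    NOT the SA-SGLD algorithm: the rescaling rule below is a stand-in
    assumption to show how a monitored quantity (here, the gradient
    norm) can shrink steps in high-curvature regions and expand them
    in flat ones.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for _ in range(n_steps):
        g = grad_log_post(theta)        # (stochastic) gradient of the log-posterior
        # Assumed rescaling: smaller steps where the gradient norm is large.
        eps = eps0 / (delta + np.linalg.norm(g))
        noise = rng.normal(size=theta.shape)
        # Standard Langevin update with the rescaled stepsize.
        theta = theta + 0.5 * eps * g + np.sqrt(eps) * noise
        samples.append(theta.copy())
    return np.array(samples)
```

Note that naively rescaling the stepsize this way generally biases the invariant measure; the point of the time-rescaling construction in the SamAdams framework is to perform such adaptation without introducing that bias.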
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 18