Keywords: Bayesian, coresets, MCMC, stochastic gradient optimization, learning-rate-free methods
TL;DR: We develop a novel learning-rate-free stochastic optimization scheme for use in Bayesian coreset construction via Coreset MCMC.
Abstract: A Bayesian coreset is a small, weighted subset of a data set that replaces the full data during inference to reduce computational cost. The state-of-the-art coreset construction algorithm, Coreset Markov chain Monte Carlo (Coreset MCMC), uses draws from an adaptive Markov chain targeting the coreset posterior to train the coreset weights via stochastic gradient optimization. However, the quality of the constructed coreset, and thus the quality of its posterior approximation, is sensitive to the stochastic optimization learning rate. In this work, we propose a learning-rate-free stochastic gradient optimization procedure, Hot-start Distance over Gradient (Hot DoG), for training coreset weights in Coreset MCMC without user tuning effort. We provide a theoretical analysis of the convergence of the coreset weights produced by Hot DoG. We also provide empirical results demonstrating that Hot DoG produces higher quality posterior approximations than other learning-rate-free stochastic gradient methods, and performs competitively with optimally-tuned ADAM.
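To illustrate the kind of learning-rate-free update the abstract refers to, below is a minimal sketch of a Distance-over-Gradient (DoG) style step-size rule applied to coreset weight optimization. This is not the authors' Hot DoG algorithm; the gradient estimator `grad_estimate`, the initial movement scale `r_eps`, and the nonnegativity projection are assumptions made purely for illustration.

```python
# Sketch of a DoG-style learning-rate-free stochastic gradient update
# for coreset weights (illustrative only; not the paper's Hot DoG method).
import numpy as np

def dog_coreset_weights(w0, grad_estimate, num_iters=1000, r_eps=1e-4):
    """Learning-rate-free stochastic gradient descent on coreset weights.

    w0            : initial coreset weight vector (nonnegative), shape (m,)
    grad_estimate : function(w, t) -> stochastic gradient of the objective at w,
                    e.g. estimated from Markov chain draws targeting the coreset posterior
    r_eps         : small initial movement scale replacing a user-tuned learning rate
    """
    w = w0.copy()
    max_dist = r_eps        # max distance travelled from the initial point so far
    grad_sq_sum = 0.0       # running sum of squared stochastic gradient norms
    for t in range(num_iters):
        g = grad_estimate(w, t)
        grad_sq_sum += float(np.dot(g, g))
        # DoG step size: distance travelled over the root of accumulated gradient norms
        eta = max_dist / np.sqrt(grad_sq_sum + 1e-12)
        w = np.maximum(w - eta * g, 0.0)   # keep coreset weights nonnegative
        max_dist = max(max_dist, float(np.linalg.norm(w - w0)))
    return w
```

The key design point is that the step size is determined adaptively from the trajectory itself (distance travelled divided by accumulated gradient norms), so no learning rate needs to be specified by the user.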
Supplementary Material: zip
Latex Source Code: zip
Code Link: https://github.com/NaitongChen/automated-coreset-mcmc-experiments
Readers: auai.org/UAI/2025/Conference, auai.org/UAI/2025/Conference/Area_Chairs, auai.org/UAI/2025/Conference/Reviewers, auai.org/UAI/2025/Conference/Submission387/Authors, auai.org/UAI/2025/Conference/Submission387/Reproducibility_Reviewers
Submission Number: 387