## Lower Bounds on Metropolized Sampling Methods for Well-Conditioned Distributions

Published: 21 May 2021, 20:48 (modified: 21 Jan 2022, 18:17) · NeurIPS 2021 Oral · Readers: Everyone
Keywords: sampling, computational statistics, Bayesian methods, Langevin dynamics, Hamiltonian Monte Carlo
TL;DR: We give lower bounds showing that the current analyses of MALA for sampling well-conditioned distributions are nearly tight, and that HMC incurs a polynomial dimension dependence for any number of leapfrog steps.
Abstract: We give lower bounds on the performance of two of the most popular sampling methods in practice, the Metropolis-adjusted Langevin algorithm (MALA) and multi-step Hamiltonian Monte Carlo (HMC) with a leapfrog integrator, when applied to well-conditioned distributions. Our main result is a nearly-tight lower bound of $\widetilde{\Omega}(\kappa d)$ on the mixing time of MALA from an exponentially warm start, matching a line of algorithmic results \cite{DwivediCW018, ChenDWY19, LeeST20a} up to logarithmic factors and answering an open question of \cite{ChewiLACGR20}. We also show that a polynomial dependence on dimension is necessary for the relaxation time of HMC under any number of leapfrog steps, and bound the gains achievable by changing the step count. Our HMC analysis draws upon a novel connection between leapfrog integration and Chebyshev polynomials, which may be of independent interest.
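For readers unfamiliar with the two samplers the abstract studies, here is a minimal NumPy sketch of one MALA step and of the leapfrog integrator used inside multi-step HMC. This is an illustrative assumption, not the paper's code: the function names (`mala_step`, `leapfrog`), the step sizes, and the standard-Gaussian target are all placeholders chosen for the example.

```python
import numpy as np


def mala_step(x, log_pi, grad_log_pi, h, rng):
    """One step of the Metropolis-adjusted Langevin algorithm (MALA).

    Proposal: x' = x + h * grad(log pi)(x) + sqrt(2h) * N(0, I),
    corrected by a Metropolis-Hastings accept/reject test.
    """
    x_prop = x + h * grad_log_pi(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)

    def log_q(y, z):
        # Log density (up to a constant) of proposing y from z.
        diff = y - z - h * grad_log_pi(z)
        return -np.sum(diff ** 2) / (4 * h)

    log_alpha = (log_pi(x_prop) + log_q(x, x_prop)
                 - log_pi(x) - log_q(x_prop, x))
    if np.log(rng.uniform()) < log_alpha:
        return x_prop, True
    return x, False


def leapfrog(q, p, grad_log_pi, eps, num_steps):
    """Leapfrog integration of Hamiltonian dynamics for potential -log pi."""
    for _ in range(num_steps):
        p = p + 0.5 * eps * grad_log_pi(q)  # half-step on momentum
        q = q + eps * p                     # full step on position
        p = p + 0.5 * eps * grad_log_pi(q)  # half-step on momentum
    return q, p


# Example: run MALA on a standard Gaussian target in d = 5 dimensions.
d = 5
rng = np.random.default_rng(0)
log_pi = lambda x: -0.5 * np.sum(x ** 2)
grad_log_pi = lambda x: -x

x = rng.standard_normal(d)
accepts = 0
for _ in range(2000):
    x, accepted = mala_step(x, log_pi, grad_log_pi, h=0.1, rng=rng)
    accepts += accepted
```

The leapfrog map is symplectic and approximately energy-conserving, which is why HMC's accept/reject correction can tolerate many integration steps; the paper's lower bound shows that no choice of the step count removes the polynomial dependence on dimension.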
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.