Abstract: Sampling from log-concave distributions is a central problem in statistics and machine learning.
Prior work establishes theoretical guarantees for Langevin Monte Carlo based on overdamped and underdamped Langevin dynamics and, more recently, third-order variants.
In this paper, we introduce a new sampling algorithm built on a general $K$th-order Langevin dynamics, extending beyond second- and third-order methods.
To discretize the $K$th-order dynamics, we approximate the drift induced by the potential via Lagrange interpolation and refine the values at the interpolation nodes with Picard-iteration corrections, yielding a flexible scheme that fully exploits the acceleration offered by higher-order Langevin dynamics.
For targets with smooth, strongly log-concave densities, we prove dimension-dependent convergence in Wasserstein distance: the sampler achieves $\varepsilon$-accuracy within $\widetilde O(d^{\frac{K-1}{2K-3}}\,\varepsilon^{-\frac{2}{2K-3}})$ gradient evaluations for $K \ge 3$. To the best of our knowledge, this is the first result establishing this form of query complexity for a general $K$th-order Langevin-based sampler. In particular, the dependence
on the accuracy parameter $\varepsilon$ improves as the order $K$
increases, yielding better $\varepsilon$-rates than existing first-,
second-, and third-order approaches.
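To make the discretization idea concrete, the following is a minimal sketch, on a deterministic toy ODE rather than the stochastic $K$th-order dynamics of the paper, of the interpolate-then-correct pattern the abstract describes: the drift is replaced by its Lagrange interpolant through node values on one step, and Picard iterations refine those node values. All function and parameter names here (`picard_collocation_step`, `num_nodes`, `num_picard`) are illustrative, not from the paper.

```python
import numpy as np

def picard_collocation_step(f, x0, h, num_nodes=4, num_picard=8):
    """One step of size h for the scalar ODE x'(t) = f(x(t)).

    Sketch of the interpolate-and-correct scheme: the drift f is replaced
    by the Lagrange polynomial through its values at equally spaced nodes
    in [0, h], and Picard iterations refine the node values toward the
    fixed point x(t_j) = x0 + \\int_0^{t_j} P(s) ds.
    (Toy deterministic analogue; names and defaults are hypothetical.)
    """
    nodes = np.linspace(0.0, h, num_nodes)
    xs = np.full(num_nodes, x0, dtype=float)  # initial guess: constant path
    for _ in range(num_picard):
        fs = np.array([f(x) for x in xs])
        # Degree-(num_nodes-1) fit through num_nodes points interpolates
        # the drift values exactly: this is the Lagrange interpolant P.
        poly = np.polynomial.polynomial.Polynomial.fit(
            nodes, fs, num_nodes - 1
        )
        antideriv = poly.integ()
        # Picard correction: x_j <- x0 + \int_0^{t_j} P(s) ds at each node.
        xs = x0 + antideriv(nodes) - antideriv(0.0)
    return xs[-1]
```

For example, with $f(x) = x$, $x_0 = 1$, and $h = 0.1$, the step closely tracks the exact solution $e^{0.1}$; since Picard iteration contracts at rate $\mathcal{O}(h)$, a handful of corrections per step suffices, which is what lets the node values be refined cheaply.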