**Code Of Ethics:** I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.

**Keywords:** Logconcave sampling, Dikin walk, Markov chain Monte Carlo, Interior point methods

**Submission Guidelines:** I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.

**Abstract:** We consider the problem of sampling from a logconcave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to a polytope $K := \{\theta \in \mathbb{R}^d : A\theta \leq b\}$, where $A \in \mathbb{R}^{m\times d}$ and $b \in \mathbb{R}^m$. The fastest-known algorithm for the setting where $f$ is $O(1)$-Lipschitz or $O(1)$-smooth runs in roughly $O(md \times md^{\omega -1})$ arithmetic operations, where the $md^{\omega -1}$ term arises because each Markov chain step requires a matrix inversion and a determinant computation ($\omega \approx 2.37$ is the matrix multiplication constant). We present a nearly-optimal implementation of this Markov chain whose per-step complexity is roughly the number of non-zero entries of $A$, while the number of Markov chain steps remains the same. The key technical ingredients are 1) showing that the matrices that arise in this Dikin walk change slowly, 2) deploying efficient linear solvers that can leverage this slow change to speed up matrix inversion by reusing information computed in previous steps, and 3) speeding up the computation of the determinantal term in the Metropolis filter step via a randomized Taylor series-based estimator. This result directly improves the runtime for applications that involve sampling from Gibbs distributions constrained to polytopes, which arise in Bayesian statistics and private optimization.
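The third ingredient mentioned in the abstract, a randomized Taylor series-based determinant estimator, can be illustrated with a standard construction (not taken from the paper itself): writing $\log\det(I+M) = \operatorname{tr}(\log(I+M))$, expanding the matrix logarithm as the Taylor series $\sum_{k\ge 1} (-1)^{k+1} M^k / k$, and estimating the trace with Hutchinson-style Rademacher probes so that only matrix-vector products with $M$ are needed. The function name and parameters below are illustrative; this is a generic sketch of the technique, valid when the spectral radius of $M$ is below 1, not the authors' actual estimator.

```python
import numpy as np

def logdet_taylor_hutchinson(M, num_probes=100, num_terms=30, rng=None):
    """Estimate log det(I + M) as tr(log(I + M)) using Hutchinson
    probes z and the series log(I + M) = sum_{k>=1} (-1)^{k+1} M^k / k.

    Converges when the spectral radius of M is < 1. Each probe costs
    `num_terms` matrix-vector products, avoiding any dense factorization.
    """
    rng = np.random.default_rng(rng)
    d = M.shape[0]
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=d)  # Rademacher probe vector
        v = z.copy()
        acc = 0.0
        for k in range(1, num_terms + 1):
            v = M @ v                         # v = M^k z, one matvec per term
            acc += ((-1.0) ** (k + 1)) * (z @ v) / k
        total += acc
    return total / num_probes
```

For a diagonal $M$ the Rademacher probes are exact (each $z_i^2 = 1$), so the estimate matches `np.linalg.slogdet(I + M)` up to Taylor truncation error; in general the probe average trades a small variance for the large savings over an exact determinant.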

**Anonymous Url:** I certify that there is no URL (e.g., github page) that could be used to find authors' identity.

**No Acknowledgement Section:** I certify that there is no acknowledgement section in this submission for double blind review.

**Primary Area:** probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)

**Submission Number:** 6187
