TL;DR: We prove that the posterior distributions arising in stochastic localization iterations satisfy Poincaré inequalities, so discrete MCMC samplers mix rapidly on them, enabling efficient sampling from binary quadratic distributions.
Abstract: Sampling from binary quadratic distributions (BQDs) is a fundamental but challenging problem in discrete optimization and probabilistic inference. Previous work established theoretical guarantees for stochastic localization (SL) in continuous domains, where MCMC methods efficiently estimate the posterior expectations required during SL iterations. However, achieving similar convergence guarantees when discrete MCMC samplers are used for posterior estimation presents unique theoretical challenges.
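For concreteness, the target family can be written (a hedged sketch in the standard Ising-type form; the paper's exact parameterization of a BQD may differ) as
$\mu(x) \propto \exp\!\big(x^\top J x + h^\top x\big)$ for $x \in \{-1,+1\}^n$,
where $J \in \mathbb{R}^{n\times n}$ is a symmetric coupling matrix and $h \in \mathbb{R}^n$ is an external field. Quadratic unconstrained binary objectives fit this form after mapping $\{0,1\}$ variables to $\{-1,+1\}$ spins.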
In this work, we present the first application of SL to general BQDs, proving that after a sufficient number of iterations, the external field of the posterior distributions constructed by SL tends to infinity almost everywhere; hence these posteriors satisfy Poincaré inequalities with probability close to 1, leading to polynomial-time mixing. This result enables efficient sampling from general BQDs, even those that do not originally possess fast-mixing properties. Furthermore, our analysis covers a broad class of discrete MCMC samplers based on Glauber dynamics and Metropolis-Hastings algorithms, demonstrating the wide applicability of our theoretical framework.
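To make the mechanism concrete, here is a hedged sketch assuming the standard SL tilting on the hypercube (the paper's exact construction may differ). With $x^\star \sim \mu$ and $(W_t)$ a standard Brownian motion, the time-$t$ posterior is
$\mu_t(x) \propto \mu(x)\,\exp\!\big(\langle y_t, x\rangle\big)$, where $y_t = t\,x^\star + W_t$,
and the usual quadratic correction $-\tfrac{t}{2}\|x\|^2$ is constant on $\{-1,+1\}^n$ and is absorbed into the normalization. The effective external field $h + y_t$ therefore grows linearly in $t$ up to Gaussian fluctuations, which is the mechanism behind the divergence claim and the resulting Poincaré inequality for $\mu_t$.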
Experiments on quadratic unconstrained binary optimization (QUBO) instances, including maximum independent set, maximum cut, and maximum clique problems, demonstrate consistent improvements in sampling efficiency across different discrete MCMC samplers.
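As an illustration of how such a sampler might be used on one SL-tilted posterior (a minimal sketch only, assuming the Ising-form posterior above; `glauber_step`, `J`, `h`, and `y_t` are hypothetical names, not the repository's API):

```python
import numpy as np

def glauber_step(x, J, h_eff, rng):
    """One Glauber-dynamics update of a +/-1 spin vector x for
    mu(x) ∝ exp(x^T J x + h_eff^T x), where h_eff = h + y_t is the SL-tilted field."""
    i = rng.integers(len(x))
    # Local field at site i (J assumed symmetric; diagonal excluded).
    local = 2.0 * (J[i] @ x - J[i, i] * x[i]) + h_eff[i]
    # Conditional probability that spin i equals +1 given the other spins.
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * local))
    x[i] = 1 if rng.random() < p_plus else -1
    return x

rng = np.random.default_rng(0)
n = 20
J = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
h = np.zeros(n)
y_t = 3.0 * rng.choice([-1.0, 1.0], size=n)  # large tilt, mimicking a late SL iteration
x = rng.choice([-1, 1], size=n)
for _ in range(10_000):
    x = glauber_step(x, J, h + y_t, rng)
```

With a large tilt $y_t$, each conditional update is strongly biased toward one spin value, which is the regime in which Glauber dynamics mixes quickly.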
Lay Summary: In many scientific fields, researchers need to generate random samples that follow specific probability patterns—like simulating the behavior of magnetic materials in physics or modeling complex networks in computer science. These patterns involve binary variables (things that can only be "on" or "off") with intricate dependencies between them, making direct sampling extremely challenging.
We developed a new sampling approach based on stochastic localization that makes this difficult task much easier. Instead of trying to directly sample from the complex target distribution, our method breaks the problem into a series of steps. Each step involves sampling from a simpler, "smoothed-out" version of the original distribution—like gradually removing the sharp peaks and valleys from a rugged landscape to make it easier to navigate.
We proved mathematically that after enough iterations, standard sampling algorithms can efficiently generate samples from these intermediate distributions, which ultimately give us samples from our original target. Our experiments on combinatorial optimization problems demonstrate that this approach consistently improves the performance of existing sampling methods, enabling scientists to study complex systems that were previously too difficult to simulate accurately.
Link To Code: https://github.com/LOGO-CUHKSZ/SLDMCMC
Primary Area: Probabilistic Methods->Monte Carlo and Sampling Methods
Keywords: Stochastic Localization, Discrete MCMC samplers, Binary Quadratic Distribution
Submission Number: 5706