Stochastic Approximation with Biased MCMC for Expectation Maximization

Published: 25 May 2023 · Last Modified: 26 Jan 2026 · AISTATS 2023 · CC BY 4.0
Abstract: The expectation maximization (EM) algorithm is a widespread method for empirical Bayesian inference, but its expectation step (E-step) is often intractable. Employing a stochastic approximation scheme with Markov chain Monte Carlo (MCMC) can circumvent this issue, resulting in an algorithm known as MCMC-SAEM. While theoretical guarantees for MCMC-SAEM have previously been established, these results are restricted to the case where asymptotically unbiased MCMC algorithms are used. In practice, MCMC-SAEM is often run with asymptotically biased MCMC, for which the consequences are theoretically less understood. In this work, we fill this gap by analyzing the asymptotics and non-asymptotics of SAEM with biased MCMC steps, particularly the effect of bias. We also provide numerical experiments comparing the Metropolis-adjusted Langevin algorithm (MALA), which is asymptotically unbiased, and the unadjusted Langevin algorithm (ULA), which is asymptotically biased, on synthetic and real datasets. Experimental results show that ULA is more stable with respect to the choice of Langevin stepsize and can sometimes result in faster convergence.
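To make the MALA/ULA distinction concrete, here is a minimal sketch of the two samplers on a one-dimensional standard normal target. This is illustrative only, not the paper's experimental setup: ULA is the plain Euler-Maruyama discretization of the Langevin diffusion (asymptotically biased for any fixed stepsize), while MALA adds a Metropolis accept/reject step that removes the discretization bias.

```python
import numpy as np

def log_target(x):
    # Standard normal target (illustrative choice): log pi(x) = -x^2/2 + const
    return -0.5 * x**2

def grad_log_target(x):
    return -x

def ula_step(x, step, rng):
    # Unadjusted Langevin algorithm: one Euler-Maruyama step of the
    # Langevin diffusion. Its stationary law differs from pi for step > 0,
    # i.e. it is asymptotically biased.
    return x + step * grad_log_target(x) + np.sqrt(2.0 * step) * rng.standard_normal()

def mala_step(x, step, rng):
    # Metropolis-adjusted Langevin: same proposal as ULA, followed by a
    # Metropolis-Hastings correction, which makes it asymptotically unbiased.
    prop = x + step * grad_log_target(x) + np.sqrt(2.0 * step) * rng.standard_normal()

    def log_q(y, z):
        # Log density (up to a constant) of the Gaussian proposal y | z.
        return -((y - z - step * grad_log_target(z)) ** 2) / (4.0 * step)

    log_alpha = (log_target(prop) + log_q(x, prop)
                 - log_target(x) - log_q(prop, x))
    return prop if np.log(rng.uniform()) < log_alpha else x

# Run both chains; samples feed the stochastic E-step in MCMC-SAEM.
rng = np.random.default_rng(0)
x_ula, x_mala = 0.0, 0.0
ula_samples, mala_samples = [], []
for _ in range(20000):
    x_ula = ula_step(x_ula, step=0.1, rng=rng)
    x_mala = mala_step(x_mala, step=0.1, rng=rng)
    ula_samples.append(x_ula)
    mala_samples.append(x_mala)
ula_samples = np.array(ula_samples)
mala_samples = np.array(mala_samples)
```

In MCMC-SAEM, a few such steps per iteration replace exact sampling from the posterior in the E-step; the paper's analysis quantifies how ULA's stepsize-dependent bias propagates into the EM parameter estimates.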