Estimating Expectations without Sampling: Neural Stein Estimation

Published: 27 May 2024, Last Modified: 13 Jul 2024, AABI 2024, CC BY 4.0
Keywords: Inference, Control Variates, Expectations, Monte Carlo, Stein's Method
TL;DR: A method that estimates expectations by training a control variate with an alternative loss function, which performs better when no samples from the target distribution are available as data.
Abstract: We propose a method for estimating the expected value of a given function $h(x)$ under an intractable distribution $p(x)$, whose score function $\nabla \log p(x)$ is nevertheless available, without sampling from it. Monte Carlo sampling methods, in particular Markov Chain Monte Carlo (MCMC) methods when an exact sampler is not available, are a popular tool for this task. However, they can be difficult to diagnose and yield noisy estimates, and in the MCMC case they can also be expensive and biased. Our proposed method, Neural Stein Estimation (NSE), avoids these issues by framing the expectation as the solution of a differential equation inspired by Stein's method and control variates. The algorithm solves this differential equation through optimization, using a neural network. The method is therefore deterministic and converges stably, trading the problem of sampling at run-time for that of sampling at training-time and the amortized cost of training the neural network. This work presents the theoretical foundations of NSE and evaluates the method's viability against control variate baselines on simple distributions.
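To make the abstract's "differential equation inspired by Stein's method and control variates" concrete, the sketch below is an illustrative reconstruction rather than the authors' implementation. It assumes PyTorch, the Langevin-Stein operator $\mathcal{A}[g](x) = \nabla \cdot g(x) + g(x) \cdot \nabla \log p(x)$, a uniform proposal for the training points, and a tractable standard-normal target purely for demonstration; all names (`score_fn`, `g_net`, `c`) are hypothetical.

```python
# Hypothetical sketch of a Stein-type training loss in the spirit of NSE.
# Idea: E_p[A[g](x)] = 0 under mild conditions, so if we can fit g, c such that
# A[g](x) + c ~= h(x) everywhere, then c is an estimate of E_p[h(x)].
import torch
import torch.nn as nn

def score_fn(x):
    # Score grad log p(x) of a standard normal target: -x.
    # In the NSE setting only this score is assumed available, not samples from p.
    return -x

def h(x):
    # Function whose expectation we want; here h(x) = sum_i x_i^2,
    # so the true value under a standard normal is the dimension d.
    return (x ** 2).sum(dim=1)

d = 2
g_net = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, d))  # g_theta: R^d -> R^d
c = torch.zeros(1, requires_grad=True)                                # scalar estimate of E_p[h]
opt = torch.optim.Adam(list(g_net.parameters()) + [c], lr=1e-3)

for step in range(5000):
    # Training points come from a cheap proposal (a uniform box), NOT from p(x).
    x = (torch.rand(256, d) * 8 - 4).requires_grad_(True)
    g = g_net(x)
    # Langevin-Stein operator: A[g](x) = div g(x) + g(x) . grad log p(x)
    div_g = sum(
        torch.autograd.grad(g[:, i].sum(), x, create_graph=True)[0][:, i]
        for i in range(d)
    )
    stein = div_g + (g * score_fn(x)).sum(dim=1)
    # Solve A[g](x) + c = h(x) in least squares over the proposal points.
    loss = ((stein + c - h(x)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(c))  # should approach E_p[h] = d = 2 for this toy target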
Submission Number: 25