Keywords: generative, GAN, uncertainty quantification, experiment design, inverse problems, adaptive, sampling, Bayesian, Wasserstein GAN
TL;DR: We show that generative adversarial networks can solve inverse problems in a Bayesian fashion, naturally allowing for adaptive sampling.
Abstract: We propose an adaptive sampling method for the linear model, driven by uncertainty estimates from a generative adversarial network (GAN). Specifically, given a forward observation model that provides partial measurements $\mathbf{y}$ of an unknown parameter $\mathbf{x}$, we show how to build a GAN model to estimate its posterior $p(\mathbf{x}|\mathbf{y})$. We then leverage this approximate posterior to perform sequential adaptive sampling, actively selecting the measurement with the highest current uncertainty. We empirically demonstrate that our posterior estimate contracts rapidly towards the correct mode, while outperforming state-of-the-art approaches even on criteria for which they are specifically trained, such as PSNR or SSIM.
Conference Poster: pdf
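The selection rule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the trained GAN posterior is available as a set of samples from $p(\mathbf{x}|\mathbf{y})$ (here replaced by synthetic random draws), and uses the per-coordinate sample variance as the uncertainty proxy that drives the next measurement choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for samples x ~ p(x|y) produced by a trained GAN generator
# conditioned on the measurements gathered so far.
# Shape: (n_posterior_samples, n_candidate_measurements).
scales = rng.uniform(0.1, 2.0, size=64)
posterior_samples = rng.normal(loc=1.0, scale=scales, size=(500, 64))

# Per-coordinate posterior variance quantifies how uncertain the model
# still is about each candidate measurement location.
variance = posterior_samples.var(axis=0)

# Adaptive rule: measure next where the posterior is most uncertain.
next_measurement = int(np.argmax(variance))
```

In the sequential setting, the chosen measurement would be acquired, the GAN posterior re-conditioned on the enlarged observation set, and the loop repeated until the posterior has contracted sufficiently.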