Gaussian measures conditioned on nonlinear observations: consistency, MAP estimators, and simulation

Published: 2025 · Last Modified: 06 Nov 2025 · Stat. Comput. 2025 · CC BY-SA 4.0
Abstract: The article presents a systematic study of the problem of conditioning a Gaussian random variable \(\xi\) on nonlinear observations of the form \(F \circ \boldsymbol{\phi}(\xi)\), where \(\boldsymbol{\phi}: \mathcal{X} \rightarrow \mathbb{R}^N\) is a bounded linear operator and \(F\) is nonlinear. Such problems arise in the context of Bayesian inference and recent machine-learning-inspired PDE solvers. We give a representer theorem for the conditioned random variable \(\xi \mid F \circ \boldsymbol{\phi}(\xi)\), stating that it decomposes as the sum of an infinite-dimensional Gaussian (which is identified analytically) and a finite-dimensional non-Gaussian measure. We also introduce a novel notion of the mode of a conditional measure by taking the limit of the natural relaxation of the problem, to which we can apply the existing notion of maximum a posteriori estimators of posterior measures. Finally, we introduce a variant of the Laplace approximation for the efficient simulation of the aforementioned conditioned Gaussian random variables, with a view towards uncertainty quantification.
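As a rough illustration of the decomposition described in the abstract, the following is a minimal finite-dimensional sketch (not the paper's implementation): to sample \(\xi \mid F \circ \boldsymbol{\phi}(\xi) \approx y\), one can first sample the \(N\)-dimensional variable \(w = \boldsymbol{\phi}(\xi)\) from its non-Gaussian conditional under a relaxed (noisy) likelihood, and then sample \(\xi \mid \boldsymbol{\phi}(\xi) = w\), which is Gaussian. The grid discretization, the choice of \(F\), the noise level `sigma`, and all variable names are illustrative assumptions, not quantities from the paper.

```python
# Finite-dimensional sketch of the two-stage sampling suggested by the
# representer-theorem decomposition (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

# Prior: xi ~ N(0, C) on a grid standing in for the infinite-dimensional space.
n = 200
x = np.linspace(0.0, 1.0, n)
C = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1**2) + 1e-8 * np.eye(n)

# Bounded linear observation operator phi: here, N pointwise evaluations.
idx = np.array([40, 100, 160])
phi_mat = np.eye(n)[idx]                 # shape (N, n)
N = phi_mat.shape[0]

# Nonlinear map F and observed data y, with a small-noise relaxation.
def F(w):
    return np.tanh(w)

y = np.array([0.3, -0.2, 0.5])
sigma = 0.05

# Step 1: sample w = phi(xi) given F(w) ~ y with a random-walk Metropolis
# chain targeting exp(-|y - F(w)|^2 / (2 sigma^2)) * N(w; 0, phi C phi^T).
S = phi_mat @ C @ phi_mat.T              # prior covariance of w
S_inv = np.linalg.inv(S)

def log_post(w):
    r = y - F(w)
    return -0.5 * (r @ r) / sigma**2 - 0.5 * w @ S_inv @ w

w = np.zeros(N)
for _ in range(5000):
    w_prop = w + 0.1 * rng.standard_normal(N)
    if np.log(rng.uniform()) < log_post(w_prop) - log_post(w):
        w = w_prop

# Step 2: xi | phi(xi) = w is Gaussian, via the usual conditioning formulas.
K = C @ phi_mat.T @ S_inv                # representer weights
mean_cond = K @ w
cov_cond = C - K @ phi_mat @ C + 1e-10 * np.eye(n)
xi_sample = rng.multivariate_normal(mean_cond, cov_cond)
```

The finite-dimensional, non-Gaussian part of the computation is confined to Step 1 on \(\mathbb{R}^N\); Step 2 is purely Gaussian conditioning, which is the practical appeal of such a decomposition.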