Approximate inference by broadening the support of the likelihood

Published: 20 Jun 2023, Last Modified: 18 Jul 2023
Venue: AABI 2023
Keywords: approximate Bayesian inference, variational inference, model misspecification, generalized linear models, categorical data, truncated Gaussians, truncated normals
TL;DR: We present a framework for approximate inference on a target observation model via inference on an observation model with broader support.
Abstract: We present a framework for approximate statistical inference on a target observation model $F$ via inference on an observation model $H$ with broader support, for which inference is comparatively easy and efficient. For example, inference is typically easier to derive and implement, and faster to compute, for an independent binary model than for a categorical model, or for an unconstrained model than for a model truncated to some possibly exotic region. If the pair $(F, H)$ is chosen such that the likelihood of $F$ dominates that of $H$, then our framework gives a simple recipe for approximate inference. In the frequentist paradigm, we can substitute the maximum likelihood parameters for $H$ into $F$. In the Bayesian paradigm, we can use the posterior under likelihood $H$ as an approximate posterior under likelihood $F$. We show that this dominated likelihood approximation provably minimizes an upper bound on an error term between the true data-generating distribution and the now tractable model. Experiments on real datasets, fitting a Gaussian mixture model truncated to a union of rectangular regions and fitting a categorical Generalized Linear Model (GLM) via an independent binary approximation, demonstrate the utility of our approach.
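As a concrete illustration of the frequentist recipe in the abstract, the sketch below uses the simplest such pair: the target $F$ is a Gaussian truncated to an interval $[a, b]$ and the broad model $H$ is the unconstrained Gaussian. On the truncation region the truncated density is $f = h/Z$ with normalizer $Z \le 1$, so the likelihood of $F$ dominates that of $H$ as required. This is a minimal sketch of the idea, not the paper's code; the $(F, H)$ pair, the data, the region, and all names are illustrative, and it assumes NumPy and SciPy.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a, b = -1.0, 2.0  # truncation region of the target model F

# Synthetic data assumed to lie in [a, b] (stands in for a real dataset).
true_mu, true_sigma = 0.5, 1.2
x = stats.truncnorm.rvs((a - true_mu) / true_sigma, (b - true_mu) / true_sigma,
                        loc=true_mu, scale=true_sigma, size=500, random_state=rng)

# Step 1: maximum likelihood under the broad model H (unconstrained Gaussian).
# Closed form -- no truncation constant Z(theta) to optimize.
mu_hat, sigma_hat = x.mean(), x.std()

# Step 2: substitute the MLE for H into the target model F (truncated Gaussian)
# and use F for downstream work, e.g. density evaluation or sampling.
F_approx = stats.truncnorm((a - mu_hat) / sigma_hat, (b - mu_hat) / sigma_hat,
                           loc=mu_hat, scale=sigma_hat)
print("mean and density at 0 under approximate F:", F_approx.mean(), F_approx.pdf(0.0))

The Bayesian recipe in the abstract is the same substitution one level up: run posterior inference under $H$ (closed form here for a conjugate normal model) and reuse that posterior as an approximate posterior under $F$.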