Globally Convergent Variational Inference

Published: 27 May 2024, Last Modified: 12 Jul 2024, AABI 2024, CC BY 4.0
Keywords: convex optimization; neural tangent kernel; forward KL divergence
TL;DR: We utilize neural network asymptotics and convexity of the forward KL divergence to show that an amortized inference method minimizing this objective converges asymptotically to a unique global optimum in function space.
Abstract: In variational inference (VI), an approximation of the posterior distribution is selected from a family of distributions through numerical optimization. With the most common variational objective function, known as the evidence lower bound (ELBO), only convergence to a *local* optimum can be guaranteed. In this work, we instead establish the *global* convergence of a particular VI method. This VI method, which may be considered an instance of neural posterior estimation (NPE), minimizes an expectation of the inclusive (forward) KL divergence to fit a variational distribution that is parameterized by a neural network. Our convergence result relies on the neural tangent kernel (NTK) to characterize the gradient dynamics that arise from considering the variational objective in function space. In the asymptotic regime of a fixed, positive-definite NTK, we establish conditions under which the variational objective admits a unique solution in a reproducing kernel Hilbert space (RKHS). Then, we show that the gradient descent dynamics in function space converge to this unique function. We empirically demonstrate that our theoretical results explain the good performance of NPE in non-asymptotic finite-neuron settings.
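
For reference, a minimal sketch of the kind of amortized forward-KL objective described in the abstract, with notation assumed here rather than taken from the submission:

$$
\mathcal{L}(\phi) \;=\; \mathbb{E}_{p(x)}\Big[\mathrm{KL}\big(p(\theta \mid x)\,\|\,q_\phi(\theta \mid x)\big)\Big]
\;=\; -\,\mathbb{E}_{p(\theta, x)}\big[\log q_\phi(\theta \mid x)\big] \;+\; \mathrm{const},
$$

where $q_\phi(\theta \mid x)$ is the variational distribution whose parameters are produced by a neural network with weights $\phi$, and the constant collects terms that do not depend on $\phi$. Under this reading, minimizing the expected inclusive KL amounts to maximum-likelihood fitting of the amortized conditional on samples from the joint $p(\theta, x)$, and the objective is convex in $q_\phi$ viewed as a function, which is the convexity property the global-convergence argument builds on.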
Submission Number: 20