Abstract: We present a meta-algorithm for learning an approximate posterior-inference algorithm for
low-level probabilistic programs that terminate. Our meta-algorithm takes a training set
of probabilistic programs that describe models with observations, and attempts to learn
an efficient method for inferring the posterior of a similar program. A key feature of our
approach is the use of what we call a white-box inference algorithm that extracts information
directly from model descriptions themselves, given as programs. Concretely, our white-box
inference algorithm is equipped with multiple neural networks, one for each type of atomic
command, and computes an approximate posterior of a given probabilistic program by
analysing individual atomic commands in the program using these networks. The parameters
of the networks are learnt from a training set of programs by our meta-algorithm. We
empirically demonstrate that the learnt inference algorithm generalises well to programs
that are new in terms of both parameters and model structures, and report important use
cases where our approach, in combination with importance sampling (IS), achieves greater
test-time efficiency than alternatives such as HMC. The overall results show both the promise of our approach and the challenges that remain.
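To make the white-box idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): a program is a sequence of atomic commands, and each command type is dispatched to its own update function that transforms an approximate-posterior state. In the paper these per-command updates are neural networks with learnt parameters; here they are replaced by hand-written Gaussian updates purely for illustration, and all names (`Sample`, `Observe`, `infer`) are invented.

```python
# Hypothetical sketch of "white-box" inference: walk a probabilistic program
# command by command, dispatching each atomic command to a per-type update
# (a stand-in for the paper's per-command neural networks).

from dataclasses import dataclass

@dataclass
class Sample:      # e.g. x ~ Normal(mu, sigma)
    name: str
    mu: float
    sigma: float

@dataclass
class Observe:     # e.g. observe(Normal(x, sigma), value)
    name: str
    value: float
    sigma: float

def update_sample(state, cmd):
    # Stand-in for the network handling `sample` commands:
    # initialise a Gaussian approximation for the new variable.
    state[cmd.name] = (cmd.mu, cmd.sigma)
    return state

def update_observe(state, cmd):
    # Stand-in for the network handling `observe` commands:
    # exact conjugate-Gaussian update of the variable's approximation.
    mu, sigma = state[cmd.name]
    prec = 1.0 / sigma**2 + 1.0 / cmd.sigma**2
    new_mu = (mu / sigma**2 + cmd.value / cmd.sigma**2) / prec
    state[cmd.name] = (new_mu, (1.0 / prec) ** 0.5)
    return state

# One handler per type of atomic command, mirroring the abstract's
# "multiple neural networks, one for each type of atomic command".
HANDLERS = {Sample: update_sample, Observe: update_observe}

def infer(program):
    state = {}
    for cmd in program:
        state = HANDLERS[type(cmd)](state, cmd)
    return state

# x ~ Normal(0, 1); observe(Normal(x, 1), 2.0)
posterior = infer([Sample("x", 0.0, 1.0), Observe("x", 2.0, 1.0)])
```

In the learnt version, each handler would be a neural network whose parameters are fitted by the meta-algorithm over a training set of programs, and the resulting approximate posterior could serve as a proposal for importance sampling.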
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Swarat_Chaudhuri1
Submission Number: 1196