Neural Approximate Sufficient Statistics for Implicit Models

28 Sep 2020 (modified: 25 Jan 2021) · ICLR 2021 Spotlight · Readers: Everyone
  • Keywords: likelihood-free inference, Bayesian inference, mutual information, representation learning
  • Abstract: We consider the fundamental problem of automatically constructing summary statistics for likelihood-free inference, where evaluating the likelihood function is intractable but sampling / simulating data from the model is possible. The idea is to frame the construction of sufficient statistics as learning a mutual-information-maximizing representation of the data. This representation is computed by a deep neural network trained with a joint statistic-posterior learning strategy. We apply our approach to both traditional approximate Bayesian computation (ABC) and recent neural-likelihood methods, boosting their performance on a wide range of tasks (see the illustrative sketch below).
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
  • One-sentence Summary: We learn low-dimensional near-sufficient statistics via the infomax principle to improve likelihood-free inference methods.
  • Supplementary Material: zip
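The following is a minimal, illustrative sketch (not the authors' released code) of the infomax idea described in the abstract: a summary network is trained to maximize a variational lower bound on the mutual information between its output and the simulator parameters. The toy simulator, network sizes, and the InfoNCE-style contrastive bound used here are all assumptions made for illustration; the paper's actual joint statistic-posterior training strategy may differ.

```python
# Minimal sketch: learn a summary statistic s_phi(x) by maximizing a lower
# bound on the mutual information I(s_phi(x); theta) over simulated
# (theta, x) pairs. All components here are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
W = torch.randn(2, 10) * 0.5  # fixed linear map of a toy implicit model

def simulate(n):
    """Toy stand-in for an implicit model: sample theta from the prior,
    then simulate data x given theta (here, a noisy linear map)."""
    theta = torch.randn(n, 2)
    x = theta @ W + 0.1 * torch.randn(n, 10)
    return theta, x

# Summary network s_phi: data -> low-dimensional candidate statistic.
summary_net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
# Critic scoring (summary, theta) pairs for the contrastive MI bound.
critic = nn.Sequential(nn.Linear(2 + 2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(
    list(summary_net.parameters()) + list(critic.parameters()), lr=1e-3
)

for step in range(1000):
    theta, x = simulate(128)
    s = summary_net(x)
    n = s.shape[0]
    # Score every (s_i, theta_j) pair; matched pairs lie on the diagonal.
    s_rep = s.unsqueeze(1).expand(n, n, -1)
    t_rep = theta.unsqueeze(0).expand(n, n, -1)
    scores = critic(torch.cat([s_rep, t_rep], dim=-1)).squeeze(-1)
    # InfoNCE lower bound on I(s; theta); maximizing it pushes s_phi(x)
    # toward (approximate) sufficiency for theta.
    loss = -(scores.diag() - scores.logsumexp(dim=1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once trained, the learned summaries s_phi(x) can stand in for handcrafted statistics, e.g. as the distance features in an ABC rejection loop or as the conditioning input to a neural likelihood or posterior estimator.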