Scalable Inference in SDEs by Direct Matching of the Fokker–Planck–Kolmogorov Equation

May 21, 2021 (edited Oct 22, 2021) · NeurIPS 2021 Poster
  • Keywords: Stochastic differential equation, approximate inference, neural SDE, Gaussian process
  • TL;DR: Gaussian approximations are a faster and more scalable alternative to stochastic Runge–Kutta methods for SDEs in ML
  • Abstract: Simulation-based techniques such as variants of stochastic Runge–Kutta are the de facto approach for inference with stochastic differential equations (SDEs) in machine learning. These methods are general-purpose and used with parametric and non-parametric models as well as neural SDEs. Stochastic Runge–Kutta relies on sampling schemes that can be inefficient in high dimensions. We address this issue by revisiting the classical SDE literature and deriving direct approximations to the (typically intractable) Fokker–Planck–Kolmogorov equation by matching moments. We show how this workflow is fast, scales to high-dimensional latent spaces, and is applicable to scarce-data applications, where a non-parametric SDE with a driving Gaussian process velocity field specifies the model.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: http://github.com/AaltoML/scalable-inference-in-SDEs
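To illustrate the idea of moment matching on the Fokker–Planck–Kolmogorov equation, here is a minimal sketch (not the paper's implementation; see the linked repository for that). Under a Gaussian assumed-density approximation with linearization of the drift, the first two moments of an SDE dx = f(x) dt + L dβ evolve as dm/dt ≈ f(m) and dP/dt ≈ F P + P Fᵀ + Q, where F is the Jacobian of f and Q = L Qc Lᵀ. The drift `f`, its Jacobian `jac_f`, and the diffusion `Q` below are illustrative assumptions:

```python
import numpy as np

# Example nonlinear drift (pendulum-like dynamics; an assumption for illustration).
def f(x):
    return np.array([x[1], -np.sin(x[0])])

def jac_f(x):
    return np.array([[0.0, 1.0],
                     [-np.cos(x[0]), 0.0]])

Q = 0.1 * np.eye(2)  # diffusion term L Qc L^T, assumed constant

def moment_match_step(m, P, dt):
    """One Euler step of the Gaussian moment ODEs:
       dm/dt ~= f(m),  dP/dt ~= F P + P F^T + Q  (first-order linearization)."""
    F = jac_f(m)
    m_new = m + dt * f(m)
    P_new = P + dt * (F @ P + P @ F.T + Q)
    return m_new, P_new

# Propagate the Gaussian approximation forward in time, no sampling required.
m, P = np.array([0.5, 0.0]), 0.01 * np.eye(2)
for _ in range(1000):
    m, P = moment_match_step(m, P, 1e-3)
```

In contrast to stochastic Runge–Kutta, no Brownian-path samples are drawn: the cost per step is a few matrix products in the state dimension, which is what makes this style of approximation attractive in high-dimensional latent spaces.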