Robust and Scalable SDE Learning: A Functional Perspective

29 Sept 2021, 00:34 (edited 15 Mar 2022) · ICLR 2022 Poster
  • Keywords: SDE Learning, Parallelization, Importance Sampling
  • Abstract: Stochastic differential equations provide a rich class of flexible generative models, capable of describing a wide range of spatio-temporal processes. A host of recent work looks to learn data-representing SDEs, using neural networks and other flexible function approximators. Despite these advances, learning remains computationally expensive due to the sequential nature of SDE integrators. In this work, we propose an importance-sampling estimator for probabilities of observations of SDEs for the purposes of learning. Crucially, the approach we suggest does not rely on such integrators. The proposed method produces lower-variance gradient estimates compared to algorithms based on SDE integrators and has the added advantage of being embarrassingly parallelizable. This facilitates the effective use of large-scale parallel hardware for massive decreases in computation time.
  • One-sentence Summary: We provide an algorithm for estimating the probability of observations of a stochastic process which is significantly faster and more stable than those based on standard integration schemes
  • Supplementary Material: zip
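To make the abstract's contrast concrete, here is an illustrative sketch that is not the paper's algorithm: for an Ornstein–Uhlenbeck process the transition density is available in closed form, so the probability of an observation can be estimated by importance sampling with independent, trivially parallel draws, with no sequential integrator in the loop. All parameter names and values below are hypothetical.

```python
import numpy as np

# Illustrative sketch (not the paper's method): importance-sampling
# estimate of P(X_T in [a, b]) for the Ornstein-Uhlenbeck SDE
#   dX = -theta * X dt + sigma dW,  X_0 = x0.
# The OU transition density at time T is Gaussian with known mean and
# variance, so every Monte Carlo sample is independent of the others --
# an embarrassingly parallel computation with no sequential time stepping.

theta, sigma, x0, T = 1.0, 0.5, 1.0, 2.0  # hypothetical model parameters
a, b = 0.8, 1.2                           # hypothetical observation window

# Closed-form moments of X_T for the OU process.
mean = x0 * np.exp(-theta * T)
var = sigma**2 / (2.0 * theta) * (1.0 - np.exp(-2.0 * theta * T))

rng = np.random.default_rng(0)
n = 100_000

# Proposal q: uniform on [a, b], so all samples land where the
# observation has mass; q(z) = 1 / (b - a).
z = rng.uniform(a, b, size=n)
p = np.exp(-(z - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
w = p * (b - a)            # importance weights p(z) / q(z)
estimate = w.mean()        # Monte Carlo estimate of P(X_T in [a, b])
```

Because each draw of `z` is independent, the `n` weight evaluations can be sharded across devices and averaged, which is the sense in which such estimators scale on parallel hardware; a sequential integrator would instead have to step through time for every sample path.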