Efficient Gradient Flows in Sliced-Wasserstein Space

Published: 14 Nov 2022, Last Modified: 28 Feb 2023. Accepted by TMLR.
Abstract: Minimizing functionals in the space of probability distributions can be done with Wasserstein gradient flows. To solve them numerically, a possible approach is to rely on the Jordan–Kinderlehrer–Otto (JKO) scheme, which is analogous to the proximal scheme in Euclidean spaces. However, it requires solving a nested optimization problem at each iteration, and is known for its computational challenges, especially in high dimension. To alleviate this, very recent works propose to approximate the JKO scheme by leveraging Brenier's theorem and using gradients of Input Convex Neural Networks to parameterize the density (JKO-ICNN). However, this method comes with a high computational cost and stability issues. Instead, this work proposes to use gradient flows in the space of probability measures endowed with the sliced-Wasserstein (SW) distance. We argue that this method is more flexible than JKO-ICNN, since SW enjoys a closed-form differentiable approximation. Hence, the density at each step can be parameterized by any generative model, which alleviates the computational burden and makes the method tractable in higher dimensions.
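To make the key point of the abstract concrete, below is a minimal sketch (not the authors' implementation; the function name and all parameters are illustrative) of the closed-form differentiable Monte Carlo approximation of the sliced-Wasserstein distance: project both samples onto random directions, then use the sorted 1D closed form of the Wasserstein distance on each projection.

```python
import torch

def sliced_wasserstein(x, y, n_projections=50, p=2):
    """Monte Carlo estimate of SW_p^p between two empirical measures,
    given as equal-size samples x, y of shape (n, d). Illustrative sketch."""
    d = x.shape[1]
    # Sample random directions uniformly on the unit sphere S^{d-1}.
    theta = torch.randn(n_projections, d)
    theta = theta / theta.norm(dim=1, keepdim=True)
    # Project both samples onto each direction: shape (n, n_projections).
    x_proj = x @ theta.T
    y_proj = y @ theta.T
    # In 1D, the Wasserstein distance has a closed form: sort each projected
    # sample and compare order statistics.
    x_sorted, _ = torch.sort(x_proj, dim=0)
    y_sorted, _ = torch.sort(y_proj, dim=0)
    # Average over samples and projections; differentiable w.r.t. x and y.
    return (x_sorted - y_sorted).abs().pow(p).mean()
```

Because this estimate is differentiable with respect to the samples, a loss built on it can be backpropagated to the parameters of any generative model producing x, which is what allows each step of the flow to be taken by ordinary stochastic gradient descent rather than by a nested JKO optimization.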
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We changed the contribution section and added a discussion at the end of Section 3 about the minimization approach without regularization.
Code: https://github.com/clbonet/Sliced-Wasserstein_Gradient_Flows
Assigned Action Editor: ~Arnaud_Doucet2
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 285