Approximation, Kernelization, and Entropy-Dissipation of Gradient Flows: from Wasserstein to Fisher-Rao

Published: 03 Oct 2024, Last Modified: 03 Oct 2024. OpenReview Archive Direct Upload. License: CC BY 4.0
Abstract: Motivated by various machine learning applications, we present a principled investigation of gradient flow dissipation geometry, emphasizing Fisher-Rao type gradient flows and their interplay with the Wasserstein space. Using the dynamic Benamou-Brenier formulation, we reveal several precise connections between these flow dissipation geometries and commonly used machine learning tools such as Stein flows, kernel discrepancies, and nonparametric regression. In addition, we present analysis results in terms of Łojasiewicz type functional inequalities, with an explicit threshold condition for a family of entropy dissipations along the Fisher-Rao flows. Finally, we establish rigorous evolutionary Γ-convergence for the Fisher-Rao type gradient flows obtained via regression, justifying the approximation beyond static point-wise convergence.
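For orientation, the two dissipation geometries named in the abstract admit standard dynamic (Benamou-Brenier type) formulations; the following is a sketch of those textbook forms for a generic energy functional $\mathcal{E}[\rho]$, not a statement of this paper's specific results.

```latex
% Wasserstein gradient flow of an energy E[rho]: mass is transported,
%   d/dt rho = div( rho * grad( dE/drho ) ),
% with the dynamic (Benamou-Brenier) distance
%   W_2^2(mu_0, mu_1) = inf over (rho, v) of
%     \int_0^1 \int |v|^2 rho \,dx\,dt
%   subject to the continuity equation  d/dt rho + div(rho v) = 0.
\partial_t \rho = \nabla \cdot \Bigl( \rho \, \nabla \frac{\delta \mathcal{E}}{\delta \rho} \Bigr),
\qquad
W_2^2(\mu_0,\mu_1) = \inf_{(\rho,v)} \int_0^1 \!\!\int |v|^2 \,\rho \,dx\,dt
\ \ \text{s.t.}\ \ \partial_t \rho + \nabla\cdot(\rho v) = 0.

% Fisher-Rao gradient flow: mass is created/destroyed by a reaction rate r,
% with a projection term enforcing conservation of total probability mass:
\partial_t \rho = -\rho \Bigl( \frac{\delta \mathcal{E}}{\delta \rho}
  - \int \frac{\delta \mathcal{E}}{\delta \rho}\, \rho \,dx \Bigr),
\qquad
\mathrm{FR}^2(\mu_0,\mu_1) = \inf_{(\rho,r)} \int_0^1 \!\!\int |r|^2 \,\rho \,dx\,dt
\ \ \text{s.t.}\ \ \partial_t \rho = \rho\, r.
```

The contrast between the continuity-equation constraint (transport) and the reaction constraint (mass change) is what distinguishes the two geometries the paper relates.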