Permutation invariant networks to learn Wasserstein metrics

Published: 31 Oct 2020, Last Modified: 22 Oct 2023. Venue: TDA & Beyond 2020 (Spotlight).
Keywords: Wasserstein metric, deep neural networks
TL;DR: We use a permutation invariant network to encode probability distributions such that Euclidean distances between the encodings respect Wasserstein distances, and explore which properties of the Wasserstein space are learned by the model.
Abstract: Understanding the space of probability measures on a metric space equipped with a Wasserstein distance is one of the fundamental questions in mathematical analysis. The Wasserstein metric has received a lot of attention in the machine learning community, especially for its principled way of comparing distributions. In this work, we use a permutation invariant network to map samples from probability measures into a low-dimensional space such that the Euclidean distance between the encoded samples reflects the Wasserstein distance between the probability measures. We show that our network can generalize to correctly compute distances between unseen densities. We also show that these networks can learn the first and second moments of probability distributions.
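
The following is a minimal sketch of the idea described in the abstract, not the authors' implementation: a DeepSets-style permutation invariant encoder is trained so that Euclidean distances between embeddings match 2-Wasserstein distances between sample sets. For simplicity the sketch assumes 1-D distributions, where the empirical 2-Wasserstein distance has a closed form via sorted samples; the architecture, Gaussian toy data, and hyperparameters are all illustrative assumptions.

```python
# Sketch: permutation invariant encoder whose Euclidean embedding distances
# are regressed onto exact 1-D 2-Wasserstein distances between sample sets.
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """DeepSets-style encoder: phi acts on each sample independently,
    mean-pooling makes the representation order-invariant, and rho maps
    the pooled vector to a low-dimensional embedding."""
    def __init__(self, embed_dim=2, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, embed_dim))

    def forward(self, x):                      # x: (batch, n_samples, 1)
        return self.rho(self.phi(x).mean(dim=1))

def wasserstein2_1d(x, y):
    """Exact 2-Wasserstein distance between two 1-D empirical measures with
    equal sample counts: L2 distance between the sorted samples."""
    xs, _ = torch.sort(x.squeeze(-1), dim=-1)
    ys, _ = torch.sort(y.squeeze(-1), dim=-1)
    return ((xs - ys) ** 2).mean(dim=-1).sqrt()

encoder = SetEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(2000):
    # Toy training pairs: Gaussian sample sets with random means and scales.
    mu, sigma = torch.randn(2, 32, 1, 1), torch.rand(2, 32, 1, 1) + 0.1
    a = mu[0] + sigma[0] * torch.randn(32, 100, 1)
    b = mu[1] + sigma[1] * torch.randn(32, 100, 1)
    target = wasserstein2_1d(a, b)                      # ground-truth W2
    pred = (encoder(a) - encoder(b)).norm(dim=-1)       # Euclidean distance in embedding space
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, embeddings of unseen sample sets can be compared with plain Euclidean distance as a proxy for the Wasserstein distance; inspecting the learned embedding coordinates is one way to probe whether moments of the distributions are recovered.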
Previous Submission: No
Poster: pdf
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2010.05820/code)