Keywords: Optimal Transport, Monge Map, Dynamic Optimal Transport
TL;DR: We employ neural partial optimal transport for subset alignment
Abstract: We propose approaches for static and dynamic neural optimal transport with a relaxed Monge formulation, creating optimal transport maps from a source distribution to an optimized distribution constrained to have an upper-bounded density ratio to the target distribution. In machine learning applications, this allows learning mappings between imbalanced datasets, such that one dataset can be mapped to a reweighted subset of a target dataset, with the reweighting governed by the density ratio constraint. The density ratio is constrained to lie in $[0, c]$ via the $f$-divergence associated with the indicator function of $[0, c]$, where $c$ denotes the maximum allowable upweighting factor. In the static case, neural networks parameterize the Monge map between the source distribution and the selected subset of the target distribution, as well as the dual function for the constraint. In the dynamic case, two networks are likewise employed: the first parametrizes the time-dependent potential whose gradient defines the velocity field and whose terminal value enforces the density ratio constraint, while the second parametrizes the interpolation between samples from the source and the optimized terminal distribution, satisfying both the density ratio bound and the continuity equation. Since the terminal distribution in subset alignment need not equal the target distribution, in contrast to prior work on dynamic neural optimal transport, we explore an efficient sampling scheme guided by the terminal potential. We apply both the static and dynamic formulations to domain translation problems and demonstrate that the relaxed problem yields a more meaningful Monge map when there is a natural alignment between the source and target distributions but the distributions are imbalanced.
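As an illustrative sketch of the formulation described in the abstract (notation assumed here, not taken verbatim from the submission, with $\mu$ the source and $\nu$ the target distribution), the relaxed Monge problem with the density-ratio penalty can be written as:

```latex
% Relaxed Monge formulation (illustrative; \mu = source, \nu = target, T_\#\mu = pushforward of \mu by T):
\min_{T}\; \int \|x - T(x)\|^2 \, d\mu(x) \;+\; D_f\!\left(T_\#\mu \,\middle\|\, \nu\right),
\qquad
D_f(\rho \,\|\, \nu) = \int f\!\left(\frac{d\rho}{d\nu}\right) d\nu,
\quad
f(t) =
\begin{cases}
0, & t \in [0, c],\\
+\infty, & \text{otherwise.}
\end{cases}
```

With this choice of $f$, the divergence term is zero exactly when the density ratio of the mapped distribution to the target lies in $[0, c]$, and infinite otherwise, which enforces the upper-bounded density ratio as a hard constraint.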
Primary Area: generative models
Submission Number: 23836