AlignFlow: Cycle Consistent Learning from Multiple Domains via Normalizing Flows

Published: 03 May 2019, Last Modified: 22 Oct 2023, DeepGenStruct 2019
TL;DR: We propose a learning framework for cross-domain translation whose mappings are exactly cycle-consistent and can be learned via adversarial training, maximum likelihood estimation, or a hybrid of the two.
Abstract: The goal of unpaired cross-domain translation is to learn useful mappings between two domains, given unpaired sets of datapoints from these domains. While this formulation is highly underconstrained, recent work has shown that it is possible to learn mappings useful for downstream tasks by encouraging approximate cycle consistency in the mappings between the two domains [Zhu et al., 2017]. In this work, we propose AlignFlow, a framework for unpaired cross-domain translation that ensures exact cycle consistency in the learned mappings. Our framework uses a normalizing flow model to specify a single invertible mapping between the two domains. In contrast to prior work on cycle-consistent translation, AlignFlow can be learned via adversarial training, maximum likelihood estimation, or a hybrid of the two. Theoretically, we derive consistency results for AlignFlow which guarantee recovery of desirable mappings under suitable assumptions. Empirically, AlignFlow demonstrates significant improvements over relevant baselines on image-to-image translation and unsupervised domain adaptation tasks on benchmark datasets.
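To see why cycle consistency is exact here rather than approximately enforced by a loss: each domain is mapped to a shared latent space Z by an invertible flow, so the composite cross-domain mapping G_{A→B} = G_{Z→B} ∘ G_{Z→A}^{-1} has G_{B→A} as its exact inverse by construction. The following is a minimal sketch of this idea, with toy invertible linear maps standing in for the actual normalizing flows; all class and variable names (InvertibleLinear, G_za, G_zb) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class InvertibleLinear:
    """Toy invertible map standing in for a normalizing flow.
    A real flow (e.g., stacked coupling layers) is likewise invertible."""

    def __init__(self, dim, seed):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(dim, dim))  # invertible with probability 1
        self.b = rng.normal(size=dim)

    def forward(self, z):
        # Z -> X direction: x = W z + b, applied row-wise
        return z @ self.W.T + self.b

    def inverse(self, x):
        # X -> Z direction: solve W z = x - b
        return np.linalg.solve(self.W, (x - self.b).T).T

# Two flows sharing a single latent space Z (hypothetical names).
G_za = InvertibleLinear(dim=4, seed=0)  # G_{Z->A}
G_zb = InvertibleLinear(dim=4, seed=1)  # G_{Z->B}

def a_to_b(x_a):
    # G_{A->B} = G_{Z->B} o G_{A->Z}, where G_{A->Z} = G_{Z->A}^{-1}
    return G_zb.forward(G_za.inverse(x_a))

def b_to_a(x_b):
    # G_{B->A} = G_{Z->A} o G_{B->Z} is the exact inverse of G_{A->B}
    return G_za.forward(G_zb.inverse(x_b))

x_a = np.random.default_rng(2).normal(size=(8, 4))
roundtrip = b_to_a(a_to_b(x_a))
# Exact cycle consistency holds by construction, up to float precision,
# for any parameter values; no cycle-consistency penalty is required.
assert np.allclose(roundtrip, x_a)
print("max round-trip error:", np.abs(roundtrip - x_a).max())
```

In the actual model, each invertible map would be a deep normalizing flow whose parameters are trained adversarially and/or by maximum likelihood; the round-trip identity above holds regardless of the parameter values.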
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:1905.12892/code)