Adversarial Mixup Resynthesizers

Published: 03 May 2019, Last Modified: 22 Oct 2023 · DeepGenStruct 2019
Keywords: autoencoders, adversarial, GANs
TL;DR: We leverage deterministic autoencoders as generative models by proposing mixing functions that combine hidden states from pairs of images. These mixes are made to look realistic through an adversarial framework.
Abstract: In this paper, we explore new approaches to combining the information encoded within the learned representations of autoencoders. We study models that combine the attributes of multiple inputs such that the resynthesised output is trained to fool an adversarial discriminator for real versus synthesised data. Furthermore, we examine the use of such an architecture in the context of semi-supervised learning, where we learn a mixing function whose objective is to produce interpolations of hidden states, or masked combinations of latent representations, that are consistent with a conditioned class label. We show quantitative and qualitative evidence that such a formulation is an interesting avenue of research.
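
To make the mixing-and-resynthesis idea concrete, below is a minimal PyTorch-style sketch of one training step, assuming 32x32 RGB inputs, a simple convolutional encoder/decoder, a mean-squared reconstruction loss, and a binary cross-entropy adversarial loss. The module names, architectures, and hyperparameters are illustrative assumptions rather than the authors' released implementation; the sketch only shows the structure of mixing two latent codes (by interpolation or a Bernoulli mask) and training the decoded mix against a real-versus-synthesised discriminator.

```python
# Minimal, illustrative PyTorch sketch of adversarial latent mixing.
# Architectures, names, and losses are assumptions for illustration,
# not the authors' released code. Inputs are assumed to be 32x32 RGB.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),     # 32x32 -> 16x16
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),   # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),  # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),    # 16x16 -> 32x32
        )

    def forward(self, h):
        return self.net(self.fc(h).view(-1, 128, 8, 8))


class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 1),
        )

    def forward(self, x):
        return self.net(x)


def mix_interpolate(h1, h2):
    """Convex combination of two latent codes with a per-example alpha."""
    alpha = torch.rand(h1.size(0), 1, device=h1.device)
    return alpha * h1 + (1 - alpha) * h2


def mix_mask(h1, h2, p=0.5):
    """Binary-mask mixing: each latent unit is copied from h1 or h2."""
    m = torch.bernoulli(torch.full_like(h1, p))
    return m * h1 + (1 - m) * h2


def training_step(x, enc, dec, disc, opt_g, opt_d):
    # Encode a batch and a shuffled copy, mix their codes, decode the mix.
    h1 = enc(x)
    h2 = enc(x[torch.randperm(x.size(0), device=x.device)])
    x_mix = dec(mix_interpolate(h1, h2))   # or dec(mix_mask(h1, h2))
    x_rec = dec(h1)

    # Discriminator: real images vs. decoded mixes.
    opt_d.zero_grad()
    real_logits = disc(x)
    fake_logits = disc(x_mix.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    d_loss.backward()
    opt_d.step()

    # Encoder/decoder: reconstruct the input and make decoded mixes look real.
    opt_g.zero_grad()
    mix_logits = disc(x_mix)
    g_loss = (F.mse_loss(x_rec, x)
              + F.binary_cross_entropy_with_logits(mix_logits, torch.ones_like(mix_logits)))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()


if __name__ == "__main__":
    enc, dec, disc = Encoder(), Decoder(), Discriminator()
    opt_g = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    x = torch.randn(8, 3, 32, 32)  # stand-in for a batch of real images
    print(training_step(x, enc, dec, disc, opt_g, opt_d))
```

Swapping `mix_interpolate` for `mix_mask` gives the masked-combination variant; the class-conditional, semi-supervised setting described in the abstract would additionally condition the mixing function and discriminator on a class label, which is omitted here for brevity.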
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:1903.02709/code)