Abstract: In this paper, we explore new approaches to combining the information encoded within the learned representations of auto-encoders. We study models that combine the attributes of multiple inputs such that the resynthesised output is trained to fool an adversarial discriminator that distinguishes real from synthesised data. Furthermore, we explore the use of such an architecture in the context of semi-supervised learning, where we learn a mixing function whose objective is to produce interpolations of hidden states, or masked combinations of latent representations, that are consistent with a conditioned class label. We present quantitative and qualitative evidence that this formulation is a promising avenue of research.
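To make the two mixing strategies mentioned above concrete, the sketch below is a minimal NumPy illustration (not code from the paper; the encoder, decoder, and all names are hypothetical stand-ins) of how two latent codes might be combined either by linear interpolation or by a random binary mask before being decoded back to data space.

```python
# Hypothetical sketch: two ways of mixing the latent codes of an
# auto-encoder -- linear interpolation of hidden states and a per-dimension
# binary mask. The encoder/decoder are stand-in linear maps for illustration.

import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 16
INPUT_DIM = 64

# Stand-in encoder/decoder weights (a real model would be a trained network).
W_enc = rng.standard_normal((INPUT_DIM, LATENT_DIM)) / np.sqrt(INPUT_DIM)
W_dec = rng.standard_normal((LATENT_DIM, INPUT_DIM)) / np.sqrt(LATENT_DIM)

def encode(x):
    return x @ W_enc

def decode(h):
    return h @ W_dec

def mix_interpolate(h1, h2, alpha):
    """Convex combination of two latent codes (interpolation of hidden states)."""
    return alpha * h1 + (1.0 - alpha) * h2

def mix_mask(h1, h2, p=0.5):
    """Masked combination: each latent unit is taken from either h1 or h2."""
    m = rng.binomial(1, p, size=h1.shape).astype(h1.dtype)
    return m * h1 + (1.0 - m) * h2

# Two inputs whose attributes we want to combine.
x1 = rng.standard_normal(INPUT_DIM)
x2 = rng.standard_normal(INPUT_DIM)
h1, h2 = encode(x1), encode(x2)

# Resynthesise from the mixed codes; in the full model these outputs would be
# passed to an adversarial discriminator judging real versus synthesised data.
x_interp = decode(mix_interpolate(h1, h2, alpha=rng.uniform()))
x_masked = decode(mix_mask(h1, h2))

print(x_interp.shape, x_masked.shape)  # (64,) (64,)
```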