Linearizing Visual Processes with Deep Generative Models

Sep 27, 2018 · ICLR 2019 Conference Withdrawn Submission
  • Abstract: This work studies the problem of modeling non-linear visual processes by leveraging deep generative architectures to learn linear, Gaussian models of observed sequences. We propose a joint learning framework combining a multivariate autoregressive model with deep convolutional generative networks. After justifying the theoretical assumptions behind linearization, we propose an architecture that allows Variational Autoencoders and Generative Adversarial Networks to simultaneously learn the non-linear observation model and the linear state-transition model from a sequence of observed frames. Finally, we demonstrate our approach on conceptual toy examples and dynamic textures. (A minimal illustrative sketch of this joint setup follows the listing below.)
  • Keywords: Generative Adversarial Network, Variational Autoencoder, Wasserstein GAN, Autoregressive Model, Dynamic Texture, Video
  • TL;DR: We model non-linear visual processes as noise-driven linear autoregressive dynamics learned with deep generative models.
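
A minimal, hypothetical sketch (not the authors' code) of the joint setup described in the abstract: a linear latent state-transition model z_{t+1} = A z_t is trained together with a deep convolutional decoder that renders latent states into frames. Module names, layer sizes, the 64x64 frame resolution, and the toy optimization of per-frame latents are all illustrative assumptions.

```python
# Sketch assuming PyTorch; random data stands in for an observed frame sequence.
import torch
import torch.nn as nn

latent_dim = 16

class LinearDynamics(nn.Module):
    """Linear state-transition model: predicts z_{t+1} = A z_t."""
    def __init__(self, dim):
        super().__init__()
        self.A = nn.Linear(dim, dim, bias=False)

    def forward(self, z):
        return self.A(z)

class FrameDecoder(nn.Module):
    """Deep convolutional generator mapping a latent state to a 64x64 RGB frame."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(z)

dynamics, decoder = LinearDynamics(latent_dim), FrameDecoder(latent_dim)
opt = torch.optim.Adam(
    list(dynamics.parameters()) + list(decoder.parameters()), lr=1e-3
)

# Toy joint training step: per-frame latents are free parameters, optimized so
# decoded frames match the observations and the latents follow the linear dynamics.
frames = torch.rand(10, 3, 64, 64)                    # T x C x H x W placeholder video
z = nn.Parameter(torch.randn(10, latent_dim) * 0.1)
opt.add_param_group({"params": [z]})

recon = ((decoder(z) - frames) ** 2).mean()           # observation (reconstruction) loss
trans = ((dynamics(z[:-1]) - z[1:]) ** 2).mean()      # linear transition loss on latents
loss = recon + trans
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```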