BooVAE: A Scalable Framework for Continual VAE Learning under Boosting Approach

16 Oct 2019 (modified: 06 Dec 2019) · AABI 2019 Symposium Blind Submission · Readers: Everyone
  • Keywords: VAE, incremental learning, continual learning, generative models, variational inference, maxentropy, boosting
  • TL;DR: A novel algorithm for incremental learning of VAEs with a fixed architecture
  • Abstract: Variational Autoencoders (VAEs) are capable of generating realistic images, sounds, and video sequences. From a practitioner's point of view, we are usually interested in solving problems where tasks are learned sequentially, without revisiting all previous data at each stage. We address this problem by introducing a conceptually simple and scalable end-to-end approach that incorporates past knowledge by learning the prior directly from the data. We use a scalable boosting-like approximation of the intractable, theoretically optimal prior. We provide empirical studies on two commonly used benchmarks, MNIST and Fashion MNIST, on disjoint sequential image-generation tasks. On each dataset, the proposed method delivers the best results among comparable approaches, avoiding catastrophic forgetting fully automatically with a fixed model architecture.
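The boosting-like prior update described in the abstract can be sketched in a minimal form: the prior over the latent space is kept as a mixture, and each boosting step adds one new component while down-weighting the old ones, so earlier tasks need not be revisited. This is an illustrative sketch only, assuming a diagonal-Gaussian mixture; the class name, `add_component` signature, and weight schedule are hypothetical and not taken from the paper.

```python
import numpy as np

class MixturePrior:
    """Mixture-of-Gaussians prior over a VAE latent space (hypothetical sketch).

    Components are added greedily, one per boosting step, so the prior can
    absorb new tasks without revisiting data from earlier tasks.
    """

    def __init__(self, latent_dim):
        self.latent_dim = latent_dim
        self.means = []      # per-component means, shape (latent_dim,)
        self.log_vars = []   # per-component diagonal log-variances
        self.weights = []    # mixture weights, kept summing to 1

    def add_component(self, mean, log_var, alpha):
        """Boosting-style update: shrink existing weights by (1 - alpha)
        and give the new component weight alpha."""
        if not self.weights:
            alpha = 1.0  # first component takes all the mass
        self.weights = [w * (1.0 - alpha) for w in self.weights] + [alpha]
        self.means.append(np.asarray(mean, dtype=float))
        self.log_vars.append(np.asarray(log_var, dtype=float))

    def log_prob(self, z):
        """Log-density of the mixture at points z, shape (n, latent_dim)."""
        z = np.atleast_2d(z)
        comp_logs = []
        for w, mu, lv in zip(self.weights, self.means, self.log_vars):
            # diagonal-Gaussian log-density per point
            ll = -0.5 * np.sum(lv + np.log(2 * np.pi)
                               + (z - mu) ** 2 / np.exp(lv), axis=1)
            comp_logs.append(np.log(w) + ll)
        # log-sum-exp over components
        return np.logaddexp.reduce(np.stack(comp_logs, axis=0), axis=0)

# Usage: start a prior for task 1, then boost in a component for task 2.
prior = MixturePrior(latent_dim=2)
prior.add_component(mean=[0.0, 0.0], log_var=[0.0, 0.0], alpha=0.5)
prior.add_component(mean=[3.0, 3.0], log_var=[0.0, 0.0], alpha=0.5)
```

In practice the new component's parameters would be fitted to match the aggregate posterior of the current task (e.g. by maximizing a boosting objective); here they are fixed by hand purely for illustration.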