Keywords: VAE, incremental learning, continual learning, generative models, variational inference, maximum entropy, boosting
TL;DR: A novel algorithm for incremental learning of VAEs with a fixed architecture
Abstract: Variational Autoencoders (VAEs) are capable of generating realistic images, sounds, and video sequences. From a practitioner's point of view, we are usually interested in solving problems where tasks are learned sequentially, in a way that avoids revisiting all previous data at each stage. We address this problem by introducing a conceptually simple and scalable end-to-end approach that incorporates past knowledge by learning the prior directly from the data. We consider a scalable, boosting-like approximation to the intractable, theoretically optimal prior. We provide empirical studies on two commonly used benchmarks, MNIST and Fashion MNIST, on disjoint sequential image generation tasks. On each dataset, the proposed method delivers the best results among comparable approaches, avoiding catastrophic forgetting in a fully automatic way with a fixed model architecture.
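To make the boosting-like prior concrete, below is a minimal PyTorch sketch of the core idea: the prior over latent codes is a mixture whose components are added one at a time as new tasks arrive, with earlier components frozen so past knowledge is preserved. The class and method names (`MixturePrior`, `add_component`) and the fixed mixing weight `alpha` are illustrative assumptions, not the paper's implementation; in BooVAE each new component would additionally be fit to approximate the intractable optimal prior (the aggregated posterior) for the current task.

```python
import torch
import torch.nn as nn


class MixturePrior(nn.Module):
    """Boosting-like mixture prior: components are added greedily, one per
    task; earlier components are frozen so past knowledge is retained."""

    def __init__(self, latent_dim: int):
        super().__init__()
        self.latent_dim = latent_dim
        self.means = nn.ParameterList()
        self.log_vars = nn.ParameterList()
        self.register_buffer("weights", torch.empty(0))

    def add_component(self, alpha: float = 0.5):
        """Freeze existing components and append a new trainable one.
        `alpha` (an assumption in this sketch) is the weight given to the
        new component; old weights are rescaled so the mixture stays
        normalized."""
        for p in list(self.means) + list(self.log_vars):
            p.requires_grad_(False)
        self.means.append(nn.Parameter(torch.zeros(self.latent_dim)))
        self.log_vars.append(nn.Parameter(torch.zeros(self.latent_dim)))
        if self.weights.numel() == 0:
            self.weights = torch.ones(1)
        else:
            self.weights = torch.cat(
                [(1.0 - alpha) * self.weights, torch.tensor([alpha])]
            )

    def log_prob(self, z: torch.Tensor) -> torch.Tensor:
        """log p(z) = logsumexp_k [log w_k + log N(z | mu_k, diag(var_k))],
        used in place of the standard-normal prior term of the ELBO."""
        comps = []
        for w, mu, log_var in zip(self.weights, self.means, self.log_vars):
            dist = torch.distributions.Normal(mu, (0.5 * log_var).exp())
            comps.append(torch.log(w) + dist.log_prob(z).sum(dim=-1))
        return torch.logsumexp(torch.stack(comps, dim=-1), dim=-1)


if __name__ == "__main__":
    prior = MixturePrior(latent_dim=2)
    prior.add_component()             # component for task 1
    z = torch.randn(4, 2)
    print(prior.log_prob(z).shape)    # torch.Size([4]): per-sample log p(z)
    prior.add_component(alpha=0.3)    # task 2: new component gets weight 0.3
    print(prior.log_prob(z).shape)    # still torch.Size([4])
```

Under these assumptions, training would alternate between optimizing the VAE's ELBO on the current task with this prior and calling `add_component` at task boundaries; rescaling the old weights so the mixture remains normalized is what gives the construction its boosting flavor.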
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/boovae-a-scalable-framework-for-continual-vae/code)