Fast Adaptation in Generative Models with Generative Matching Networks

Sergey Bartunov, Dmitry P. Vetrov

Nov 04, 2016 (modified: Jan 13, 2017) ICLR 2017 conference submission readers: everyone
  • Abstract: Despite recent advances, two bottlenecks remain in deep generative models: the need for extensive training and difficulty generalizing from a small number of training examples. Both problems may be addressed by conditional generative models that are trained to adapt the generative distribution to additional input data. So far this idea has been explored only under certain limitations, such as restricting the input data to a single object or to multiple objects representing the same concept. In this work we develop a new class of deep generative models called generative matching networks, inspired by the recently proposed matching networks for one-shot learning in discriminative tasks and by ideas from meta-learning. By conditioning on an additional input dataset, generative matching networks can instantly learn new concepts that were not available during training but conform to a similar generative process, without explicit limitations on the number of additional input objects or the number of concepts they represent. Our experiments on the Omniglot dataset demonstrate that generative matching networks can significantly improve predictive performance on the fly as more data becomes available to the model, and can also adapt the latent space, which is beneficial in the context of feature extraction.
  • TL;DR: A nonparametric conditional generative model with fast small-shot adaptation
  • Conflicts: google.com, company.yandex.ru, hse.ru, cs.hse.ru, msu.ru, skoltech.ru, ccas.ru
  • Keywords: Deep learning, Unsupervised Learning
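The core adaptation mechanism the abstract describes, conditioning the generative distribution on an additional input set via matching, can be sketched with attention over embeddings of the conditioning objects. The sketch below is illustrative only: the `matching_context` helper, the embedding dimensions, and the dot-product similarity are assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def matching_context(query, support):
    """Attention over a conditioning set, in the spirit of matching networks.

    query:   (d,) embedding of the current latent/query
    support: (n, d) embeddings of the n additional input objects
    Returns a (d,) attention-weighted context vector that would then
    condition the generator, so new concepts influence generation
    without any gradient updates.
    """
    scores = support @ query      # similarity of query to each support item
    weights = softmax(scores)     # attention weights, sum to 1
    return weights @ support      # convex combination of support embeddings

rng = np.random.default_rng(0)
support = rng.standard_normal((5, 8))  # 5 conditioning objects, 8-dim embeddings
query = rng.standard_normal(8)
ctx = matching_context(query, support)
print(ctx.shape)  # (8,)
```

Because the context is a convex combination of the support embeddings, adding more conditioning objects changes the generator's input immediately, which matches the "improve predictive performance on the fly" claim in the abstract.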