Recasting Gradient-Based Meta-Learning as Hierarchical Bayes

Anonymous

Nov 03, 2017 (modified: Nov 03, 2017) · ICLR 2018 Conference Blind Submission
  • Abstract: Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task. Bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks. We reformulate the model-agnostic meta-learning algorithm (MAML) by Finn et al. (2017) as a method for probabilistic inference in a hierarchical Bayesian model. In contrast to prior methods for meta-learning via hierarchical Bayes, MAML is naturally applicable to complex function approximators through its use of a scalable gradient descent procedure for posterior inference. Furthermore, the identification of MAML as probabilistic inference provides a way to understand the algorithm’s operation as a meta-learning procedure, as well as an opportunity to make use of computational strategies from Bayesian methods. We use this opportunity to propose an improvement to the MAML algorithm inspired by approximate Bayesian posterior inference, and show increased performance on a few-shot learning benchmark.
  • TL;DR: A specific gradient-based meta-learning algorithm, MAML, is equivalent to an inference procedure in a hierarchical Bayesian model. We use this connection to improve MAML via methods from Bayesian parameter estimation.
  • Keywords: meta-learning, learning to learn, hierarchical Bayes, approximate Bayesian methods
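
For context on the algorithm the abstract builds on, the following is a minimal, self-contained sketch of MAML's bi-level update (inner-loop task adaptation followed by an outer-loop meta-update, as in Finn et al., 2017). It is not code from this submission: the scalar mean-estimation tasks, hyperparameters, and all names are illustrative assumptions chosen so that the exact second-order meta-gradient can be written in closed form.

import random

# Toy MAML sketch (assumed setup, not the paper's experiments).
# Each task j asks the model to estimate a task-specific mean mu_j from a few
# noisy samples; the "model" is a single scalar parameter theta.

ALPHA = 0.1   # inner-loop (task adaptation) step size
BETA = 0.01   # outer-loop (meta-update) step size

def sample_task():
    """A task is a mean mu; support/query sets are noisy observations of mu."""
    mu = random.uniform(-5.0, 5.0)
    support = [mu + random.gauss(0, 1) for _ in range(5)]
    query = [mu + random.gauss(0, 1) for _ in range(5)]
    return support, query

def loss_and_grad(theta, data):
    """Squared-error loss mean((theta - y)^2) and its gradient in theta."""
    loss = sum((theta - y) ** 2 for y in data) / len(data)
    grad = sum(2 * (theta - y) for y in data) / len(data)
    return loss, grad

theta = 0.0
for step in range(1000):
    support, query = sample_task()

    # Inner loop: one gradient step on the support set gives task parameters phi.
    _, g_support = loss_and_grad(theta, support)
    phi = theta - ALPHA * g_support

    # Outer loop: evaluate the adapted parameters on the query set and
    # differentiate through the inner step. For this quadratic loss,
    # d(phi)/d(theta) = 1 - 2 * ALPHA, so the exact meta-gradient is available.
    _, g_query = loss_and_grad(phi, query)
    meta_grad = g_query * (1 - 2 * ALPHA)

    theta -= BETA * meta_grad

print("meta-learned initialization theta:", theta)

The paper's contribution, per the abstract, is to interpret this inner gradient step as approximate posterior inference over task-specific parameters in a hierarchical Bayesian model with shared parameters theta; the sketch above only shows the gradient-based procedure being reinterpreted.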
