Recasting Gradient-Based Meta-Learning as Hierarchical Bayes

15 Feb 2018 (modified: 23 Jan 2023) · ICLR 2018 Conference Blind Submission
Abstract: Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task. Bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks. Here, we reformulate the model-agnostic meta-learning algorithm (MAML) of Finn et al. (2017) as a method for probabilistic inference in a hierarchical Bayesian model. In contrast to prior methods for meta-learning via hierarchical Bayes, MAML is naturally applicable to complex function approximators through its use of a scalable gradient descent procedure for posterior inference. Furthermore, the identification of MAML as hierarchical Bayes provides a way to understand the algorithm’s operation as a meta-learning procedure, as well as an opportunity to make use of computational strategies for efficient inference. We use this opportunity to propose an improvement to the MAML algorithm that makes use of techniques from approximate inference and curvature estimation.
TL;DR: A specific gradient-based meta-learning algorithm, MAML, is equivalent to an inference procedure in a hierarchical Bayesian model. We use this connection to improve MAML via methods from approximate inference and curvature estimation.
Keywords: meta-learning, learning to learn, hierarchical Bayes, approximate Bayesian methods
Data: [mini-Imagenet](https://paperswithcode.com/dataset/mini-imagenet)
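
To make the abstract's central identification concrete, here is a minimal first-order sketch of MAML's two-level structure on a toy linear-regression task distribution. Everything below (function names, the task generator, step sizes) is illustrative rather than the authors' code. Under the paper's reading, the inner gradient step away from the meta-parameters plays the role of truncated MAP inference for the task-specific parameters, with the meta-parameters acting as the shared prior across tasks.

```python
import numpy as np

# Minimal first-order sketch of MAML's two-level structure (illustrative,
# not the authors' code). Model: linear regression y = X w, squared loss.
# Inner loop: a gradient step from the meta-parameters w yields
# task-specific parameters phi (read as truncated MAP inference).
# Outer loop: update w to reduce post-adaptation loss on held-out data.

def loss_and_grad(w, X, y):
    """Mean-squared-error loss and its gradient for a linear model."""
    err = X @ w - y
    return 0.5 * np.mean(err ** 2), X.T @ err / len(y)

def sample_task(rng, n=20, d=5):
    """A toy task: linear data with a task-specific weight vector."""
    w_task = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = X @ w_task + 0.1 * rng.normal(size=n)
    return X[:10], y[:10], X[10:], y[10:]  # support / query split

rng = np.random.default_rng(0)
d, alpha, beta = 5, 0.1, 0.01  # dimension, inner step size, outer step size
w = np.zeros(d)                # meta-parameters (prior shared across tasks)

for step in range(1000):
    Xs, ys, Xq, yq = sample_task(rng)
    # Inner loop: one gradient step from w gives task parameters phi.
    _, g_inner = loss_and_grad(w, Xs, ys)
    phi = w - alpha * g_inner
    # Outer loop: query-set gradient at phi, applied to w
    # (first-order approximation: second-order terms through phi dropped).
    _, g_outer = loss_and_grad(phi, Xq, yq)
    w -= beta * g_outer
```

In the full MAML of Finn et al. (2017), the outer gradient is taken through the inner update, which this sketch omits; the paper's proposed improvement goes further, replacing the point estimate phi with an approximate posterior whose curvature is estimated efficiently, per the techniques from approximate inference and curvature estimation mentioned in the abstract.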