Exploring the Similarity of Representations in Model-Agnostic Meta-Learning

13 Mar 2021 (modified: 05 May 2023) · Learning to Learn 2021
Keywords: meta-learning, few-shot learning, deep learning analysis, representation learning, learning to learn
Abstract: In recent years, model-agnostic meta-learning (MAML) has been one of the most promising approaches in meta-learning. It can be applied to different kinds of problems, e.g., reinforcement learning, and also shows good results on few-shot learning tasks. Despite its tremendous success in these tasks, it has not yet been fully revealed why it works so well. Recent work proposes that MAML reuses features rather than rapidly learning. In this paper, we aim to inspire a deeper understanding of this question by analyzing MAML's representations. We apply representational similarity analysis (RSA), a well-established method in neuroscience, to the few-shot learning instantiation of MAML. Although part of our analysis supports the general finding that feature reuse is predominant, we also reveal arguments against this conclusion: the increase in similarity of layers closer to the input arises from the learning task itself and not from the model, and the inner gradient steps change the representation more broadly than the changes made during meta-training.
TL;DR: Using RSA, we compare the representations obtained by applying MAML to few-shot learning problems.
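
The following is a minimal sketch of the kind of representational similarity analysis (RSA) the abstract refers to, assuming layer activations from two model states (e.g., MAML before and after inner-loop adaptation) are available as NumPy arrays. The function names, the correlation-distance RDM, and the Spearman comparison are illustrative choices, not details taken from the paper.

```python
# Sketch of RSA: build representational dissimilarity matrices (RDMs) from
# layer activations on the same inputs, then correlate their upper triangles.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr


def rdm(activations: np.ndarray) -> np.ndarray:
    """RDM using correlation distance (1 - Pearson r) between all pairs of
    inputs; `activations` has shape (n_inputs, n_features)."""
    return squareform(pdist(activations, metric="correlation"))


def rsa_score(act_a: np.ndarray, act_b: np.ndarray) -> float:
    """Spearman correlation between the upper triangles of the two RDMs."""
    rdm_a, rdm_b = rdm(act_a), rdm(act_b)
    iu = np.triu_indices_from(rdm_a, k=1)
    return spearmanr(rdm_a[iu], rdm_b[iu]).correlation


# Hypothetical example: compare a layer's representation before vs. after
# adaptation on 20 inputs with 64-dimensional features.
rng = np.random.default_rng(0)
before = rng.normal(size=(20, 64))
after = before + 0.1 * rng.normal(size=(20, 64))
print(f"RSA similarity: {rsa_score(before, after):.3f}")
```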