Towards Understanding Generalization in Gradient-Based Meta-Learning

Sep 25, 2019 · ICLR 2020 Conference Withdrawn Submission
  • TL;DR: We study generalization of neural networks in gradient-based meta-learning by analyzing various properties of the objective landscape.
  • Abstract: In this work we study generalization of neural networks in gradient-based meta-learning by analyzing various properties of the objective landscapes. We experimentally demonstrate that as meta-training progresses, the meta-test solutions, obtained by adapting the meta-train solution of the model to new tasks via a few steps of gradient-based fine-tuning, become flatter, lower in loss, and further away from the meta-train solution. We also show that those meta-test solutions become flatter even as generalization starts to degrade, thus providing experimental evidence against the correlation between generalization and flat minima in the paradigm of gradient-based meta-learning. Furthermore, we provide empirical evidence that generalization to new tasks is correlated with the coherence between their adaptation trajectories in parameter space, measured by the average cosine similarity between task-specific trajectory directions starting from the same meta-train solution. We also show that the coherence of meta-test gradients, measured by the average inner product between the task-specific gradient vectors evaluated at the meta-train solution, is likewise correlated with generalization. (A rough code sketch of these two coherence measures is given after this list.)
  • Code: https://github.com/anonymousauthor181/anonymous_repository
  • Keywords: meta-learning, objective landscapes
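
As a rough illustration of the two coherence measures described in the abstract, the sketch below computes (i) the average cosine similarity between per-task adaptation directions taken from the same meta-train solution and (ii) the average inner product between per-task gradient vectors evaluated at that solution. This is a minimal NumPy sketch, not the authors' released code: the function names, the flattened-parameter representation, and the choice of (adapted solution minus meta-train solution) as the trajectory direction are assumptions made for illustration.

```python
import numpy as np


def trajectory_coherence(theta_meta, task_solutions):
    """Average pairwise cosine similarity between task-specific adaptation
    directions, each measured from the same meta-train solution theta_meta.
    (Assumed definition: direction = adapted solution - meta-train solution.)"""
    directions = [sol - theta_meta for sol in task_solutions]
    units = [d / (np.linalg.norm(d) + 1e-12) for d in directions]
    pairs = [(i, j) for i in range(len(units)) for j in range(i + 1, len(units))]
    return float(np.mean([units[i] @ units[j] for i, j in pairs]))


def gradient_coherence(task_gradients):
    """Average pairwise inner product between task-specific gradient vectors,
    all evaluated at the meta-train solution."""
    n = len(task_gradients)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return float(np.mean([task_gradients[i] @ task_gradients[j] for i, j in pairs]))


if __name__ == "__main__":
    # Toy example with random vectors standing in for flattened network parameters.
    rng = np.random.default_rng(0)
    theta = rng.normal(size=100)                                   # meta-train solution
    solutions = [theta + rng.normal(size=100) for _ in range(5)]   # per-task adapted solutions
    grads = [rng.normal(size=100) for _ in range(5)]               # per-task gradients at theta
    print("trajectory coherence:", trajectory_coherence(theta, solutions))
    print("gradient coherence:", gradient_coherence(grads))
```

In practice the parameter and gradient vectors would come from the meta-learned model and its per-task fine-tuning steps; here random vectors are used only so the sketch runs standalone.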