- Student First Author: Yes
- Abstract: The study of generalization of neural networks in gradient-based meta-learning has recently attracted great research interest. Previous work on the study of objective landscapes within the scope of few-shot classification empirically demonstrated that generalization to new tasks might be linked to the average inner product between their respective gradient vectors (Guiroy et al., 2019). Following that work, we study the effect that meta-training has on the learned representation space of the network. Notably, we demonstrate that the global similarity in the representation space, measured by the average inner product between the embeddings of meta-test examples, also correlates with generalization. Based on these observations, we propose a novel model-selection criterion for gradient-based meta-learning and experimentally validate its effectiveness.
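The similarity measure named in the abstract, the average inner product between embeddings, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name and the toy data are hypothetical, and it simply averages the pairwise inner products of embedding vectors (excluding each vector's inner product with itself).

```python
import numpy as np

def average_inner_product(embeddings):
    """Mean inner product over all distinct pairs of embedding vectors.

    embeddings: (n, d) array, one row per meta-test example embedding.
    """
    gram = embeddings @ embeddings.T          # (n, n) matrix of all pairwise inner products
    n = gram.shape[0]
    off_diag = gram[~np.eye(n, dtype=bool)]   # drop self inner products on the diagonal
    return off_diag.mean()

# Toy usage (hypothetical data): embeddings clustered around a common direction
# score higher than independent random embeddings.
rng = np.random.default_rng(0)
base = rng.normal(size=8)
similar = base + 0.1 * rng.normal(size=(5, 8))
dissimilar = rng.normal(size=(5, 8))
print(average_inner_product(similar) > average_inner_product(dissimilar))
```

Under the abstract's claim, a higher average inner product across meta-test embeddings would indicate a model expected to generalize better, which is what makes the quantity usable as a model-selection score.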