A Preliminary Study on the Feature Representations of Transfer Learning and Gradient-Based Meta-Learning Techniques

Published: 10 Dec 2021, Last Modified: 05 May 2023
NeurIPS 2021 Workshop MetaLearn Poster
Readers: Everyone
Keywords: Meta-learning, Few-shot learning, Transfer learning, Deep learning
Abstract: Meta-learning receives considerable attention as an approach that enables deep neural networks to learn from only a few examples. Recent studies suggest that in specific cases, simply fine-tuning a pre-trained network may be more effective at learning new image classification tasks from limited data than more sophisticated meta-learning techniques such as MAML. This is surprising, as the learning behaviour of MAML mimics that of fine-tuning. We investigate this phenomenon and show that the pre-trained features are more diverse and discriminative than those learned by MAML and Reptile, which specialize in fast adaptation in low-data regimes on data distributions similar to the one used for training. Due to this specialization, MAML and Reptile may be hampered in their ability to generalize to out-of-distribution tasks, whereas fine-tuning can fall back on the diversity of its learned features.
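For readers less familiar with the two training regimes contrasted in the abstract, the sketch below shows a second-order MAML meta-update next to a plain fine-tuning loop. It is a minimal illustration on a hypothetical toy family of 1-D linear-regression tasks, not the authors' code or experimental setup; the task distribution, learning rates, and single inner adaptation step are assumptions made for brevity.

```python
# Minimal sketch (not the authors' code): second-order MAML on a hypothetical
# toy family of 1-D linear-regression tasks, followed by plain fine-tuning.
import torch

def sample_task():
    # Each task is y = w_true * x + noise, with its own random slope.
    w_true = torch.randn(1)
    def sample(n=10):
        x = torch.randn(n, 1)
        return x, w_true * x + 0.1 * torch.randn(n, 1)
    return sample

def mse(w, x, y):
    return ((x * w - y) ** 2).mean()

inner_lr, outer_lr, meta_steps = 0.05, 0.01, 200
w = torch.zeros(1, requires_grad=True)  # the meta-learned initialisation

for _ in range(meta_steps):
    sample = sample_task()
    x_s, y_s = sample()  # support set: used for the inner adaptation step
    x_q, y_q = sample()  # query set: evaluates the adapted parameters

    # Inner loop: one gradient step from the shared initialisation, keeping
    # the graph so the update itself is differentiable (second-order MAML).
    grad_s = torch.autograd.grad(mse(w, x_s, y_s), w, create_graph=True)[0]
    w_adapted = w - inner_lr * grad_s

    # Outer loop: the query loss of the adapted weights updates the initialisation.
    meta_grad = torch.autograd.grad(mse(w_adapted, x_q, y_q), w)[0]
    with torch.no_grad():
        w -= outer_lr * meta_grad

# Plain fine-tuning, by contrast, simply takes gradient steps on the new task's
# data from whatever initialisation it is given; there is no outer loop.
new_task = sample_task()
x_new, y_new = new_task()
w_ft = w.detach().clone().requires_grad_(True)
for _ in range(10):
    g = torch.autograd.grad(mse(w_ft, x_new, y_new), w_ft)[0]
    with torch.no_grad():
        w_ft -= inner_lr * g
```

The key difference is the outer loop: MAML backpropagates the query loss through the adaptation step into the shared initialisation, whereas fine-tuning only ever takes gradient steps on the new task's data, which is why the two can end up with quite different feature representations.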
Contribution Process Agreement: Yes
Author Revision Details: We noticed that the description of the second experiment was not sufficiently clear. We have therefore added an image illustrating the process flow and a new results figure that displays our findings more clearly.
Poster Session Selection: Poster session #1 (12:00 UTC+1), Poster session #2 (16:50 UTC+1)
