Few-Shot Learning by Dimensionality Reduction in Gradient Space

Sep 29, 2021 · ICLR 2022 Conference Desk Rejected Submission
  • Keywords: few-shot learning, meta-learning, time series, dimensionality reduction, subspace, preconditioning
  • Abstract: Few-shot learning deals with the problem of learning from limited data. Most research in this field has so far focused on settings where the number of tasks is huge, often unlimited. For many real-world settings, however, this assumption does not hold. In this paper, we demonstrate that conventional few-shot learning methods like MAML have drawbacks in these more realistic few-task settings. We introduce a novel few-shot learning method based on the recent insight that, after a short burn-in period, the training of deep neural networks is mostly restricted to a small subspace of the parameter space. Our method determines this subspace on the meta-training tasks. Subsequently, we restrict learning to this subspace on the meta-test tasks, which regularizes the model in a way that is informed by its learning behavior. We demonstrate that the learned subspaces are indeed meaningful and that our approach is highly task-efficient on a toy dataset from classical mechanics. Furthermore, we achieve a new state of the art on a benchmark dataset for few-shot learning with few tasks. Finally, we propose a novel real-world time-series few-shot learning dataset from the realm of hydrology, on which we demonstrate the strengths of our method when applied to recurrent neural networks.
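The abstract does not spell out how the subspace is determined or how learning is restricted to it. One plausible realization, assumed here purely for illustration, is to collect flattened gradients after the burn-in period, take their top principal directions via an SVD, and project each meta-test gradient onto that basis before applying the update. A minimal NumPy sketch (all names hypothetical, not the paper's actual procedure):

```python
import numpy as np

def estimate_gradient_subspace(gradients, k):
    """Estimate a k-dimensional subspace of parameter space from
    gradients collected after burn-in.

    gradients: (num_steps, num_params) array, one flattened gradient
    per meta-training step. Returns a (k, num_params) orthonormal
    basis given by the top-k right singular vectors.
    """
    _, _, vt = np.linalg.svd(gradients, full_matrices=False)
    return vt[:k]

def project_gradient(grad, basis):
    """Project a flattened gradient onto the learned subspace so that
    meta-test updates never leave it."""
    coords = basis @ grad    # coordinates of grad in the subspace
    return basis.T @ coords  # mapped back into full parameter space

# Hypothetical meta-test adaptation loop:
#   grad = compute_gradient(params, support_set)   # user-supplied
#   params -= lr * project_gradient(grad, basis)
```

Restricting updates to the span of `basis` acts as the learning-informed regularizer the abstract describes: directions the meta-training tasks never used remain frozen at meta-test time.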