Keywords: Self-Supervised Learning, Meta-Learning, Few-Shot Learning, Disentangled Representation, Model Adaptation, Task Diversity, Pre-Training
TL;DR: By using disentangled representations to create a diverse set of self-supervised meta-learning tasks, we can train a model to rapidly adapt to few-shot learning tasks of distinct natures.
Abstract: Meta-learning represents a strong class of approaches for solving few-shot learning tasks. Nonetheless, recent research suggests that simply pre-training a generic encoder can potentially surpass meta-learning algorithms. In this paper, we first discuss the reasons why meta-learning fails to stand out in these few-shot learning experiments, and hypothesize that this is due to a lack of diversity among the few-shot learning tasks. We then propose DRESS, a task-agnostic Disentangled REpresentation-based Self-Supervised meta-learning approach that enables fast model adaptation on highly diverse few-shot learning tasks. Specifically, DRESS leverages disentangled representation learning to construct self-supervised tasks that fuel the meta-training process. We validate the effectiveness of DRESS through experiments on few-shot classification tasks over datasets with multiple factors of variation. Through this paper, we advocate for a re-examination of proper setups for task adaptation studies, and aim to reignite interest in the potential of meta-learning to solve few-shot learning tasks via disentangled representations.
Submission Number: 57