Online Meta-Learning

Published: 17 Apr 2019, Last Modified: 05 May 2023
Venue: LLD 2019
Keywords: meta learning, few-shot learning, online learning
TL;DR: We introduce the online meta learning problem setting to better capture the spirit and practice of continual lifelong learning.
Abstract: A central capability of intelligent systems is the ability to continuously build upon previous experiences to speed up and enhance learning of new tasks. Two distinct research paradigms have studied this question. Meta-learning views this problem as learning a prior over model parameters that is amenable to fast adaptation on a new task, but typically assumes the set of tasks is available together as a batch. In contrast, online (regret-based) learning considers a sequential setting in which problems are revealed one after the other, but conventionally trains only a single model without any task-specific adaptation. This work introduces an online meta-learning setting, which merges ideas from both of the aforementioned paradigms to better capture the spirit and practice of continual lifelong learning. We propose the follow the meta leader (FTML) algorithm, which extends the MAML algorithm to this setting. Theoretically, this work provides an O(log T) regret guarantee for the FTML algorithm. Our experimental evaluation on three different large-scale tasks suggests that the proposed algorithm significantly outperforms alternatives based on traditional online learning approaches.
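The update the abstract describes can be summarized as follow-the-leader applied at the meta level: at each round, the meta-parameters are chosen to do well, after MAML-style task-specific adaptation, on all tasks revealed so far. The sketch below is a minimal toy illustration of that idea, not the paper's implementation or experimental setup; the linear-regression tasks, step sizes, and the use of plain gradient descent on the running meta-objective are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def task_loss(params, data):
    # Mean squared error of a linear model on one task's data.
    x, y = data
    return jnp.mean((x @ params - y) ** 2)

def adapt(params, support, alpha=0.1):
    # MAML-style inner update: one gradient step on the task's support set.
    return params - alpha * jax.grad(task_loss)(params, support)

def meta_objective(params, tasks, alpha=0.1):
    # Follow-the-leader at the meta level: sum of post-adaptation losses
    # over all tasks seen so far (support used to adapt, query to evaluate).
    return sum(task_loss(adapt(params, s, alpha), q) for (s, q) in tasks)

def ftml_round(params, tasks, meta_lr=0.01, meta_steps=100):
    # Approximately minimize the running meta-objective by gradient descent.
    grad_fn = jax.grad(meta_objective)
    for _ in range(meta_steps):
        params = params - meta_lr * grad_fn(params, tasks)
    return params

# Toy usage: tasks arrive one at a time; the meta-parameters are refit to
# the whole history after each new task is revealed.
key = jax.random.PRNGKey(0)
w = jnp.zeros(3)
seen = []
for t in range(5):
    key, k1, k2 = jax.random.split(key, 3)
    true_w = jax.random.normal(k1, (3,))
    x = jax.random.normal(k2, (20, 3))
    y = x @ true_w
    seen.append(((x[:10], y[:10]), (x[10:], y[10:])))  # (support, query)
    w = ftml_round(w, seen)
```

In this setting, regret is naturally measured against the best fixed meta-parameters in hindsight, evaluated after the same adaptation procedure, which is the quantity the abstract's O(log T) guarantee refers to.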