Task Agnostic Continual Learning via Meta Learning

12 Jun 2020 (modified: 05 May 2023) · LifelongML@ICML2020
Student First Author: Yes
TL;DR: Introducing the What and How framework, which applies continual learning at the meta-level in order to resolve conflicts between tasks.
Keywords: Continual Learning, Meta Learning, GAN, Multi-Agent Game
Abstract: Most continual learning approaches implicitly assume that there exists a multi-task solution for the sequence of tasks. In this work, we motivate and discuss realistic scenarios in which this assumption does not hold. We argue that the traditional metric of zero-shot remembering is not appropriate in such settings and, inspired by the meta-learning literature, we focus instead on the speed of remembering previous tasks. A natural approach to this case is to separate the concerns into what task is currently being solved and how that task should be solved. At each step, the What algorithm performs task inference, which allows our framework to operate in the absence of task boundaries. The How algorithm is conditioned on the inferred task, allowing for task-specific behaviour and hence relaxing the assumption of a multi-task solution. From the perspective of meta-learning, our framework deals with a sequential presentation of tasks rather than having access to the distribution of all tasks. We empirically validate the effectiveness of our approach and discuss variations of the proposed algorithm.
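To make the What/How decomposition described above concrete, the sketch below shows one possible reading of it in PyTorch: a What module infers a task representation from the current batch (no task labels or boundaries are given), and a How module is conditioned on that inferred task. All module names, architectures, and dimensions here are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the What/How split: task inference followed by
# a task-conditioned solver. Details are not specified in the abstract.

class WhatModule(nn.Module):
    """Infers a task embedding from the current batch (task inference)."""
    def __init__(self, input_dim: int, task_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 64), nn.ReLU(), nn.Linear(64, task_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pool over the batch so the inferred task code is order-invariant.
        return self.encoder(x).mean(dim=0)


class HowModule(nn.Module):
    """Solves the task, conditioned on the inferred task embedding."""
    def __init__(self, input_dim: int, task_dim: int, output_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim + task_dim, 64), nn.ReLU(), nn.Linear(64, output_dim)
        )

    def forward(self, x: torch.Tensor, task: torch.Tensor) -> torch.Tensor:
        # Broadcast the task code to every example in the batch.
        task = task.unsqueeze(0).expand(x.shape[0], -1)
        return self.net(torch.cat([x, task], dim=-1))


# Usage: task identity is never provided; it is inferred at each step.
what, how = WhatModule(8, 4), HowModule(8, 4, 2)
x = torch.randn(16, 8)        # a batch from the current (unknown) task
task_code = what(x)           # "what" task is being solved
y_pred = how(x, task_code)    # "how" to solve it, task-conditioned
```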