Conditional Meta-Learning of Linear Representations

Published: 31 Oct 2022, Last Modified: 20 Oct 2022
NeurIPS 2022 Accept
Keywords: Conditional Meta-Learning, Linear Representation Learning, Statistical Learning Theory, Online Learning
TL;DR: We propose a conditional meta-learning algorithm that infers linear representations tailored to heterogeneous environments of tasks.
Abstract: Standard meta-learning for representation learning aims to find a common representation to be shared across multiple tasks. The effectiveness of these methods is often limited when the nuances of the tasks’ distribution cannot be captured by a single representation. In this work we overcome this issue by inferring a conditioning function, mapping the tasks’ side information (such as the tasks’ training dataset itself) into a representation tailored to the task at hand. We study environments in which our conditional strategy outperforms standard meta-learning, such as those in which tasks can be organized in separate clusters according to the representation they share. We then propose a meta-algorithm capable of leveraging this advantage in practice. In the unconditional setting, our method yields a new estimator enjoying faster learning rates and requiring fewer hyper-parameters to tune than current state-of-the-art methods. Our results are supported by preliminary experiments.
Supplementary Material: zip
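
To make the clustered-environment claim concrete, here is a minimal, hypothetical sketch in Python/NumPy. It is not the paper's estimator: the side information is simplified to an observed cluster label, the per-task vectors are recovered with a moment estimate, and the representations are estimated by an SVD of stacked task estimates. The names estimate_w, eval_task, and the label-based conditioner are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
d, k, n, T = 20, 3, 40, 100  # input dim, rep. dim, samples/task, tasks/cluster

# Heterogeneous environment: two clusters of tasks, each sharing its
# own ground-truth k-dimensional linear representation.
B_true = [np.linalg.qr(rng.standard_normal((d, k)))[0] for _ in range(2)]

def sample_task(c):
    w = B_true[c] @ rng.standard_normal(k)   # task vector in span(B_true[c])
    X = rng.standard_normal((n, d))
    return X, X @ w + 0.1 * rng.standard_normal(n)

def estimate_w(X, y):
    # Moment estimate of the task vector: E[x y] = w for standard Gaussian x.
    return X.T @ y / len(y)

# Meta-training. Conditional strategy: one representation per cluster,
# taken as the top-k left singular vectors of the stacked task estimates.
# Unconditional strategy: one pooled rank-k representation for all tasks.
S = {c: np.stack([estimate_w(*sample_task(c)) for _ in range(T)], 1) for c in (0, 1)}
B_cond = {c: np.linalg.svd(S[c])[0][:, :k] for c in (0, 1)}
B_pool = np.linalg.svd(np.hstack([S[0], S[1]]))[0][:, :k]

def eval_task(B, c, lam=0.1):
    # Draw a fresh task from cluster c, fit ridge regression in the
    # k-dimensional representation space Z = X B, report held-out MSE.
    w = B_true[c] @ rng.standard_normal(k)
    X = rng.standard_normal((2 * n, d))
    y = X @ w + 0.1 * rng.standard_normal(2 * n)
    Z = X[:n] @ B
    v = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ y[:n])
    return np.mean((X[n:] @ B @ v - y[n:]) ** 2)

# The conditioner maps each task's side information (its cluster label)
# to the matching representation; the pooled baseline cannot.
print("conditional  :", np.mean([eval_task(B_cond[c], c) for c in (0, 1) for _ in range(50)]))
print("unconditional:", np.mean([eval_task(B_pool, c) for c in (0, 1) for _ in range(50)]))

In this toy environment a single rank-k representation cannot span the union of the two clusters' subspaces, so the pooled estimator misses roughly half of each task's signal, while the conditional strategy selects the right subspace per task. The paper's meta-algorithm instead learns the conditioning function from richer side information, such as the task's training dataset itself.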