Meta-learning from sparse recovery

Published: 10 Dec 2021, Last Modified: 05 May 2023 · NeurIPS 2021 Workshop MetaLearn Poster
Keywords: meta-learning, compressed sensing
TL;DR: High-order gradients can be safely ignored (they are bounded) in meta-learning in a regime where the restricted isometry property is satisfied.
Abstract: Meta-learning aims to train a model on various tasks so that, given sample data from a task, even an unforeseen one, it can adapt quickly and perform well. We apply techniques from compressed sensing to shed light on the effect of inner-loop regularization in meta-learning, with an algorithm that minimizes cross-task interference without compromising weight-sharing. In our algorithm, which is representative of numerous similar variations, the model is explicitly trained so that, upon adding a pertinent sparse output layer, it can perform well on a new task with very few updates, where cross-task interference is minimized by sparse recovery of the output layer. We demonstrate that this approach produces good results on few-shot regression, classification, and reinforcement learning, with several benefits in terms of training efficiency, stability, and generalization.
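To make the inner-loop idea concrete, below is a minimal, hypothetical sketch of how a sparse output layer could be recovered on top of frozen meta-learned features using a few ISTA (iterative soft-thresholding) steps. This is an illustration of sparse recovery in the inner loop under assumed names (adapt_sparse_head, soft_threshold) and a least-squares-plus-L1 objective, not the authors' exact algorithm.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def adapt_sparse_head(features, targets, lam=0.1, n_steps=5):
    """Illustrative inner loop: fit a sparse linear output layer on top of
    frozen meta-learned features with a few ISTA steps.

    features: (n_samples, n_features) array from the shared backbone
    targets:  (n_samples,) regression targets for the new task
    """
    n, d = features.shape
    w = np.zeros(d)  # start from an empty (all-zero) output layer
    # Step size from the Lipschitz constant of the least-squares gradient.
    step = 1.0 / (np.linalg.norm(features, 2) ** 2 + 1e-8)
    for _ in range(n_steps):
        grad = features.T @ (features @ w - targets)  # grad of 0.5*||Phi w - y||^2
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Toy usage (synthetic): 10 support samples, 64-dimensional backbone features.
rng = np.random.default_rng(0)
phi = rng.normal(size=(10, 64))
y = phi[:, :3] @ np.array([1.0, -2.0, 0.5])  # only 3 features are relevant
w_hat = adapt_sparse_head(phi, y, lam=0.05, n_steps=20)
print("nonzero weights:", np.count_nonzero(w_hat))
```

Under restricted-isometry-style conditions on the feature matrix, such L1-regularized updates tend to recover a sparse head that uses only the task-relevant features, which is the intuition behind minimizing cross-task interference while keeping the backbone shared.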
Contribution Process Agreement: Yes
Author Revision Details: We modified Figure 6 as suggested by the review. We also added an explanation for Figure 5 as suggested by the review: "Since meta-learning aims for the performance after a few steps of inner-loop tuning, there might be a regime where the performance of a random initialization outperforms that of the meta-network." We modified Eq. 7 as suggested by the review.
Poster Session Selection: Poster session #3 (19:20 UTC+1)