Bootstrapped Meta-Learning

30 Sept 2021 (modified: 05 May 2023) · NeurIPS 2021 Workshop MetaLearn · Contributed Talk
Keywords: meta-learning, meta-gradients, reinforcement-learning, online-learning, few-shot-learning
TL;DR: We propose an algorithm for meta-learning with gradients that bootstraps the meta-learner from itself or another learner.
Abstract: We propose an algorithm for meta-optimization that lets the meta-learner teach itself. The algorithm first bootstraps a target from the meta-learner, then optimises the meta-learner by minimising the distance to that target under some loss. Focusing on meta-learning with gradients, we establish conditions that guarantee performance improvements and show that the improvement is related to the target distance. Thus, by controlling curvature, the distance measure can be used to ease meta-optimization. Further, the bootstrapping mechanism can extend the effective meta-learning horizon without requiring backpropagation through all updates. The algorithm is versatile and easy to implement. We achieve a new state of the art for model-free agents on the Atari ALE benchmark, improve upon MAML in few-shot learning, and demonstrate how our approach opens up new possibilities by meta-learning efficient exploration in an epsilon-greedy Q-learning agent.
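To make the abstract's two-step recipe concrete, the following is a minimal sketch (not the authors' code) of the bootstrapped meta-gradient idea in PyTorch, using a toy linear-regression learner and a single learned step size as the meta-learner. All names (sgd_steps, log_eta, sample_batch, K, L) are illustrative assumptions; the paper's agents use richer meta-parameterisations and, for policies, KL-based matching losses rather than the squared distance used here.

```python
import torch

torch.manual_seed(0)

# Toy inner task: linear regression on a fixed ground-truth weight vector.
w_true = torch.randn(5)

def sample_batch(n=32):
    x = torch.randn(n, 5)
    return x, x @ w_true + 0.1 * torch.randn(n)

def inner_loss(w, batch):
    x, y = batch
    return ((x @ w - y) ** 2).mean()

def sgd_steps(w, eta, batch, n_steps, track_grads):
    """Unroll n_steps of SGD on the inner loss.

    With track_grads=True the update graph is kept, so gradients can
    later flow back into eta (the meta-parameter).
    """
    for _ in range(n_steps):
        (g,) = torch.autograd.grad(inner_loss(w, batch), w,
                                   create_graph=track_grads)
        w = w - eta * g
    return w

# Meta-learner: a single learned (log) step size -- a stand-in for the
# richer meta-learned update rules studied in the paper.
log_eta = torch.zeros((), requires_grad=True)
meta_opt = torch.optim.Adam([log_eta], lr=1e-2)

w = torch.zeros(5, requires_grad=True)  # inner learner's parameters
K, L = 3, 5  # K tracked inner steps, L extra bootstrap steps

for step in range(200):
    batch = sample_batch()
    eta = log_eta.exp()

    # 1) K inner steps under the meta-learner, kept in the graph.
    w_K = sgd_steps(w, eta, batch, K, track_grads=True)

    # 2) Bootstrap: L further steps from a detached copy of w_K; the
    #    detached result serves as a fixed target.
    w_boot = w_K.detach().clone().requires_grad_(True)
    w_target = sgd_steps(w_boot, eta.detach(), batch, L,
                         track_grads=False).detach()

    # 3) Matching loss: distance from the K-step learner to the target.
    match = ((w_K - w_target) ** 2).sum()
    meta_opt.zero_grad()
    match.backward()  # meta-gradient flows only through the K steps
    meta_opt.step()

    # 4) Advance the learner (here: adopt the bootstrapped target).
    w = w_target.clone().requires_grad_(True)

print(f"learned inner step size: {log_eta.exp().item():.3f}")
```

Note how this sketch reflects the abstract's claim about horizons: the meta-gradient is backpropagated through only K updates, yet the target encodes the effect of K + L updates, so the effective meta-learning horizon is extended without backpropagating through all of them.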
Contribution Process Agreement: Yes
Author Revision Details:
- Improved Theorem 1 to give further guidance on the trade-offs involved in choosing the target.
- Added multi-task results to highlight the greater efficiency of BMG.
Poster Session Selection: Poster session #2 (15:00 UTC)