One Step at a Time: Pros and Cons of Multi-Step Meta-Gradient Reinforcement Learning

Published: 10 Dec 2021, Last Modified: 05 May 2023
NeurIPS 2021 Workshop MetaLearn Poster
Keywords: meta-reinforcement learning, meta-gradients
TL;DR: We introduce a mixing multi-step method that successfully trades off bias and variance in meta-gradient estimation for self-tuning in reinforcement learning
Abstract: Self-tuning algorithms that adapt the learning process online encourage more effective and robust learning. Among the available methods, meta-gradients have emerged as a promising approach: they leverage the differentiability of the learning rule with respect to some of its hyper-parameters to adapt those hyper-parameters online. Although meta-gradients can be accumulated over multiple learning steps to avoid myopic updates, this is rarely done in practice. In this work, we demonstrate that whilst multi-step meta-gradients do provide a better learning signal in expectation, this comes at the cost of a significant increase in variance, hindering performance. In light of this analysis, we introduce a novel method that mixes multiple inner steps and enjoys a more accurate and robust meta-gradient signal, essentially trading off bias and variance in meta-gradient estimation. When applied to the Snake game, the mixing meta-gradient algorithm cuts the variance by a factor of 3 while achieving similar or higher performance.
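For intuition, here is a minimal sketch in JAX of a mixed multi-step meta-gradient estimator. The toy objectives `inner_loss` and `outer_loss`, the update rule `inner_update`, and the fixed mixing weights are illustrative assumptions rather than the paper's actual losses or algorithm; the sketch only shows how meta-gradients computed after unrolls of different lengths can be combined into a single estimate.

```python
# A minimal sketch (not the authors' code) of mixing multi-step meta-gradients.
# `inner_loss`, `outer_loss`, `inner_update`, and the mixing weights are
# illustrative assumptions chosen to keep the example self-contained.
import jax
import jax.numpy as jnp

def inner_loss(theta, eta, batch):
    # Toy inner objective; eta acts as a differentiable hyper-parameter
    # of the learning rule (here, a regularization strength).
    x, y = batch
    return jnp.mean((x @ theta - y) ** 2) + eta * jnp.sum(theta ** 2)

def inner_update(theta, eta, batch, lr=0.1):
    # One differentiable step of the inner learning rule.
    grads = jax.grad(inner_loss)(theta, eta, batch)
    return theta - lr * grads

def outer_loss(theta, batch):
    # Validation-style objective used to evaluate the adapted parameters.
    x, y = batch
    return jnp.mean((x @ theta - y) ** 2)

def mixed_meta_grad(theta, eta, batches, val_batch, weights):
    # Unroll k inner steps, take the meta-gradient of the outer loss with
    # respect to eta, and mix the estimates across unroll lengths k.
    def loss_after_k_steps(eta, k):
        th = theta
        for b in batches[:k]:
            th = inner_update(th, eta, b)
        return outer_loss(th, val_batch)

    metas = [jax.grad(loss_after_k_steps)(eta, k)
             for k in range(1, len(batches) + 1)]
    return sum(w * g for w, g in zip(weights, metas))

key = jax.random.PRNGKey(0)
xs = jax.random.normal(key, (4, 8, 3))   # 4 batches of 8 examples
ys = jnp.sum(xs, axis=-1)
batches = [(xs[i], ys[i]) for i in range(3)]
val_batch = (xs[3], ys[3])

theta = jnp.zeros(3)
eta = jnp.array(0.01)
g = mixed_meta_grad(theta, eta, batches, val_batch, weights=[0.5, 0.3, 0.2])
eta = eta - 0.001 * g                     # online hyper-parameter update
print(g)
```

The design intuition matches the abstract: a 1-step meta-gradient is low-variance but myopic (biased), while longer unrolls reduce that bias at the cost of variance, so a fixed convex mixture over unroll lengths interpolates between the two regimes.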
