On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning

Published: 31 Oct 2022, 18:00 · Last Modified: 21 Oct 2022, 04:25 · NeurIPS 2022 Accept
Keywords: Reinforcement Learning, Fine-tuning, Multi-task Learning, Meta Reinforcement Learning, Meta-RL
TL;DR: Multi-task pretraining followed by fine-tuning on novel tasks performs as well as, or better than, common meta-RL baselines.
Abstract: Intelligent agents should be able to leverage knowledge from previously learned tasks in order to learn new ones quickly and efficiently. Meta-learning approaches have emerged as a popular solution to achieve this. However, meta-reinforcement learning (meta-RL) algorithms have thus far been restricted to simple environments with narrow task distributions and have seen limited success. Meanwhile, the paradigm of pretraining followed by fine-tuning to adapt to new tasks has emerged as a simple yet effective solution in supervised learning. This calls into question whether the benefits of meta-learning, which typically come at the cost of high complexity, carry over to reinforcement learning. We therefore investigate meta-RL approaches on a variety of vision-based benchmarks, including Procgen, RLBench, and Atari, where evaluations are made on completely novel tasks. Our findings show that when meta-learning approaches are evaluated on different tasks (rather than different variations of the same task), multi-task pretraining with fine-tuning on new tasks performs as well as, or better than, meta-pretraining with meta test-time adaptation. This is encouraging for future research, as multi-task pretraining tends to be simpler and computationally cheaper than meta-RL. Based on these findings, we advocate evaluating future meta-RL methods on more challenging tasks and including multi-task pretraining with fine-tuning as a simple yet strong baseline.
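
To make the baseline concrete, below is a minimal, self-contained sketch of the pretrain-then-fine-tune recipe the abstract describes. This is not the paper's code: the bandit task distribution, softmax policy, and REINFORCE update (make_task, reinforce_step) are hypothetical stand-ins for the vision-based benchmarks and policy-gradient learners actually used. The point is the structure: ordinary multi-task training of a single shared policy, then ordinary gradient steps on a completely novel task, with no meta-objective or test-time adaptation machinery.

# Minimal sketch (not the paper's code) of multi-task pretraining
# followed by fine-tuning. Each "task" is a multi-armed bandit whose
# best arm differs; the policy is a softmax over arm logits trained
# with plain REINFORCE. All names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_ARMS = 5

def make_task():
    """A task = bandit with one high-reward arm (its position varies by task)."""
    means = rng.uniform(0.0, 0.3, size=N_ARMS)
    means[rng.integers(N_ARMS)] = 1.0
    return means

def reinforce_step(logits, means, lr=0.1):
    """One REINFORCE update of the softmax policy on a given task."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    arm = rng.choice(N_ARMS, p=probs)
    reward = rng.normal(means[arm], 0.1)
    grad = -probs
    grad[arm] += 1.0  # d log pi(arm) / d logits for a softmax policy
    return logits + lr * reward * grad

# Multi-task pretraining: one shared policy trained across training tasks.
train_tasks = [make_task() for _ in range(8)]
logits = np.zeros(N_ARMS)
for _ in range(2000):
    logits = reinforce_step(logits, train_tasks[rng.integers(len(train_tasks))])

# Fine-tuning: adapt the pretrained policy to a completely novel task
# with ordinary gradient steps -- no meta-learned adaptation procedure.
novel_task = make_task()
finetuned = logits.copy()
for _ in range(200):
    finetuned = reinforce_step(finetuned, novel_task)

best = int(np.argmax(novel_task))
probs = np.exp(finetuned - finetuned.max())
probs /= probs.sum()
print(f"P(best arm after fine-tuning) = {probs[best]:.2f}")

A meta-RL method would instead optimize the pretraining phase explicitly for fast adaptation (e.g., through a learned adaptation rule); the paper's finding is that, on sufficiently distinct evaluation tasks, the simpler recipe above is competitive with that added machinery.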