On the Effectiveness of Fine-tuning Versus Meta-RL for Robot Manipulation

Published: 17 Nov 2022, Last Modified: 05 May 2023
PRL 2022 Poster
Keywords: Multi-task Pretraining, Meta-RL, Vision-based robot manipulation
TL;DR: On vision-based, sparse-reward robot manipulation tasks, multi-task pretraining followed by fine-tuning on novel tasks performs as well as meta-pretraining with meta-adaptation.
Abstract: It is often said that robots should be able to leverage knowledge from previously learned tasks in order to learn new ones quickly and efficiently. Meta-learning approaches have emerged as a popular solution for achieving this. However, these approaches have mainly been studied in either supervised learning settings or in full-state reinforcement learning settings with shaped rewards and narrow task distributions. Moreover, the necessity of meta-learning over simpler pretraining setups has been called into question within the supervised learning domain. We investigate meta-learning approaches in a vision-based, sparse-reward robot manipulation setting, where evaluations are made on completely novel tasks. Our findings show that, when meta-learning approaches are evaluated on different tasks (rather than different variations of the same task), multi-task pretraining with fine-tuning on new tasks performs as well as meta-pretraining with meta test-time adaptation. This is both enlightening and encouraging for future research in pretraining for robot learning, as multi-task learning tends to be simpler and computationally cheaper than meta-reinforcement learning.
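
Since the abstract contrasts two pretraining-and-adaptation pipelines, the following is a minimal, self-contained toy sketch of that contrast. It is not the paper's vision-based manipulation setup: the multi-armed bandit tasks, the Reptile-style first-order meta-update standing in for meta-RL, and all function names here are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
N_ARMS = 5

def sample_task():
    # A "task" is a bandit whose arm means differ; one arm is clearly best.
    means = rng.normal(0.0, 0.1, N_ARMS)
    means[rng.integers(N_ARMS)] += 1.0
    return means

def policy_grad_step(theta, means, lr=0.5, n=64):
    # One REINFORCE update of a softmax policy over arms.
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    arms = rng.choice(N_ARMS, size=n, p=probs)
    rewards = rng.normal(means[arms], 0.1)
    grad = np.zeros(N_ARMS)
    for a, r in zip(arms, rewards):
        grad += r * (np.eye(N_ARMS)[a] - probs)  # r * grad log pi(a)
    return theta + lr * grad / n

def pretrain(paradigm, n_iters=500, inner_steps=5, meta_lr=0.2):
    theta = np.zeros(N_ARMS)
    for _ in range(n_iters):
        task = sample_task()
        if paradigm == "multitask":
            # Multi-task pretraining: one shared policy, plain updates
            # across the training task distribution.
            theta = policy_grad_step(theta, task)
        else:
            # Reptile-style meta-pretraining: adapt on the task, then move
            # the initialization toward the adapted parameters.
            adapted = theta.copy()
            for _ in range(inner_steps):
                adapted = policy_grad_step(adapted, task)
            theta += meta_lr * (adapted - theta)
    return theta

def adapt_and_eval(theta, task, n_steps=5):
    # Test-time adaptation on a held-out novel task (fine-tuning in both
    # paradigms), then report the policy's expected reward.
    for _ in range(n_steps):
        theta = policy_grad_step(theta, task)
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    return probs @ task

for paradigm in ("multitask", "meta"):
    theta = pretrain(paradigm)
    scores = [adapt_and_eval(theta.copy(), sample_task()) for _ in range(50)]
    print(paradigm, round(float(np.mean(scores)), 3))

Both paradigms are given the same test-time adaptation budget on novel tasks, mirroring the evaluation protocol the abstract describes; only the pretraining objective differs.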