Don't Freeze Your Embedding: Lessons from Policy Finetuning in Environment Transfer

04 Mar 2022, 07:18 (modified: 16 Apr 2022, 00:49) — ICLR 2022 GPL Poster
Keywords: Transfer, Finetuning, Evaluation
TL;DR: We analyze the performance of finetuning only part of a policy network and introduce an emerging evaluation framework for multitask environment suites.
Abstract: A common practice in reinforcement learning (RL) research is to use a pretrained vision stack that converts image observations to latent vectors. Using a visual embedding in this way leaves an open question: should the vision stack be updated along with the policy? In this work, we evaluate the effectiveness of such decisions in RL transfer settings. We introduce policy update formulations for use after pretraining in a different environment and analyze their performance. Through this evaluation, we also detail emergent metrics of benchmark suites and present results on Atari and AndroidEnv.