Don't Freeze Your Embedding: Lessons from Policy Finetuning in Environment Transfer

Published: 23 Apr 2022, Last Modified: 05 May 2023 | ALOE@ICLR2022 Spotlight | Readers: Everyone
Keywords: Transfer, Finetuning, Evaluation
TL;DR: We analyze the performance of finetuning only part of a policy network and introduce an emergent evaluation framework for multitask environment suites.
Abstract: A common practice in reinforcement learning (RL) research is to use a pretrained vision stack that converts image observations into latent vectors. Using a visual embedding in this way leaves an open question: should the vision stack be updated along with the policy? In this work, we evaluate the effectiveness of such decisions in RL transfer settings. We introduce policy-update formulations for use after pretraining in a different environment and analyze their performance. Through this evaluation, we also detail emergent metrics of benchmark suites and present results on Atari and AndroidEnv.
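For intuition, the following is a minimal sketch (not the paper's code) of the design choice the abstract describes: a pretrained visual encoder feeding a policy head, where the encoder is either frozen or finetuned jointly with the policy during environment transfer. The class names, architecture, checkpoint path, and hyperparameters are hypothetical placeholders.

```python
# Hypothetical sketch: frozen vs. finetuned visual embedding under a policy.
# Assumes 84x84 RGB observations; architecture and names are illustrative only.
import torch
import torch.nn as nn

class PretrainedEncoder(nn.Module):
    """Stand-in for a pretrained vision stack mapping images to latent vectors."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc = nn.Linear(64 * 9 * 9, latent_dim)  # 9x9 spatial map for 84x84 input

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(obs))

class PolicyHead(nn.Module):
    """Small policy head operating on the encoder's latent vector."""
    def __init__(self, latent_dim: int = 256, n_actions: int = 18):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

encoder, head = PretrainedEncoder(), PolicyHead()
# encoder.load_state_dict(torch.load("pretrained_encoder.pt"))  # hypothetical checkpoint

freeze_encoder = False  # the choice under study: freeze the embedding or update it with the policy
if freeze_encoder:
    for p in encoder.parameters():
        p.requires_grad = False
    trainable_params = list(head.parameters())
else:
    trainable_params = list(encoder.parameters()) + list(head.parameters())

optimizer = torch.optim.Adam(trainable_params, lr=3e-4)

# Example forward pass: logits over actions for a batch of observations.
obs = torch.randn(8, 3, 84, 84)
logits = head(encoder(obs))
```

The only difference between the two settings in this sketch is which parameter group the optimizer receives; the forward pass is identical, which is what makes the frozen-versus-finetuned comparison a controlled one.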