Pre-trained Word Embeddings for Goal-conditional Transfer Learning in Reinforcement Learning

Jun 12, 2020 (edited Jul 17, 2020) · ICML 2020 Workshop LaReL Blind Submission
  • Abstract: Reinforcement learning (RL) algorithms typically start tabula rasa, without any prior knowledge of the environment and without any prior skills. This, however, often leads to low sample efficiency, requiring a large amount of interaction with the environment. The problem is especially acute in a lifelong learning setting, in which the agent needs to continually extend its capabilities. In this paper, we examine how a pre-trained, task-independent language model can make a goal-conditional RL agent more sample efficient by facilitating transfer learning between related tasks. We experimentally demonstrate our approach on a set of object navigation tasks.
  • TL;DR: We examine how a pre-trained, task-independent language model can make a goal-conditional RL agent more sample efficient.
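
The abstract's core idea, conditioning a policy on a goal encoded with frozen, pre-trained word embeddings so that related goals share representation and can transfer, can be illustrated with a minimal PyTorch sketch. Everything below is an illustrative assumption rather than the authors' implementation: the vocabulary, dimensions, mean-pooled goal encoder, and class names are hypothetical, and the random stand-in vectors would in practice be replaced by pre-trained embeddings such as GloVe.

    # Minimal sketch (not the authors' code): a goal-conditional policy whose
    # textual goal ("go to the red ball") is encoded with frozen pre-trained
    # word embeddings, so related goals map to nearby goal representations.
    import torch
    import torch.nn as nn

    VOCAB = {"go": 0, "to": 1, "the": 2, "red": 3, "ball": 4, "blue": 5, "key": 6}
    EMB_DIM, OBS_DIM, N_ACTIONS = 50, 128, 7

    # Stand-in for pre-trained vectors (e.g., GloVe); random here only so the
    # sketch runs self-contained. freeze=True keeps them task-independent.
    pretrained_vectors = torch.randn(len(VOCAB), EMB_DIM)

    class GoalConditionalPolicy(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding.from_pretrained(pretrained_vectors, freeze=True)
            self.pi = nn.Sequential(
                nn.Linear(OBS_DIM + EMB_DIM, 256), nn.ReLU(),
                nn.Linear(256, N_ACTIONS),
            )

        def forward(self, obs, goal_token_ids):
            # Mean-pool the word embeddings of the goal instruction.
            g = self.embed(goal_token_ids).mean(dim=1)
            return self.pi(torch.cat([obs, g], dim=-1))  # action logits

    # Usage: an unseen but related goal ("go to the blue key") reuses the same
    # frozen embedding space, which is what enables transfer between tasks.
    policy = GoalConditionalPolicy()
    obs = torch.randn(1, OBS_DIM)
    goal = torch.tensor([[VOCAB[w] for w in "go to the blue key".split()]])
    logits = policy(obs, goal)

Because the embedding table is frozen, only the policy head is trained with RL; the semantic similarity between, say, "red ball" and "blue ball" comes for free from the pre-training corpus rather than from environment interaction.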